cmdutil: drop aliases for logcmdutil functions (API)...
Yuya Nishihara
r35962:c8e2d6ed default
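
This change drops the transitional cmdutil aliases for helpers that moved to logcmdutil, and updates the in-tree extensions below to call logcmdutil directly. A minimal migration sketch for out-of-tree extension code, assuming the usual ui, repo, opts, tmpl and mapfile values as used in the hunks below (the call sites keep the same arguments):

    # before: transitional aliases in cmdutil (removed by this change)
    #   displayer = cmdutil.show_changeset(ui, repo, opts)
    #   tmpl = cmdutil.makelogtemplater(ui, repo, tmpl)
    #   spec = cmdutil.logtemplatespec(tmpl, mapfile)
    #   t = cmdutil.changeset_templater(ui, repo, spec, False, None, False)

    # after: call logcmdutil directly
    from mercurial import logcmdutil

    displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
    tmpl = logcmdutil.maketemplater(ui, repo, tmpl)
    spec = logcmdutil.templatespec(tmpl, mapfile)
    t = logcmdutil.changesettemplater(ui, repo, spec, False, None, False)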

The requested changes are too big and the content was truncated.

@@ -1,1125 +1,1125 @@
1 1 # bugzilla.py - bugzilla integration for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 # Copyright 2011-4 Jim Hague <jim.hague@acm.org>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''hooks for integrating with the Bugzilla bug tracker
10 10
11 11 This hook extension adds comments on bugs in Bugzilla when changesets
12 12 that refer to bugs by Bugzilla ID are seen. The comment is formatted using
13 13 the Mercurial template mechanism.
14 14
15 15 The bug references can optionally include an update for Bugzilla of the
16 16 hours spent working on the bug. Bugs can also be marked fixed.
17 17
18 18 Four basic modes of access to Bugzilla are provided:
19 19
20 20 1. Access via the Bugzilla REST-API. Requires bugzilla 5.0 or later.
21 21
22 22 2. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.
23 23
24 24 3. Check data via the Bugzilla XMLRPC interface and submit bug change
25 25 via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.
26 26
27 27 4. Writing directly to the Bugzilla database. Only Bugzilla installations
28 28 using MySQL are supported. Requires Python MySQLdb.
29 29
30 30 Writing directly to the database is susceptible to schema changes, and
31 31 relies on a Bugzilla contrib script to send out bug change
32 32 notification emails. This script runs as the user running Mercurial,
33 33 must be run on the host with the Bugzilla install, and requires
34 34 permission to read Bugzilla configuration details and the necessary
35 35 MySQL user and password to have full access rights to the Bugzilla
36 36 database. For these reasons this access mode is now considered
37 37 deprecated, and will not be updated for new Bugzilla versions going
38 38 forward. Only adding comments is supported in this access mode.
39 39
40 40 Access via XMLRPC needs a Bugzilla username and password to be specified
41 41 in the configuration. Comments are added under that username. Since the
42 42 configuration must be readable by all Mercurial users, it is recommended
43 43 that the rights of that user are restricted in Bugzilla to the minimum
44 44 necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.
45 45
46 46 Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
47 47 email to the Bugzilla email interface to submit comments to bugs.
48 48 The From: address in the email is set to the email address of the Mercurial
49 49 user, so the comment appears to come from the Mercurial user. In the event
50 50 that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
51 51 user, the email associated with the Bugzilla username used to log into
52 52 Bugzilla is used instead as the source of the comment. Marking bugs fixed
53 53 works on all supported Bugzilla versions.
54 54
55 55 Access via the REST-API needs either a Bugzilla username and password
56 56 or an apikey specified in the configuration. Comments are made under
57 57 the given username or the user associated with the apikey in Bugzilla.
58 58
59 59 Configuration items common to all access modes:
60 60
61 61 bugzilla.version
62 62 The access type to use. Values recognized are:
63 63
64 64 :``restapi``: Bugzilla REST-API, Bugzilla 5.0 and later.
65 65 :``xmlrpc``: Bugzilla XMLRPC interface.
66 66 :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
67 67 :``3.0``: MySQL access, Bugzilla 3.0 and later.
68 68 :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
69 69 including 3.0.
70 70 :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
71 71 including 2.18.
72 72
73 73 bugzilla.regexp
74 74 Regular expression to match bug IDs for update in changeset commit message.
75 75 It must contain one "()" named group ``<ids>`` containing the bug
76 76 IDs separated by non-digit characters. It may also contain
77 77 a named group ``<hours>`` with a floating-point number giving the
78 78 hours worked on the bug. If no named groups are present, the first
79 79 "()" group is assumed to contain the bug IDs, and work time is not
80 80 updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
81 81 ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
82 82 variations thereof, followed by an hours number prefixed by ``h`` or
83 83 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
84 84
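As an illustration only (not part of the extension), here is how the default ``bugzilla.regexp`` defined in the configitem defaults further down extracts the two groups; the sample commit message is invented, and the hook splits the ``<ids>`` text on non-digits in the same way::

  import re

  bug_re = re.compile(
      r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
      r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
      r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?',
      re.IGNORECASE)

  m = bug_re.search('Silence warnings. Bugs 1234 and 5678 hours 1.5')
  re.split(r'\D+', m.group('ids').strip())   # ['1234', '5678']
  float(m.group('hours'))                    # 1.5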
85 85 bugzilla.fixregexp
86 86 Regular expression to match bug IDs for marking fixed in changeset
87 87 commit message. This must contain a "()" named group ``<ids>`` containing
88 88 the bug IDs separated by non-digit characters. It may also contain
89 89 a named group ``<hours>`` with a floating-point number giving the
90 90 hours worked on the bug. If no named groups are present, the first
91 91 "()" group is assumed to contain the bug IDs, and work time is not
92 92 updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
93 93 ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
94 94 variations thereof, followed by an hours number prefixed by ``h`` or
95 95 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
96 96
97 97 bugzilla.fixstatus
98 98 The status to set a bug to when marking fixed. Default ``RESOLVED``.
99 99
100 100 bugzilla.fixresolution
101 101 The resolution to set a bug to when marking fixed. Default ``FIXED``.
102 102
103 103 bugzilla.style
104 104 The style file to use when formatting comments.
105 105
106 106 bugzilla.template
107 107 Template to use when formatting comments. Overrides style if
108 108 specified. In addition to the usual Mercurial keywords, the
109 109 extension specifies:
110 110
111 111 :``{bug}``: The Bugzilla bug ID.
112 112 :``{root}``: The full pathname of the Mercurial repository.
113 113 :``{webroot}``: Stripped pathname of the Mercurial repository.
114 114 :``{hgweb}``: Base URL for browsing Mercurial repositories.
115 115
116 116 Default ``changeset {node|short} in repo {root} refers to bug
117 117 {bug}.\\ndetails:\\n\\t{desc|tabindent}``
118 118
119 119 bugzilla.strip
120 120 The number of path separator characters to strip from the front of
121 121 the Mercurial repository path (``{root}`` in templates) to produce
122 122 ``{webroot}``. For example, a repository with ``{root}``
123 123 ``/var/local/my-project`` with a strip of 3 gives a value for
124 124 ``{webroot}`` of ``my-project``. Default 0.
125 125
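As an illustration only (paths invented), the stripping mirrors the ``webroot()`` helper defined later in this file; with ``strip=5``, as in the example configurations below::

  root = '/var/local/hg/repos/my-project'
  for _ in range(5):
      # drop everything up to and including the next path separator
      root = root.split('/', 1)[1] if '/' in root else root
  # root is now 'my-project' and is substituted as {webroot}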
126 126 web.baseurl
127 127 Base URL for browsing Mercurial repositories. Referenced from
128 128 templates as ``{hgweb}``.
129 129
130 130 Configuration items common to XMLRPC+email and MySQL access modes:
131 131
132 132 bugzilla.usermap
133 133 Path of file containing Mercurial committer email to Bugzilla user email
134 134 mappings. If specified, the file should contain one mapping per
135 135 line::
136 136
137 137 committer = Bugzilla user
138 138
139 139 See also the ``[usermap]`` section.
140 140
141 141 The ``[usermap]`` section is used to specify mappings of Mercurial
142 142 committer email to Bugzilla user email. See also ``bugzilla.usermap``.
143 143 Contains entries of the form ``committer = Bugzilla user``.
144 144
145 145 XMLRPC and REST-API access mode configuration:
146 146
147 147 bugzilla.bzurl
148 148 The base URL for the Bugzilla installation.
149 149 Default ``http://localhost/bugzilla``.
150 150
151 151 bugzilla.user
152 152 The username to use to log into Bugzilla via XMLRPC. Default
153 153 ``bugs``.
154 154
155 155 bugzilla.password
156 156 The password for Bugzilla login.
157 157
158 158 REST-API access mode uses the options listed above as well as:
159 159
160 160 bugzilla.apikey
161 161 An apikey generated on the Bugzilla instance for api access.
162 162 Using an apikey removes the need to store the user and password
163 163 options.
164 164
165 165 XMLRPC+email access mode uses the XMLRPC access mode configuration items,
166 166 and also:
167 167
168 168 bugzilla.bzemail
169 169 The Bugzilla email address.
170 170
171 171 In addition, the Mercurial email settings must be configured. See the
172 172 documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
173 173
174 174 MySQL access mode configuration:
175 175
176 176 bugzilla.host
177 177 Hostname of the MySQL server holding the Bugzilla database.
178 178 Default ``localhost``.
179 179
180 180 bugzilla.db
181 181 Name of the Bugzilla database in MySQL. Default ``bugs``.
182 182
183 183 bugzilla.user
184 184 Username to use to access MySQL server. Default ``bugs``.
185 185
186 186 bugzilla.password
187 187 Password to use to access MySQL server.
188 188
189 189 bugzilla.timeout
190 190 Database connection timeout (seconds). Default 5.
191 191
192 192 bugzilla.bzuser
193 193 Fallback Bugzilla user name to record comments with, if changeset
194 194 committer cannot be found as a Bugzilla user.
195 195
196 196 bugzilla.bzdir
197 197 Bugzilla install directory. Used by default notify. Default
198 198 ``/var/www/html/bugzilla``.
199 199
200 200 bugzilla.notify
201 201 The command to run to get Bugzilla to send bug change notification
202 202 emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
203 203 id) and ``user`` (committer bugzilla email). Default depends on
204 204 version; from 2.18 it is "cd %(bzdir)s && perl -T
205 205 contrib/sendbugmail.pl %(id)s %(user)s".
206 206
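As an illustration only (bug id and address invented), with the default ``bzdir`` the 2.18+ default command expands to::

  cd /var/www/html/bugzilla && perl -T contrib/sendbugmail.pl 1234 committer@example.com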
207 207 Activating the extension::
208 208
209 209 [extensions]
210 210 bugzilla =
211 211
212 212 [hooks]
213 213 # run bugzilla hook on every change pulled or pushed in here
214 214 incoming.bugzilla = python:hgext.bugzilla.hook
215 215
216 216 Example configurations:
217 217
218 218 XMLRPC example configuration. This uses the Bugzilla at
219 219 ``http://my-project.org/bugzilla``, logging in as user
220 220 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
221 221 collection of Mercurial repositories in ``/var/local/hg/repos/``,
222 222 with a web interface at ``http://my-project.org/hg``. ::
223 223
224 224 [bugzilla]
225 225 bzurl=http://my-project.org/bugzilla
226 226 user=bugmail@my-project.org
227 227 password=plugh
228 228 version=xmlrpc
229 229 template=Changeset {node|short} in {root|basename}.
230 230 {hgweb}/{webroot}/rev/{node|short}\\n
231 231 {desc}\\n
232 232 strip=5
233 233
234 234 [web]
235 235 baseurl=http://my-project.org/hg
236 236
237 237 XMLRPC+email example configuration. This uses the Bugzilla at
238 238 ``http://my-project.org/bugzilla``, logging in as user
239 239 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
240 240 collection of Mercurial repositories in ``/var/local/hg/repos/``,
241 241 with a web interface at ``http://my-project.org/hg``. Bug comments
242 242 are sent to the Bugzilla email address
243 243 ``bugzilla@my-project.org``. ::
244 244
245 245 [bugzilla]
246 246 bzurl=http://my-project.org/bugzilla
247 247 user=bugmail@my-project.org
248 248 password=plugh
249 249 version=xmlrpc+email
250 250 bzemail=bugzilla@my-project.org
251 251 template=Changeset {node|short} in {root|basename}.
252 252 {hgweb}/{webroot}/rev/{node|short}\\n
253 253 {desc}\\n
254 254 strip=5
255 255
256 256 [web]
257 257 baseurl=http://my-project.org/hg
258 258
259 259 [usermap]
260 260 user@emaildomain.com=user.name@bugzilladomain.com
261 261
262 262 MySQL example configuration. This has a local Bugzilla 3.2 installation
263 263 in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
264 264 the Bugzilla database name is ``bugs`` and MySQL is
265 265 accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
266 266 with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
267 267 with a web interface at ``http://my-project.org/hg``. ::
268 268
269 269 [bugzilla]
270 270 host=localhost
271 271 password=XYZZY
272 272 version=3.0
273 273 bzuser=unknown@domain.com
274 274 bzdir=/opt/bugzilla-3.2
275 275 template=Changeset {node|short} in {root|basename}.
276 276 {hgweb}/{webroot}/rev/{node|short}\\n
277 277 {desc}\\n
278 278 strip=5
279 279
280 280 [web]
281 281 baseurl=http://my-project.org/hg
282 282
283 283 [usermap]
284 284 user@emaildomain.com=user.name@bugzilladomain.com
285 285
286 286 All the above add a comment to the Bugzilla bug record of the form::
287 287
288 288 Changeset 3b16791d6642 in repository-name.
289 289 http://my-project.org/hg/repository-name/rev/3b16791d6642
290 290
291 291 Changeset commit comment. Bug 1234.
292 292 '''
293 293
294 294 from __future__ import absolute_import
295 295
296 296 import json
297 297 import re
298 298 import time
299 299
300 300 from mercurial.i18n import _
301 301 from mercurial.node import short
302 302 from mercurial import (
303 cmdutil,
304 303 error,
304 logcmdutil,
305 305 mail,
306 306 registrar,
307 307 url,
308 308 util,
309 309 )
310 310
311 311 xmlrpclib = util.xmlrpclib
312 312
313 313 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
314 314 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
315 315 # be specifying the version(s) of Mercurial they are tested with, or
316 316 # leave the attribute unspecified.
317 317 testedwith = 'ships-with-hg-core'
318 318
319 319 configtable = {}
320 320 configitem = registrar.configitem(configtable)
321 321
322 322 configitem('bugzilla', 'apikey',
323 323 default='',
324 324 )
325 325 configitem('bugzilla', 'bzdir',
326 326 default='/var/www/html/bugzilla',
327 327 )
328 328 configitem('bugzilla', 'bzemail',
329 329 default=None,
330 330 )
331 331 configitem('bugzilla', 'bzurl',
332 332 default='http://localhost/bugzilla/',
333 333 )
334 334 configitem('bugzilla', 'bzuser',
335 335 default=None,
336 336 )
337 337 configitem('bugzilla', 'db',
338 338 default='bugs',
339 339 )
340 340 configitem('bugzilla', 'fixregexp',
341 341 default=(r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
342 342 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
343 343 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
344 344 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
345 345 )
346 346 configitem('bugzilla', 'fixresolution',
347 347 default='FIXED',
348 348 )
349 349 configitem('bugzilla', 'fixstatus',
350 350 default='RESOLVED',
351 351 )
352 352 configitem('bugzilla', 'host',
353 353 default='localhost',
354 354 )
355 355 configitem('bugzilla', 'notify',
356 356 default=configitem.dynamicdefault,
357 357 )
358 358 configitem('bugzilla', 'password',
359 359 default=None,
360 360 )
361 361 configitem('bugzilla', 'regexp',
362 362 default=(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
363 363 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
364 364 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
365 365 )
366 366 configitem('bugzilla', 'strip',
367 367 default=0,
368 368 )
369 369 configitem('bugzilla', 'style',
370 370 default=None,
371 371 )
372 372 configitem('bugzilla', 'template',
373 373 default=None,
374 374 )
375 375 configitem('bugzilla', 'timeout',
376 376 default=5,
377 377 )
378 378 configitem('bugzilla', 'user',
379 379 default='bugs',
380 380 )
381 381 configitem('bugzilla', 'usermap',
382 382 default=None,
383 383 )
384 384 configitem('bugzilla', 'version',
385 385 default=None,
386 386 )
387 387
388 388 class bzaccess(object):
389 389 '''Base class for access to Bugzilla.'''
390 390
391 391 def __init__(self, ui):
392 392 self.ui = ui
393 393 usermap = self.ui.config('bugzilla', 'usermap')
394 394 if usermap:
395 395 self.ui.readconfig(usermap, sections=['usermap'])
396 396
397 397 def map_committer(self, user):
398 398 '''map name of committer to Bugzilla user name.'''
399 399 for committer, bzuser in self.ui.configitems('usermap'):
400 400 if committer.lower() == user.lower():
401 401 return bzuser
402 402 return user
403 403
404 404 # Methods to be implemented by access classes.
405 405 #
406 406 # 'bugs' is a dict keyed on bug id, where values are a dict holding
407 407 # updates to bug state. Recognized dict keys are:
408 408 #
409 409 # 'hours': Value, float containing work hours to be updated.
410 410 # 'fix': If key present, bug is to be marked fixed. Value ignored.
411 411
412 412 def filter_real_bug_ids(self, bugs):
413 413 '''remove bug IDs that do not exist in Bugzilla from bugs.'''
414 414
415 415 def filter_cset_known_bug_ids(self, node, bugs):
416 416 '''remove bug IDs where node occurs in comment text from bugs.'''
417 417
418 418 def updatebug(self, bugid, newstate, text, committer):
419 419 '''update the specified bug. Add comment text and set new states.
420 420
421 421 If possible add the comment as being from the committer of
422 422 the changeset. Otherwise use the default Bugzilla user.
423 423 '''
424 424
425 425 def notify(self, bugs, committer):
426 426 '''Force sending of Bugzilla notification emails.
427 427
428 428 Only required if the access method does not trigger notification
429 429 emails automatically.
430 430 '''
431 431
432 432 # Bugzilla via direct access to MySQL database.
433 433 class bzmysql(bzaccess):
434 434 '''Support for direct MySQL access to Bugzilla.
435 435
436 436 The earliest Bugzilla version this is tested with is version 2.16.
437 437
438 438 If your Bugzilla is version 3.4 or above, you are strongly
439 439 recommended to use the XMLRPC access method instead.
440 440 '''
441 441
442 442 @staticmethod
443 443 def sql_buglist(ids):
444 444 '''return SQL-friendly list of bug ids'''
445 445 return '(' + ','.join(map(str, ids)) + ')'
446 446
447 447 _MySQLdb = None
448 448
449 449 def __init__(self, ui):
450 450 try:
451 451 import MySQLdb as mysql
452 452 bzmysql._MySQLdb = mysql
453 453 except ImportError as err:
454 454 raise error.Abort(_('python mysql support not available: %s') % err)
455 455
456 456 bzaccess.__init__(self, ui)
457 457
458 458 host = self.ui.config('bugzilla', 'host')
459 459 user = self.ui.config('bugzilla', 'user')
460 460 passwd = self.ui.config('bugzilla', 'password')
461 461 db = self.ui.config('bugzilla', 'db')
462 462 timeout = int(self.ui.config('bugzilla', 'timeout'))
463 463 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
464 464 (host, db, user, '*' * len(passwd)))
465 465 self.conn = bzmysql._MySQLdb.connect(host=host,
466 466 user=user, passwd=passwd,
467 467 db=db,
468 468 connect_timeout=timeout)
469 469 self.cursor = self.conn.cursor()
470 470 self.longdesc_id = self.get_longdesc_id()
471 471 self.user_ids = {}
472 472 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
473 473
474 474 def run(self, *args, **kwargs):
475 475 '''run a query.'''
476 476 self.ui.note(_('query: %s %s\n') % (args, kwargs))
477 477 try:
478 478 self.cursor.execute(*args, **kwargs)
479 479 except bzmysql._MySQLdb.MySQLError:
480 480 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
481 481 raise
482 482
483 483 def get_longdesc_id(self):
484 484 '''get identity of longdesc field'''
485 485 self.run('select fieldid from fielddefs where name = "longdesc"')
486 486 ids = self.cursor.fetchall()
487 487 if len(ids) != 1:
488 488 raise error.Abort(_('unknown database schema'))
489 489 return ids[0][0]
490 490
491 491 def filter_real_bug_ids(self, bugs):
492 492 '''filter not-existing bugs from set.'''
493 493 self.run('select bug_id from bugs where bug_id in %s' %
494 494 bzmysql.sql_buglist(bugs.keys()))
495 495 existing = [id for (id,) in self.cursor.fetchall()]
496 496 for id in bugs.keys():
497 497 if id not in existing:
498 498 self.ui.status(_('bug %d does not exist\n') % id)
499 499 del bugs[id]
500 500
501 501 def filter_cset_known_bug_ids(self, node, bugs):
502 502 '''filter bug ids that already refer to this changeset from set.'''
503 503 self.run('''select bug_id from longdescs where
504 504 bug_id in %s and thetext like "%%%s%%"''' %
505 505 (bzmysql.sql_buglist(bugs.keys()), short(node)))
506 506 for (id,) in self.cursor.fetchall():
507 507 self.ui.status(_('bug %d already knows about changeset %s\n') %
508 508 (id, short(node)))
509 509 del bugs[id]
510 510
511 511 def notify(self, bugs, committer):
512 512 '''tell bugzilla to send mail.'''
513 513 self.ui.status(_('telling bugzilla to send mail:\n'))
514 514 (user, userid) = self.get_bugzilla_user(committer)
515 515 for id in bugs.keys():
516 516 self.ui.status(_(' bug %s\n') % id)
517 517 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
518 518 bzdir = self.ui.config('bugzilla', 'bzdir')
519 519 try:
520 520 # Backwards-compatible with old notify string, which
521 521 # took one string. This will throw with a new format
522 522 # string.
523 523 cmd = cmdfmt % id
524 524 except TypeError:
525 525 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
526 526 self.ui.note(_('running notify command %s\n') % cmd)
527 527 fp = util.popen('(%s) 2>&1' % cmd)
528 528 out = fp.read()
529 529 ret = fp.close()
530 530 if ret:
531 531 self.ui.warn(out)
532 532 raise error.Abort(_('bugzilla notify command %s') %
533 533 util.explainexit(ret)[0])
534 534 self.ui.status(_('done\n'))
535 535
536 536 def get_user_id(self, user):
537 537 '''look up numeric bugzilla user id.'''
538 538 try:
539 539 return self.user_ids[user]
540 540 except KeyError:
541 541 try:
542 542 userid = int(user)
543 543 except ValueError:
544 544 self.ui.note(_('looking up user %s\n') % user)
545 545 self.run('''select userid from profiles
546 546 where login_name like %s''', user)
547 547 all = self.cursor.fetchall()
548 548 if len(all) != 1:
549 549 raise KeyError(user)
550 550 userid = int(all[0][0])
551 551 self.user_ids[user] = userid
552 552 return userid
553 553
554 554 def get_bugzilla_user(self, committer):
555 555 '''See if committer is a registered bugzilla user. Return
556 556 bugzilla username and userid if so. If not, return default
557 557 bugzilla username and userid.'''
558 558 user = self.map_committer(committer)
559 559 try:
560 560 userid = self.get_user_id(user)
561 561 except KeyError:
562 562 try:
563 563 defaultuser = self.ui.config('bugzilla', 'bzuser')
564 564 if not defaultuser:
565 565 raise error.Abort(_('cannot find bugzilla user id for %s') %
566 566 user)
567 567 userid = self.get_user_id(defaultuser)
568 568 user = defaultuser
569 569 except KeyError:
570 570 raise error.Abort(_('cannot find bugzilla user id for %s or %s')
571 571 % (user, defaultuser))
572 572 return (user, userid)
573 573
574 574 def updatebug(self, bugid, newstate, text, committer):
575 575 '''update bug state with comment text.
576 576
577 577 Try adding comment as committer of changeset, otherwise as
578 578 default bugzilla user.'''
579 579 if len(newstate) > 0:
580 580 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
581 581
582 582 (user, userid) = self.get_bugzilla_user(committer)
583 583 now = time.strftime(r'%Y-%m-%d %H:%M:%S')
584 584 self.run('''insert into longdescs
585 585 (bug_id, who, bug_when, thetext)
586 586 values (%s, %s, %s, %s)''',
587 587 (bugid, userid, now, text))
588 588 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
589 589 values (%s, %s, %s, %s)''',
590 590 (bugid, userid, now, self.longdesc_id))
591 591 self.conn.commit()
592 592
593 593 class bzmysql_2_18(bzmysql):
594 594 '''support for bugzilla 2.18 series.'''
595 595
596 596 def __init__(self, ui):
597 597 bzmysql.__init__(self, ui)
598 598 self.default_notify = \
599 599 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
600 600
601 601 class bzmysql_3_0(bzmysql_2_18):
602 602 '''support for bugzilla 3.0 series.'''
603 603
604 604 def __init__(self, ui):
605 605 bzmysql_2_18.__init__(self, ui)
606 606
607 607 def get_longdesc_id(self):
608 608 '''get identity of longdesc field'''
609 609 self.run('select id from fielddefs where name = "longdesc"')
610 610 ids = self.cursor.fetchall()
611 611 if len(ids) != 1:
612 612 raise error.Abort(_('unknown database schema'))
613 613 return ids[0][0]
614 614
615 615 # Bugzilla via XMLRPC interface.
616 616
617 617 class cookietransportrequest(object):
618 618 """A Transport request method that retains cookies over its lifetime.
619 619
620 620 The regular xmlrpclib transports ignore cookies. Which causes
621 621 a bit of a problem when you need a cookie-based login, as with
622 622 the Bugzilla XMLRPC interface prior to 4.4.3.
623 623
624 624 So this is a helper for defining a Transport which looks for
625 625 cookies being set in responses and saves them to add to all future
626 626 requests.
627 627 """
628 628
629 629 # Inspiration drawn from
630 630 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
631 631 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
632 632
633 633 cookies = []
634 634 def send_cookies(self, connection):
635 635 if self.cookies:
636 636 for cookie in self.cookies:
637 637 connection.putheader("Cookie", cookie)
638 638
639 639 def request(self, host, handler, request_body, verbose=0):
640 640 self.verbose = verbose
641 641 self.accept_gzip_encoding = False
642 642
643 643 # issue XML-RPC request
644 644 h = self.make_connection(host)
645 645 if verbose:
646 646 h.set_debuglevel(1)
647 647
648 648 self.send_request(h, handler, request_body)
649 649 self.send_host(h, host)
650 650 self.send_cookies(h)
651 651 self.send_user_agent(h)
652 652 self.send_content(h, request_body)
653 653
654 654 # Deal with differences between Python 2.6 and 2.7.
655 655 # In the former, h is an HTTP(S). In the latter it's an
656 656 # HTTP(S)Connection. Luckily, the 2.6 implementation of
657 657 # HTTP(S) has an underlying HTTP(S)Connection, so extract
658 658 # that and use it.
659 659 try:
660 660 response = h.getresponse()
661 661 except AttributeError:
662 662 response = h._conn.getresponse()
663 663
664 664 # Add any cookie definitions to our list.
665 665 for header in response.msg.getallmatchingheaders("Set-Cookie"):
666 666 val = header.split(": ", 1)[1]
667 667 cookie = val.split(";", 1)[0]
668 668 self.cookies.append(cookie)
669 669
670 670 if response.status != 200:
671 671 raise xmlrpclib.ProtocolError(host + handler, response.status,
672 672 response.reason, response.msg.headers)
673 673
674 674 payload = response.read()
675 675 parser, unmarshaller = self.getparser()
676 676 parser.feed(payload)
677 677 parser.close()
678 678
679 679 return unmarshaller.close()
680 680
681 681 # The explicit calls to the underlying xmlrpclib __init__() methods are
682 682 # necessary. The xmlrpclib.Transport classes are old-style classes, and
683 683 # it turns out their __init__() doesn't get called when doing multiple
684 684 # inheritance with a new-style class.
685 685 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
686 686 def __init__(self, use_datetime=0):
687 687 if util.safehasattr(xmlrpclib.Transport, "__init__"):
688 688 xmlrpclib.Transport.__init__(self, use_datetime)
689 689
690 690 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
691 691 def __init__(self, use_datetime=0):
692 692 if util.safehasattr(xmlrpclib.Transport, "__init__"):
693 693 xmlrpclib.SafeTransport.__init__(self, use_datetime)
694 694
695 695 class bzxmlrpc(bzaccess):
696 696 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
697 697
698 698 Requires a minimum Bugzilla version 3.4.
699 699 """
700 700
701 701 def __init__(self, ui):
702 702 bzaccess.__init__(self, ui)
703 703
704 704 bzweb = self.ui.config('bugzilla', 'bzurl')
705 705 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
706 706
707 707 user = self.ui.config('bugzilla', 'user')
708 708 passwd = self.ui.config('bugzilla', 'password')
709 709
710 710 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
711 711 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
712 712
713 713 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
714 714 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
715 715 self.bzvermajor = int(ver[0])
716 716 self.bzverminor = int(ver[1])
717 717 login = self.bzproxy.User.login({'login': user, 'password': passwd,
718 718 'restrict_login': True})
719 719 self.bztoken = login.get('token', '')
720 720
721 721 def transport(self, uri):
722 722 if util.urlreq.urlparse(uri, "http")[0] == "https":
723 723 return cookiesafetransport()
724 724 else:
725 725 return cookietransport()
726 726
727 727 def get_bug_comments(self, id):
728 728 """Return a string with all comment text for a bug."""
729 729 c = self.bzproxy.Bug.comments({'ids': [id],
730 730 'include_fields': ['text'],
731 731 'token': self.bztoken})
732 732 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
733 733
734 734 def filter_real_bug_ids(self, bugs):
735 735 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
736 736 'include_fields': [],
737 737 'permissive': True,
738 738 'token': self.bztoken,
739 739 })
740 740 for badbug in probe['faults']:
741 741 id = badbug['id']
742 742 self.ui.status(_('bug %d does not exist\n') % id)
743 743 del bugs[id]
744 744
745 745 def filter_cset_known_bug_ids(self, node, bugs):
746 746 for id in sorted(bugs.keys()):
747 747 if self.get_bug_comments(id).find(short(node)) != -1:
748 748 self.ui.status(_('bug %d already knows about changeset %s\n') %
749 749 (id, short(node)))
750 750 del bugs[id]
751 751
752 752 def updatebug(self, bugid, newstate, text, committer):
753 753 args = {}
754 754 if 'hours' in newstate:
755 755 args['work_time'] = newstate['hours']
756 756
757 757 if self.bzvermajor >= 4:
758 758 args['ids'] = [bugid]
759 759 args['comment'] = {'body' : text}
760 760 if 'fix' in newstate:
761 761 args['status'] = self.fixstatus
762 762 args['resolution'] = self.fixresolution
763 763 args['token'] = self.bztoken
764 764 self.bzproxy.Bug.update(args)
765 765 else:
766 766 if 'fix' in newstate:
767 767 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
768 768 "to mark bugs fixed\n"))
769 769 args['id'] = bugid
770 770 args['comment'] = text
771 771 self.bzproxy.Bug.add_comment(args)
772 772
773 773 class bzxmlrpcemail(bzxmlrpc):
774 774 """Read data from Bugzilla via XMLRPC, send updates via email.
775 775
776 776 Advantages of sending updates via email:
777 777 1. Comments can be added as any user, not just logged in user.
778 778 2. Bug statuses or other fields not accessible via XMLRPC can
779 779 potentially be updated.
780 780
781 781 There is no XMLRPC function to change bug status before Bugzilla
782 782 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
783 783 But bugs can be marked fixed via email from 3.4 onwards.
784 784 """
785 785
786 786 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
787 787 # in-email fields are specified as '@<fieldname> = <value>'. In
788 788 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
789 789 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
790 790 # compatibility, but rather than rely on this use the new format for
791 791 # 4.0 onwards.
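    # As an illustration only (values invented), on Bugzilla 4.0 and later the
    # command lines built by makecommandline() below therefore come out as
    #   @work_time 1.5
    #   @bug_status RESOLVED
    #   @resolution FIXED
    #   @id 1234
    # (with the default fixstatus and fixresolution), and
    # send_bug_modify_email() joins them, a blank line and the comment text
    # into the message body.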
792 792
793 793 def __init__(self, ui):
794 794 bzxmlrpc.__init__(self, ui)
795 795
796 796 self.bzemail = self.ui.config('bugzilla', 'bzemail')
797 797 if not self.bzemail:
798 798 raise error.Abort(_("configuration 'bzemail' missing"))
799 799 mail.validateconfig(self.ui)
800 800
801 801 def makecommandline(self, fieldname, value):
802 802 if self.bzvermajor >= 4:
803 803 return "@%s %s" % (fieldname, str(value))
804 804 else:
805 805 if fieldname == "id":
806 806 fieldname = "bug_id"
807 807 return "@%s = %s" % (fieldname, str(value))
808 808
809 809 def send_bug_modify_email(self, bugid, commands, comment, committer):
810 810 '''send modification message to Bugzilla bug via email.
811 811
812 812 The message format is documented in the Bugzilla email_in.pl
813 813 specification. commands is a list of command lines, comment is the
814 814 comment text.
815 815
816 816 To stop users from crafting commit comments with
817 817 Bugzilla commands, specify the bug ID via the message body, rather
818 818 than the subject line, and leave a blank line after it.
819 819 '''
820 820 user = self.map_committer(committer)
821 821 matches = self.bzproxy.User.get({'match': [user],
822 822 'token': self.bztoken})
823 823 if not matches['users']:
824 824 user = self.ui.config('bugzilla', 'user')
825 825 matches = self.bzproxy.User.get({'match': [user],
826 826 'token': self.bztoken})
827 827 if not matches['users']:
828 828 raise error.Abort(_("default bugzilla user %s email not found")
829 829 % user)
830 830 user = matches['users'][0]['email']
831 831 commands.append(self.makecommandline("id", bugid))
832 832
833 833 text = "\n".join(commands) + "\n\n" + comment
834 834
835 835 _charsets = mail._charsets(self.ui)
836 836 user = mail.addressencode(self.ui, user, _charsets)
837 837 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
838 838 msg = mail.mimeencode(self.ui, text, _charsets)
839 839 msg['From'] = user
840 840 msg['To'] = bzemail
841 841 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
842 842 sendmail = mail.connect(self.ui)
843 843 sendmail(user, bzemail, msg.as_string())
844 844
845 845 def updatebug(self, bugid, newstate, text, committer):
846 846 cmds = []
847 847 if 'hours' in newstate:
848 848 cmds.append(self.makecommandline("work_time", newstate['hours']))
849 849 if 'fix' in newstate:
850 850 cmds.append(self.makecommandline("bug_status", self.fixstatus))
851 851 cmds.append(self.makecommandline("resolution", self.fixresolution))
852 852 self.send_bug_modify_email(bugid, cmds, text, committer)
853 853
854 854 class NotFound(LookupError):
855 855 pass
856 856
857 857 class bzrestapi(bzaccess):
858 858 """Read and write bugzilla data using the REST API available since
859 859 Bugzilla 5.0.
860 860 """
861 861 def __init__(self, ui):
862 862 bzaccess.__init__(self, ui)
863 863 bz = self.ui.config('bugzilla', 'bzurl')
864 864 self.bzroot = '/'.join([bz, 'rest'])
865 865 self.apikey = self.ui.config('bugzilla', 'apikey')
866 866 self.user = self.ui.config('bugzilla', 'user')
867 867 self.passwd = self.ui.config('bugzilla', 'password')
868 868 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
869 869 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
870 870
871 871 def apiurl(self, targets, include_fields=None):
872 872 url = '/'.join([self.bzroot] + [str(t) for t in targets])
873 873 qv = {}
874 874 if self.apikey:
875 875 qv['api_key'] = self.apikey
876 876 elif self.user and self.passwd:
877 877 qv['login'] = self.user
878 878 qv['password'] = self.passwd
879 879 if include_fields:
880 880 qv['include_fields'] = include_fields
881 881 if qv:
882 882 url = '%s?%s' % (url, util.urlreq.urlencode(qv))
883 883 return url
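    # As an illustration only: with bzurl set to http://localhost/bugzilla and
    # an apikey configured, apiurl(('bug', 1234), include_fields='status')
    # yields roughly
    #   http://localhost/bugzilla/rest/bug/1234?api_key=...&include_fields=status
    # (query-parameter order depends on urlencode).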
884 884
885 885 def _fetch(self, burl):
886 886 try:
887 887 resp = url.open(self.ui, burl)
888 888 return json.loads(resp.read())
889 889 except util.urlerr.httperror as inst:
890 890 if inst.code == 401:
891 891 raise error.Abort(_('authorization failed'))
892 892 if inst.code == 404:
893 893 raise NotFound()
894 894 else:
895 895 raise
896 896
897 897 def _submit(self, burl, data, method='POST'):
898 898 data = json.dumps(data)
899 899 if method == 'PUT':
900 900 class putrequest(util.urlreq.request):
901 901 def get_method(self):
902 902 return 'PUT'
903 903 request_type = putrequest
904 904 else:
905 905 request_type = util.urlreq.request
906 906 req = request_type(burl, data,
907 907 {'Content-Type': 'application/json'})
908 908 try:
909 909 resp = url.opener(self.ui).open(req)
910 910 return json.loads(resp.read())
911 911 except util.urlerr.httperror as inst:
912 912 if inst.code == 401:
913 913 raise error.Abort(_('authorization failed'))
914 914 if inst.code == 404:
915 915 raise NotFound()
916 916 else:
917 917 raise
918 918
919 919 def filter_real_bug_ids(self, bugs):
920 920 '''remove bug IDs that do not exist in Bugzilla from bugs.'''
921 921 badbugs = set()
922 922 for bugid in bugs:
923 923 burl = self.apiurl(('bug', bugid), include_fields='status')
924 924 try:
925 925 self._fetch(burl)
926 926 except NotFound:
927 927 badbugs.add(bugid)
928 928 for bugid in badbugs:
929 929 del bugs[bugid]
930 930
931 931 def filter_cset_known_bug_ids(self, node, bugs):
932 932 '''remove bug IDs where node occurs in comment text from bugs.'''
933 933 sn = short(node)
934 934 for bugid in bugs.keys():
935 935 burl = self.apiurl(('bug', bugid, 'comment'), include_fields='text')
936 936 result = self._fetch(burl)
937 937 comments = result['bugs'][str(bugid)]['comments']
938 938 if any(sn in c['text'] for c in comments):
939 939 self.ui.status(_('bug %d already knows about changeset %s\n') %
940 940 (bugid, sn))
941 941 del bugs[bugid]
942 942
943 943 def updatebug(self, bugid, newstate, text, committer):
944 944 '''update the specified bug. Add comment text and set new states.
945 945
946 946 If possible add the comment as being from the committer of
947 947 the changeset. Otherwise use the default Bugzilla user.
948 948 '''
949 949 bugmod = {}
950 950 if 'hours' in newstate:
951 951 bugmod['work_time'] = newstate['hours']
952 952 if 'fix' in newstate:
953 953 bugmod['status'] = self.fixstatus
954 954 bugmod['resolution'] = self.fixresolution
955 955 if bugmod:
956 956 # if we have to change the bugs state do it here
957 957 bugmod['comment'] = {
958 958 'comment': text,
959 959 'is_private': False,
960 960 'is_markdown': False,
961 961 }
962 962 burl = self.apiurl(('bug', bugid))
963 963 self._submit(burl, bugmod, method='PUT')
964 964 self.ui.debug('updated bug %s\n' % bugid)
965 965 else:
966 966 burl = self.apiurl(('bug', bugid, 'comment'))
967 967 self._submit(burl, {
968 968 'comment': text,
969 969 'is_private': False,
970 970 'is_markdown': False,
971 971 })
972 972 self.ui.debug('added comment to bug %s\n' % bugid)
973 973
974 974 def notify(self, bugs, committer):
975 975 '''Force sending of Bugzilla notification emails.
976 976
977 977 Only required if the access method does not trigger notification
978 978 emails automatically.
979 979 '''
980 980 pass
981 981
982 982 class bugzilla(object):
983 983 # supported versions of bugzilla. different versions have
984 984 # different schemas.
985 985 _versions = {
986 986 '2.16': bzmysql,
987 987 '2.18': bzmysql_2_18,
988 988 '3.0': bzmysql_3_0,
989 989 'xmlrpc': bzxmlrpc,
990 990 'xmlrpc+email': bzxmlrpcemail,
991 991 'restapi': bzrestapi,
992 992 }
993 993
994 994 def __init__(self, ui, repo):
995 995 self.ui = ui
996 996 self.repo = repo
997 997
998 998 bzversion = self.ui.config('bugzilla', 'version')
999 999 try:
1000 1000 bzclass = bugzilla._versions[bzversion]
1001 1001 except KeyError:
1002 1002 raise error.Abort(_('bugzilla version %s not supported') %
1003 1003 bzversion)
1004 1004 self.bzdriver = bzclass(self.ui)
1005 1005
1006 1006 self.bug_re = re.compile(
1007 1007 self.ui.config('bugzilla', 'regexp'), re.IGNORECASE)
1008 1008 self.fix_re = re.compile(
1009 1009 self.ui.config('bugzilla', 'fixregexp'), re.IGNORECASE)
1010 1010 self.split_re = re.compile(r'\D+')
1011 1011
1012 1012 def find_bugs(self, ctx):
1013 1013 '''return bugs dictionary created from commit comment.
1014 1014
1015 1015 Extract bug info from changeset comments. Filter out any that are
1016 1016 not known to Bugzilla, and any that already have a reference to
1017 1017 the given changeset in their comments.
1018 1018 '''
1019 1019 start = 0
1020 1020 hours = 0.0
1021 1021 bugs = {}
1022 1022 bugmatch = self.bug_re.search(ctx.description(), start)
1023 1023 fixmatch = self.fix_re.search(ctx.description(), start)
1024 1024 while True:
1025 1025 bugattribs = {}
1026 1026 if not bugmatch and not fixmatch:
1027 1027 break
1028 1028 if not bugmatch:
1029 1029 m = fixmatch
1030 1030 elif not fixmatch:
1031 1031 m = bugmatch
1032 1032 else:
1033 1033 if bugmatch.start() < fixmatch.start():
1034 1034 m = bugmatch
1035 1035 else:
1036 1036 m = fixmatch
1037 1037 start = m.end()
1038 1038 if m is bugmatch:
1039 1039 bugmatch = self.bug_re.search(ctx.description(), start)
1040 1040 if 'fix' in bugattribs:
1041 1041 del bugattribs['fix']
1042 1042 else:
1043 1043 fixmatch = self.fix_re.search(ctx.description(), start)
1044 1044 bugattribs['fix'] = None
1045 1045
1046 1046 try:
1047 1047 ids = m.group('ids')
1048 1048 except IndexError:
1049 1049 ids = m.group(1)
1050 1050 try:
1051 1051 hours = float(m.group('hours'))
1052 1052 bugattribs['hours'] = hours
1053 1053 except IndexError:
1054 1054 pass
1055 1055 except TypeError:
1056 1056 pass
1057 1057 except ValueError:
1058 1058 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
1059 1059
1060 1060 for id in self.split_re.split(ids):
1061 1061 if not id:
1062 1062 continue
1063 1063 bugs[int(id)] = bugattribs
1064 1064 if bugs:
1065 1065 self.bzdriver.filter_real_bug_ids(bugs)
1066 1066 if bugs:
1067 1067 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
1068 1068 return bugs
1069 1069
1070 1070 def update(self, bugid, newstate, ctx):
1071 1071 '''update bugzilla bug with reference to changeset.'''
1072 1072
1073 1073 def webroot(root):
1074 1074 '''strip leading prefix of repo root and turn into
1075 1075 url-safe path.'''
1076 1076 count = int(self.ui.config('bugzilla', 'strip'))
1077 1077 root = util.pconvert(root)
1078 1078 while count > 0:
1079 1079 c = root.find('/')
1080 1080 if c == -1:
1081 1081 break
1082 1082 root = root[c + 1:]
1083 1083 count -= 1
1084 1084 return root
1085 1085
1086 1086 mapfile = None
1087 1087 tmpl = self.ui.config('bugzilla', 'template')
1088 1088 if not tmpl:
1089 1089 mapfile = self.ui.config('bugzilla', 'style')
1090 1090 if not mapfile and not tmpl:
1091 1091 tmpl = _('changeset {node|short} in repo {root} refers '
1092 1092 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
1093 spec = cmdutil.logtemplatespec(tmpl, mapfile)
1094 t = cmdutil.changeset_templater(self.ui, self.repo, spec,
1095 False, None, False)
1093 spec = logcmdutil.templatespec(tmpl, mapfile)
1094 t = logcmdutil.changesettemplater(self.ui, self.repo, spec,
1095 False, None, False)
1096 1096 self.ui.pushbuffer()
1097 1097 t.show(ctx, changes=ctx.changeset(),
1098 1098 bug=str(bugid),
1099 1099 hgweb=self.ui.config('web', 'baseurl'),
1100 1100 root=self.repo.root,
1101 1101 webroot=webroot(self.repo.root))
1102 1102 data = self.ui.popbuffer()
1103 1103 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
1104 1104
1105 1105 def notify(self, bugs, committer):
1106 1106 '''ensure Bugzilla users are notified of bug change.'''
1107 1107 self.bzdriver.notify(bugs, committer)
1108 1108
1109 1109 def hook(ui, repo, hooktype, node=None, **kwargs):
1110 1110 '''add comment to bugzilla for each changeset that refers to a
1111 1111 bugzilla bug id. only add a comment once per bug, so same change
1112 1112 seen multiple times does not fill bug with duplicate data.'''
1113 1113 if node is None:
1114 1114 raise error.Abort(_('hook type %s does not pass a changeset id') %
1115 1115 hooktype)
1116 1116 try:
1117 1117 bz = bugzilla(ui, repo)
1118 1118 ctx = repo[node]
1119 1119 bugs = bz.find_bugs(ctx)
1120 1120 if bugs:
1121 1121 for bug in bugs:
1122 1122 bz.update(bug, bugs[bug], ctx)
1123 1123 bz.notify(bugs, util.email(ctx.user()))
1124 1124 except Exception as e:
1125 1125 raise error.Abort(_('Bugzilla error: %s') % e)
@@ -1,71 +1,72 @@
1 1 # Mercurial extension to provide the 'hg children' command
2 2 #
3 3 # Copyright 2007 by Intevation GmbH <intevation@intevation.de>
4 4 #
5 5 # Author(s):
6 6 # Thomas Arendsen Hein <thomas@intevation.de>
7 7 #
8 8 # This software may be used and distributed according to the terms of the
9 9 # GNU General Public License version 2 or any later version.
10 10
11 11 '''command to display child changesets (DEPRECATED)
12 12
13 13 This extension is deprecated. You should use :hg:`log -r
14 14 "children(REV)"` instead.
15 15 '''
16 16
17 17 from __future__ import absolute_import
18 18
19 19 from mercurial.i18n import _
20 20 from mercurial import (
21 21 cmdutil,
22 logcmdutil,
22 23 pycompat,
23 24 registrar,
24 25 )
25 26
26 27 templateopts = cmdutil.templateopts
27 28
28 29 cmdtable = {}
29 30 command = registrar.command(cmdtable)
30 31 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 32 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 33 # be specifying the version(s) of Mercurial they are tested with, or
33 34 # leave the attribute unspecified.
34 35 testedwith = 'ships-with-hg-core'
35 36
36 37 @command('children',
37 38 [('r', 'rev', '',
38 39 _('show children of the specified revision'), _('REV')),
39 40 ] + templateopts,
40 41 _('hg children [-r REV] [FILE]'),
41 42 inferrepo=True)
42 43 def children(ui, repo, file_=None, **opts):
43 44 """show the children of the given or working directory revision
44 45
45 46 Print the children of the working directory's revisions. If a
46 47 revision is given via -r/--rev, the children of that revision will
47 48 be printed. If a file argument is given, the revision in which the
48 49 file was last changed (after the working directory revision or the
49 50 argument to --rev if given) is printed.
50 51
51 52 Please use :hg:`log` instead::
52 53
53 54 hg children => hg log -r "children(.)"
54 55 hg children -r REV => hg log -r "children(REV)"
55 56
56 57 See :hg:`help log` and :hg:`help revsets.children`.
57 58
58 59 """
59 60 opts = pycompat.byteskwargs(opts)
60 61 rev = opts.get('rev')
61 62 if file_:
62 63 fctx = repo.filectx(file_, changeid=rev)
63 64 childctxs = [fcctx.changectx() for fcctx in fctx.children()]
64 65 else:
65 66 ctx = repo[rev]
66 67 childctxs = ctx.children()
67 68
68 displayer = cmdutil.show_changeset(ui, repo, opts)
69 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
69 70 for cctx in childctxs:
70 71 displayer.show(cctx)
71 72 displayer.close()
@@ -1,210 +1,211 @@
1 1 # churn.py - create a graph of revisions count grouped by template
2 2 #
3 3 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
4 4 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''command to display statistics about repository history'''
10 10
11 11 from __future__ import absolute_import
12 12
13 13 import datetime
14 14 import os
15 15 import time
16 16
17 17 from mercurial.i18n import _
18 18 from mercurial import (
19 19 cmdutil,
20 20 encoding,
21 logcmdutil,
21 22 patch,
22 23 pycompat,
23 24 registrar,
24 25 scmutil,
25 26 util,
26 27 )
27 28
28 29 cmdtable = {}
29 30 command = registrar.command(cmdtable)
30 31 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 32 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 33 # be specifying the version(s) of Mercurial they are tested with, or
33 34 # leave the attribute unspecified.
34 35 testedwith = 'ships-with-hg-core'
35 36
36 37 def changedlines(ui, repo, ctx1, ctx2, fns):
37 38 added, removed = 0, 0
38 39 fmatch = scmutil.matchfiles(repo, fns)
39 40 diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
40 41 for l in diff.split('\n'):
41 42 if l.startswith("+") and not l.startswith("+++ "):
42 43 added += 1
43 44 elif l.startswith("-") and not l.startswith("--- "):
44 45 removed += 1
45 46 return (added, removed)
46 47
47 48 def countrate(ui, repo, amap, *pats, **opts):
48 49 """Calculate stats"""
49 50 opts = pycompat.byteskwargs(opts)
50 51 if opts.get('dateformat'):
51 52 def getkey(ctx):
52 53 t, tz = ctx.date()
53 54 date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
54 55 return date.strftime(opts['dateformat'])
55 56 else:
56 57 tmpl = opts.get('oldtemplate') or opts.get('template')
57 tmpl = cmdutil.makelogtemplater(ui, repo, tmpl)
58 tmpl = logcmdutil.maketemplater(ui, repo, tmpl)
58 59 def getkey(ctx):
59 60 ui.pushbuffer()
60 61 tmpl.show(ctx)
61 62 return ui.popbuffer()
62 63
63 64 state = {'count': 0}
64 65 rate = {}
65 66 df = False
66 67 if opts.get('date'):
67 68 df = util.matchdate(opts['date'])
68 69
69 70 m = scmutil.match(repo[None], pats, opts)
70 71 def prep(ctx, fns):
71 72 rev = ctx.rev()
72 73 if df and not df(ctx.date()[0]): # doesn't match date format
73 74 return
74 75
75 76 key = getkey(ctx).strip()
76 77 key = amap.get(key, key) # alias remap
77 78 if opts.get('changesets'):
78 79 rate[key] = (rate.get(key, (0,))[0] + 1, 0)
79 80 else:
80 81 parents = ctx.parents()
81 82 if len(parents) > 1:
82 83 ui.note(_('revision %d is a merge, ignoring...\n') % (rev,))
83 84 return
84 85
85 86 ctx1 = parents[0]
86 87 lines = changedlines(ui, repo, ctx1, ctx, fns)
87 88 rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
88 89
89 90 state['count'] += 1
90 91 ui.progress(_('analyzing'), state['count'], total=len(repo),
91 92 unit=_('revisions'))
92 93
93 94 for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
94 95 continue
95 96
96 97 ui.progress(_('analyzing'), None)
97 98
98 99 return rate
99 100
100 101
101 102 @command('churn',
102 103 [('r', 'rev', [],
103 104 _('count rate for the specified revision or revset'), _('REV')),
104 105 ('d', 'date', '',
105 106 _('count rate for revisions matching date spec'), _('DATE')),
106 107 ('t', 'oldtemplate', '',
107 108 _('template to group changesets (DEPRECATED)'), _('TEMPLATE')),
108 109 ('T', 'template', '{author|email}',
109 110 _('template to group changesets'), _('TEMPLATE')),
110 111 ('f', 'dateformat', '',
111 112 _('strftime-compatible format for grouping by date'), _('FORMAT')),
112 113 ('c', 'changesets', False, _('count rate by number of changesets')),
113 114 ('s', 'sort', False, _('sort by key (default: sort by count)')),
114 115 ('', 'diffstat', False, _('display added/removed lines separately')),
115 116 ('', 'aliases', '', _('file with email aliases'), _('FILE')),
116 117 ] + cmdutil.walkopts,
117 118 _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]"),
118 119 inferrepo=True)
119 120 def churn(ui, repo, *pats, **opts):
120 121 '''histogram of changes to the repository
121 122
122 123 This command will display a histogram representing the number
123 124 of changed lines or revisions, grouped according to the given
124 125 template. The default template will group changes by author.
125 126 The --dateformat option may be used to group the results by
126 127 date instead.
127 128
128 129 Statistics are based on the number of changed lines, or
129 130 alternatively the number of matching revisions if the
130 131 --changesets option is specified.
131 132
132 133 Examples::
133 134
134 135 # display count of changed lines for every committer
135 136 hg churn -T "{author|email}"
136 137
137 138 # display daily activity graph
138 139 hg churn -f "%H" -s -c
139 140
140 141 # display activity of developers by month
141 142 hg churn -f "%Y-%m" -s -c
142 143
143 144 # display count of lines changed in every year
144 145 hg churn -f "%Y" -s
145 146
146 147 It is possible to map alternate email addresses to a main address
147 148 by providing a file using the following format::
148 149
149 150 <alias email> = <actual email>
150 151
151 152 Such a file may be specified with the --aliases option, otherwise
152 153 a .hgchurn file will be looked for in the working directory root.
153 154 Aliases will be split from the rightmost "=".
154 155 '''
155 156 def pad(s, l):
156 157 return s + " " * (l - encoding.colwidth(s))
157 158
158 159 amap = {}
159 160 aliases = opts.get(r'aliases')
160 161 if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
161 162 aliases = repo.wjoin('.hgchurn')
162 163 if aliases:
163 164 for l in open(aliases, "r"):
164 165 try:
165 166 alias, actual = l.rsplit('=' in l and '=' or None, 1)
166 167 amap[alias.strip()] = actual.strip()
167 168 except ValueError:
168 169 l = l.strip()
169 170 if l:
170 171 ui.warn(_("skipping malformed alias: %s\n") % l)
171 172 continue
172 173
173 174 rate = countrate(ui, repo, amap, *pats, **opts).items()
174 175 if not rate:
175 176 return
176 177
177 178 if opts.get(r'sort'):
178 179 rate.sort()
179 180 else:
180 181 rate.sort(key=lambda x: (-sum(x[1]), x))
181 182
182 183 # Be careful not to have a zero maxcount (issue833)
183 184 maxcount = float(max(sum(v) for k, v in rate)) or 1.0
184 185 maxname = max(len(k) for k, v in rate)
185 186
186 187 ttywidth = ui.termwidth()
187 188 ui.debug("assuming %i character terminal\n" % ttywidth)
188 189 width = ttywidth - maxname - 2 - 2 - 2
189 190
190 191 if opts.get(r'diffstat'):
191 192 width -= 15
192 193 def format(name, diffstat):
193 194 added, removed = diffstat
194 195 return "%s %15s %s%s\n" % (pad(name, maxname),
195 196 '+%d/-%d' % (added, removed),
196 197 ui.label('+' * charnum(added),
197 198 'diffstat.inserted'),
198 199 ui.label('-' * charnum(removed),
199 200 'diffstat.deleted'))
200 201 else:
201 202 width -= 6
202 203 def format(name, count):
203 204 return "%s %6d %s\n" % (pad(name, maxname), sum(count),
204 205 '*' * charnum(sum(count)))
205 206
206 207 def charnum(count):
207 208 return int(round(count * width / maxcount))
208 209
209 210 for name, count in rate:
210 211 ui.write(format(name, count))
@@ -1,517 +1,519 @@
1 1 # journal.py
2 2 #
3 3 # Copyright 2014-2016 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """track previous positions of bookmarks (EXPERIMENTAL)
8 8
9 9 This extension adds a new command: `hg journal`, which shows you where
10 10 bookmarks were previously located.
11 11
12 12 """
13 13
14 14 from __future__ import absolute_import
15 15
16 16 import collections
17 17 import errno
18 18 import os
19 19 import weakref
20 20
21 21 from mercurial.i18n import _
22 22
23 23 from mercurial import (
24 24 bookmarks,
25 25 cmdutil,
26 26 dispatch,
27 27 error,
28 28 extensions,
29 29 hg,
30 30 localrepo,
31 31 lock,
32 logcmdutil,
32 33 node,
33 34 pycompat,
34 35 registrar,
35 36 util,
36 37 )
37 38
38 39 from . import share
39 40
40 41 cmdtable = {}
41 42 command = registrar.command(cmdtable)
42 43
43 44 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 45 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 46 # be specifying the version(s) of Mercurial they are tested with, or
46 47 # leave the attribute unspecified.
47 48 testedwith = 'ships-with-hg-core'
48 49
49 50 # storage format version; increment when the format changes
50 51 storageversion = 0
51 52
52 53 # namespaces
53 54 bookmarktype = 'bookmark'
54 55 wdirparenttype = 'wdirparent'
55 56 # In a shared repository, what shared feature name is used
56 57 # to indicate this namespace is shared with the source?
57 58 sharednamespaces = {
58 59 bookmarktype: hg.sharedbookmarks,
59 60 }
60 61
61 62 # Journal recording, register hooks and storage object
62 63 def extsetup(ui):
63 64 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
64 65 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
65 66 extensions.wrapfilecache(
66 67 localrepo.localrepository, 'dirstate', wrapdirstate)
67 68 extensions.wrapfunction(hg, 'postshare', wrappostshare)
68 69 extensions.wrapfunction(hg, 'copystore', unsharejournal)
69 70
70 71 def reposetup(ui, repo):
71 72 if repo.local():
72 73 repo.journal = journalstorage(repo)
73 74 repo._wlockfreeprefix.add('namejournal')
74 75
75 76 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
76 77 if cached:
77 78 # already instantiated dirstate isn't yet marked as
78 79 # "journal"-ing, even though repo.dirstate() was already
79 80 # wrapped by own wrapdirstate()
80 81 _setupdirstate(repo, dirstate)
81 82
82 83 def runcommand(orig, lui, repo, cmd, fullargs, *args):
83 84 """Track the command line options for recording in the journal"""
84 85 journalstorage.recordcommand(*fullargs)
85 86 return orig(lui, repo, cmd, fullargs, *args)
86 87
87 88 def _setupdirstate(repo, dirstate):
88 89 dirstate.journalstorage = repo.journal
89 90 dirstate.addparentchangecallback('journal', recorddirstateparents)
90 91
91 92 # hooks to record dirstate changes
92 93 def wrapdirstate(orig, repo):
93 94 """Make journal storage available to the dirstate object"""
94 95 dirstate = orig(repo)
95 96 if util.safehasattr(repo, 'journal'):
96 97 _setupdirstate(repo, dirstate)
97 98 return dirstate
98 99
99 100 def recorddirstateparents(dirstate, old, new):
100 101 """Records all dirstate parent changes in the journal."""
101 102 old = list(old)
102 103 new = list(new)
103 104 if util.safehasattr(dirstate, 'journalstorage'):
104 105 # only record two hashes if there was a merge
105 106 oldhashes = old[:1] if old[1] == node.nullid else old
106 107 newhashes = new[:1] if new[1] == node.nullid else new
107 108 dirstate.journalstorage.record(
108 109 wdirparenttype, '.', oldhashes, newhashes)
109 110
110 111 # hooks to record bookmark changes (both local and remote)
111 112 def recordbookmarks(orig, store, fp):
112 113 """Records all bookmark changes in the journal."""
113 114 repo = store._repo
114 115 if util.safehasattr(repo, 'journal'):
115 116 oldmarks = bookmarks.bmstore(repo)
116 117 for mark, value in store.iteritems():
117 118 oldvalue = oldmarks.get(mark, node.nullid)
118 119 if value != oldvalue:
119 120 repo.journal.record(bookmarktype, mark, oldvalue, value)
120 121 return orig(store, fp)
121 122
122 123 # shared repository support
123 124 def _readsharedfeatures(repo):
124 125 """A set of shared features for this repository"""
125 126 try:
126 127 return set(repo.vfs.read('shared').splitlines())
127 128 except IOError as inst:
128 129 if inst.errno != errno.ENOENT:
129 130 raise
130 131 return set()
131 132
132 133 def _mergeentriesiter(*iterables, **kwargs):
133 134 """Given a set of sorted iterables, yield the next entry in merged order
134 135
135 136 Note that by default entries go from most recent to oldest.
136 137 """
137 138 order = kwargs.pop(r'order', max)
138 139 iterables = [iter(it) for it in iterables]
139 140 # this tracks still active iterables; iterables are deleted as they are
140 141 # exhausted, which is why this is a dictionary and why each entry also
141 142 # stores the key. Entries are mutable so we can store the next value each
142 143 # time.
143 144 iterable_map = {}
144 145 for key, it in enumerate(iterables):
145 146 try:
146 147 iterable_map[key] = [next(it), key, it]
147 148 except StopIteration:
148 149 # empty entry, can be ignored
149 150 pass
150 151
151 152 while iterable_map:
152 153 value, key, it = order(iterable_map.itervalues())
153 154 yield value
154 155 try:
155 156 iterable_map[key][0] = next(it)
156 157 except StopIteration:
157 158 # this iterable is empty, remove it from consideration
158 159 del iterable_map[key]
159 160
160 161 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
161 162 """Mark this shared working copy as sharing journal information"""
162 163 with destrepo.wlock():
163 164 orig(sourcerepo, destrepo, **kwargs)
164 165 with destrepo.vfs('shared', 'a') as fp:
165 166 fp.write('journal\n')
166 167
167 168 def unsharejournal(orig, ui, repo, repopath):
168 169 """Copy shared journal entries into this repo when unsharing"""
169 170 if (repo.path == repopath and repo.shared() and
170 171 util.safehasattr(repo, 'journal')):
171 172 sharedrepo = share._getsrcrepo(repo)
172 173 sharedfeatures = _readsharedfeatures(repo)
173 174 if sharedrepo and sharedfeatures > {'journal'}:
174 175 # there is a shared repository and there are shared journal entries
175 176 # to copy. move shared data over from source to destination but
176 177 # move the local file first
177 178 if repo.vfs.exists('namejournal'):
178 179 journalpath = repo.vfs.join('namejournal')
179 180 util.rename(journalpath, journalpath + '.bak')
180 181 storage = repo.journal
181 182 local = storage._open(
182 183 repo.vfs, filename='namejournal.bak', _newestfirst=False)
183 184 shared = (
184 185 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
185 186 if sharednamespaces.get(e.namespace) in sharedfeatures)
186 187 for entry in _mergeentriesiter(local, shared, order=min):
187 188 storage._write(repo.vfs, entry)
188 189
189 190 return orig(ui, repo, repopath)
190 191
191 192 class journalentry(collections.namedtuple(
192 193 u'journalentry',
193 194 u'timestamp user command namespace name oldhashes newhashes')):
194 195 """Individual journal entry
195 196
196 197 * timestamp: a mercurial (time, timezone) tuple
197 198 * user: the username that ran the command
198 199 * namespace: the entry namespace, an opaque string
199 200 * name: the name of the changed item, opaque string with meaning in the
200 201 namespace
201 202 * command: the hg command that triggered this record
202 203 * oldhashes: a tuple of one or more binary hashes for the old location
203 204 * newhashes: a tuple of one or more binary hashes for the new location
204 205
205 206 Handles serialisation from and to the storage format. Fields are
206 207 separated by newlines, hashes are written out in hex separated by commas,
207 208 timestamp and timezone are separated by a space.
208 209
209 210 """
210 211 @classmethod
211 212 def fromstorage(cls, line):
212 213 (time, user, command, namespace, name,
213 214 oldhashes, newhashes) = line.split('\n')
214 215 timestamp, tz = time.split()
215 216 timestamp, tz = float(timestamp), int(tz)
216 217 oldhashes = tuple(node.bin(hash) for hash in oldhashes.split(','))
217 218 newhashes = tuple(node.bin(hash) for hash in newhashes.split(','))
218 219 return cls(
219 220 (timestamp, tz), user, command, namespace, name,
220 221 oldhashes, newhashes)
221 222
222 223 def __str__(self):
223 224 """String representation for storage"""
224 225 time = ' '.join(map(str, self.timestamp))
225 226 oldhashes = ','.join([node.hex(hash) for hash in self.oldhashes])
226 227 newhashes = ','.join([node.hex(hash) for hash in self.newhashes])
227 228 return '\n'.join((
228 229 time, self.user, self.command, self.namespace, self.name,
229 230 oldhashes, newhashes))
230 231
231 232 class journalstorage(object):
232 233 """Storage for journal entries
233 234
234 235 Entries are divided over two files; one with entries that pertain to the
235 236 local working copy *only*, and one with entries that are shared across
236 237 multiple working copies when shared using the share extension.
237 238
238 239 Entries are stored with NUL bytes as separators. See the journalentry
239 240 class for the per-entry structure.
240 241
241 242 The file format starts with an integer version, delimited by a NUL.
242 243
243 244 This storage uses a dedicated lock; this makes it easier to avoid issues
244 245 with adding entries that are added when the regular wlock is unlocked (e.g.
245 246 the dirstate).
246 247
247 248 """
248 249 _currentcommand = ()
249 250 _lockref = None
250 251
251 252 def __init__(self, repo):
252 253 self.user = util.getuser()
253 254 self.ui = repo.ui
254 255 self.vfs = repo.vfs
255 256
256 257 # is this working copy using a shared storage?
257 258 self.sharedfeatures = self.sharedvfs = None
258 259 if repo.shared():
259 260 features = _readsharedfeatures(repo)
260 261 sharedrepo = share._getsrcrepo(repo)
261 262 if sharedrepo is not None and 'journal' in features:
262 263 self.sharedvfs = sharedrepo.vfs
263 264 self.sharedfeatures = features
264 265
265 266 # track the current command for recording in journal entries
266 267 @property
267 268 def command(self):
268 269 commandstr = ' '.join(
269 270 map(util.shellquote, journalstorage._currentcommand))
270 271 if '\n' in commandstr:
271 272 # truncate multi-line commands
272 273 commandstr = commandstr.partition('\n')[0] + ' ...'
273 274 return commandstr
274 275
275 276 @classmethod
276 277 def recordcommand(cls, *fullargs):
277 278 """Set the current hg arguments, stored with recorded entries"""
278 279 # Set the current command on the class because we may have started
279 280 # with a non-local repo (cloning for example).
280 281 cls._currentcommand = fullargs
281 282
282 283 def _currentlock(self, lockref):
283 284 """Returns the lock if it's held, or None if it's not.
284 285
285 286 (This is copied from the localrepo class)
286 287 """
287 288 if lockref is None:
288 289 return None
289 290 l = lockref()
290 291 if l is None or not l.held:
291 292 return None
292 293 return l
293 294
294 295 def jlock(self, vfs):
295 296 """Create a lock for the journal file"""
296 297 if self._currentlock(self._lockref) is not None:
297 298 raise error.Abort(_('journal lock does not support nesting'))
298 299 desc = _('journal of %s') % vfs.base
299 300 try:
300 301 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
301 302 except error.LockHeld as inst:
302 303 self.ui.warn(
303 304 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
304 305 # default to 600 seconds timeout
305 306 l = lock.lock(
306 307 vfs, 'namejournal.lock',
307 308 self.ui.configint("ui", "timeout"), desc=desc)
308 309 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
309 310 self._lockref = weakref.ref(l)
310 311 return l
311 312
312 313 def record(self, namespace, name, oldhashes, newhashes):
313 314 """Record a new journal entry
314 315
315 316 * namespace: an opaque string; this can be used to filter on the type
316 317 of recorded entries.
317 318 * name: the name defining this entry; for bookmarks, this is the
318 319 bookmark name. Can be filtered on when retrieving entries.
319 320 * oldhashes and newhashes: each a single binary hash, or a list of
320 321 binary hashes. These represent the old and new position of the named
321 322 item.
322 323
323 324 """
324 325 if not isinstance(oldhashes, list):
325 326 oldhashes = [oldhashes]
326 327 if not isinstance(newhashes, list):
327 328 newhashes = [newhashes]
328 329
329 330 entry = journalentry(
330 331 util.makedate(), self.user, self.command, namespace, name,
331 332 oldhashes, newhashes)
332 333
333 334 vfs = self.vfs
334 335 if self.sharedvfs is not None:
335 336 # write to the shared repository if this feature is being
336 337 # shared between working copies.
337 338 if sharednamespaces.get(namespace) in self.sharedfeatures:
338 339 vfs = self.sharedvfs
339 340
340 341 self._write(vfs, entry)
341 342
342 343 def _write(self, vfs, entry):
343 344 with self.jlock(vfs):
344 345 version = None
345 346 # open file in append mode to ensure it is created if missing
346 347 with vfs('namejournal', mode='a+b') as f:
347 348 f.seek(0, os.SEEK_SET)
348 349 # Read just enough bytes to get a version number (up to 2
349 350 # digits plus separator)
350 351 version = f.read(3).partition('\0')[0]
351 352 if version and version != str(storageversion):
352 353 # different version of the storage. Exit early (and do not
353 354 # write anything) if this is not a version we can handle or
354 355 # the file is corrupt. In future, perhaps rotate the file
355 356 # instead?
356 357 self.ui.warn(
357 358 _("unsupported journal file version '%s'\n") % version)
358 359 return
359 360 if not version:
360 361 # empty file, write version first
361 362 f.write(str(storageversion) + '\0')
362 363 f.seek(0, os.SEEK_END)
363 364 f.write(str(entry) + '\0')
364 365
365 366 def filtered(self, namespace=None, name=None):
366 367 """Yield all journal entries with the given namespace or name
367 368
368 369 Both the namespace and the name are optional; if neither is given all
369 370 entries in the journal are produced.
370 371
371 372 Matching supports regular expressions by using the `re:` prefix
372 373 (use `literal:` to match names or namespaces that start with `re:`)
373 374
374 375 """
375 376 if namespace is not None:
376 377 namespace = util.stringmatcher(namespace)[-1]
377 378 if name is not None:
378 379 name = util.stringmatcher(name)[-1]
379 380 for entry in self:
380 381 if namespace is not None and not namespace(entry.namespace):
381 382 continue
382 383 if name is not None and not name(entry.name):
383 384 continue
384 385 yield entry
385 386
386 387 def __iter__(self):
387 388 """Iterate over the storage
388 389
389 390 Yields journalentry instances for each contained journal record.
390 391
391 392 """
392 393 local = self._open(self.vfs)
393 394
394 395 if self.sharedvfs is None:
395 396 return local
396 397
397 398 # iterate over both local and shared entries, but only those
398 399 # shared entries that are among the currently shared features
399 400 shared = (
400 401 e for e in self._open(self.sharedvfs)
401 402 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
402 403 return _mergeentriesiter(local, shared)
403 404
404 405 def _open(self, vfs, filename='namejournal', _newestfirst=True):
405 406 if not vfs.exists(filename):
406 407 return
407 408
408 409 with vfs(filename) as f:
409 410 raw = f.read()
410 411
411 412 lines = raw.split('\0')
412 413 version = lines and lines[0]
413 414 if version != str(storageversion):
414 415 version = version or _('not available')
415 416 raise error.Abort(_("unknown journal file version '%s'") % version)
416 417
417 418 # Skip the first line, it's a version number. Normally we iterate over
418 419 # these in reverse order to list newest first; only when copying across
419 420 # a shared storage do we forgo reversing.
420 421 lines = lines[1:]
421 422 if _newestfirst:
422 423 lines = reversed(lines)
423 424 for line in lines:
424 425 if not line:
425 426 continue
426 427 yield journalentry.fromstorage(line)
427 428
428 429 # journal reading
429 430 # log options that don't make sense for journal
430 431 _ignoreopts = ('no-merges', 'graph')
431 432 @command(
432 433 'journal', [
433 434 ('', 'all', None, 'show history for all names'),
434 435 ('c', 'commits', None, 'show commit metadata'),
435 436 ] + [opt for opt in cmdutil.logopts if opt[1] not in _ignoreopts],
436 437 '[OPTION]... [BOOKMARKNAME]')
437 438 def journal(ui, repo, *args, **opts):
438 439 """show the previous position of bookmarks and the working copy
439 440
440 441 The journal is used to see the previous commits that bookmarks and the
441 442 working copy pointed to. By default the previous locations of the working
442 443 copy are shown. Passing a bookmark name will show all the previous positions of
443 444 that bookmark. Use the --all switch to show previous locations for all
444 445 bookmarks and the working copy; each line will then include the bookmark
445 446 name, or '.' for the working copy, as well.
446 447
447 448 If `name` starts with `re:`, the remainder of the name is treated as
448 449 a regular expression. To match a name that actually starts with `re:`,
449 450 use the prefix `literal:`.
450 451
451 452 By default hg journal only shows the commit hash and the command that was
452 453 running at that time. -v/--verbose will show the prior hash, the user, and
453 454 the time at which it happened.
454 455
455 456 Use -c/--commits to output log information on each commit hash; at this
456 457 point you can use the usual `--patch`, `--git`, `--stat` and `--template`
457 458 switches to alter the log output for these.
458 459
459 460 `hg journal -T json` can be used to produce machine readable output.
460 461
461 462 """
462 463 opts = pycompat.byteskwargs(opts)
463 464 name = '.'
464 465 if opts.get('all'):
465 466 if args:
466 467 raise error.Abort(
467 468 _("You can't combine --all and filtering on a name"))
468 469 name = None
469 470 if args:
470 471 name = args[0]
471 472
472 473 fm = ui.formatter('journal', opts)
473 474
474 475 if opts.get("template") != "json":
475 476 if name is None:
476 477 displayname = _('the working copy and bookmarks')
477 478 else:
478 479 displayname = "'%s'" % name
479 480 ui.status(_("previous locations of %s:\n") % displayname)
480 481
481 limit = cmdutil.loglimit(opts)
482 limit = logcmdutil.getlimit(opts)
482 483 entry = None
483 484 ui.pager('journal')
484 485 for count, entry in enumerate(repo.journal.filtered(name=name)):
485 486 if count == limit:
486 487 break
487 488 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
488 489 name='node', sep=',')
489 490 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
490 491 name='node', sep=',')
491 492
492 493 fm.startitem()
493 494 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
494 495 fm.write('newhashes', '%s', newhashesstr)
495 496 fm.condwrite(ui.verbose, 'user', ' %-8s', entry.user)
496 497 fm.condwrite(
497 498 opts.get('all') or name.startswith('re:'),
498 499 'name', ' %-8s', entry.name)
499 500
500 501 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
501 502 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
502 503 fm.write('command', ' %s\n', entry.command)
503 504
504 505 if opts.get("commits"):
505 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=False)
506 displayer = logcmdutil.changesetdisplayer(ui, repo, opts,
507 buffered=False)
506 508 for hash in entry.newhashes:
507 509 try:
508 510 ctx = repo[hash]
509 511 displayer.show(ctx)
510 512 except error.RepoLookupError as e:
511 513 fm.write('repolookuperror', "%s\n\n", str(e))
512 514 displayer.close()
513 515
514 516 fm.end()
515 517
516 518 if entry is None:
517 519 ui.status(_("no recorded locations\n"))
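
The journal.py hunks above show the full shape of this API change: import logcmdutil and call logcmdutil.getlimit() and logcmdutil.changesetdisplayer() where the old cmdutil.loglimit() and cmdutil.show_changeset() aliases were used. A minimal sketch of a log-style command updated the same way; the command name 'showrevs' is hypothetical, and only the two logcmdutil calls (plus cmdutil.logopts and pycompat.byteskwargs, both used in journal.py itself) are taken from the diff::

    from mercurial import cmdutil, logcmdutil, pycompat, registrar

    cmdtable = {}
    command = registrar.command(cmdtable)

    @command('showrevs', cmdutil.logopts, '[OPTION]...')
    def showrevs(ui, repo, *args, **opts):
        """print revisions using the renamed logcmdutil helpers (sketch only)"""
        opts = pycompat.byteskwargs(opts)
        limit = logcmdutil.getlimit(opts)            # was cmdutil.loglimit(opts)
        displayer = logcmdutil.changesetdisplayer(   # was cmdutil.show_changeset(...)
            ui, repo, opts, buffered=False)
        for count, rev in enumerate(repo):
            if count == limit:
                break
            displayer.show(repo[rev])
        displayer.close()
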
@@ -1,807 +1,808 b''
1 1 # keyword.py - $Keyword$ expansion for Mercurial
2 2 #
3 3 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 #
8 8 # $Id$
9 9 #
10 10 # Keyword expansion hack against the grain of a Distributed SCM
11 11 #
12 12 # There are many good reasons why this is not needed in a distributed
13 13 # SCM, still it may be useful in very small projects based on single
14 14 # files (like LaTeX packages), that are mostly addressed to an
15 15 # audience not running a version control system.
16 16 #
17 17 # For in-depth discussion refer to
18 18 # <https://mercurial-scm.org/wiki/KeywordPlan>.
19 19 #
20 20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 21 #
22 22 # Binary files are not touched.
23 23 #
24 24 # Files to act upon/ignore are specified in the [keyword] section.
25 25 # Customized keyword template mappings in the [keywordmaps] section.
26 26 #
27 27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
28 28
29 29 '''expand keywords in tracked files
30 30
31 31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
32 32 tracked text files selected by your configuration.
33 33
34 34 Keywords are only expanded in local repositories and not stored in the
35 35 change history. The mechanism can be regarded as a convenience for the
36 36 current user or for archive distribution.
37 37
38 38 Keywords expand to the changeset data pertaining to the latest change
39 39 relative to the working directory parent of each file.
40 40
41 41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
42 42 sections of hgrc files.
43 43
44 44 Example::
45 45
46 46 [keyword]
47 47 # expand keywords in every python file except those matching "x*"
48 48 **.py =
49 49 x* = ignore
50 50
51 51 [keywordset]
52 52 # prefer svn- over cvs-like default keywordmaps
53 53 svn = True
54 54
55 55 .. note::
56 56
57 57 The more specific you are in your filename patterns, the less speed
58 58 you lose in huge repositories.
59 59
60 60 For [keywordmaps] template mapping and expansion demonstration and
61 61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
62 62 available templates and filters.
63 63
64 64 Three additional date template filters are provided:
65 65
66 66 :``utcdate``: "2006/09/18 15:13:13"
67 67 :``svnutcdate``: "2006-09-18 15:13:13Z"
68 68 :``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"
69 69
70 70 The default template mappings (view with :hg:`kwdemo -d`) can be
71 71 replaced with customized keywords and templates. Again, run
72 72 :hg:`kwdemo` to control the results of your configuration changes.
73 73
74 74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
75 75 to avoid storing expanded keywords in the change history.
76 76
77 77 To force expansion after enabling it, or a configuration change, run
78 78 :hg:`kwexpand`.
79 79
80 80 Expansions spanning more than one line and incremental expansions,
81 81 like CVS' $Log$, are not supported. A keyword template map "Log =
82 82 {desc}" expands to the first line of the changeset description.
83 83 '''
84 84
85 85
86 86 from __future__ import absolute_import
87 87
88 88 import os
89 89 import re
90 90 import tempfile
91 91 import weakref
92 92
93 93 from mercurial.i18n import _
94 94 from mercurial.hgweb import webcommands
95 95
96 96 from mercurial import (
97 97 cmdutil,
98 98 context,
99 99 dispatch,
100 100 error,
101 101 extensions,
102 102 filelog,
103 103 localrepo,
104 logcmdutil,
104 105 match,
105 106 patch,
106 107 pathutil,
107 108 pycompat,
108 109 registrar,
109 110 scmutil,
110 111 templatefilters,
111 112 util,
112 113 )
113 114
114 115 cmdtable = {}
115 116 command = registrar.command(cmdtable)
116 117 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
117 118 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
118 119 # be specifying the version(s) of Mercurial they are tested with, or
119 120 # leave the attribute unspecified.
120 121 testedwith = 'ships-with-hg-core'
121 122
122 123 # hg commands that do not act on keywords
123 124 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
124 125 ' outgoing push tip verify convert email glog')
125 126
126 127 # webcommands that do not act on keywords
127 128 nokwwebcommands = ('annotate changeset rev filediff diff comparison')
128 129
129 130 # hg commands that trigger expansion only when writing to working dir,
130 131 # not when reading filelog, and unexpand when reading from working dir
131 132 restricted = ('merge kwexpand kwshrink record qrecord resolve transplant'
132 133 ' unshelve rebase graft backout histedit fetch')
133 134
134 135 # names of extensions using dorecord
135 136 recordextensions = 'record'
136 137
137 138 colortable = {
138 139 'kwfiles.enabled': 'green bold',
139 140 'kwfiles.deleted': 'cyan bold underline',
140 141 'kwfiles.enabledunknown': 'green',
141 142 'kwfiles.ignored': 'bold',
142 143 'kwfiles.ignoredunknown': 'none'
143 144 }
144 145
145 146 templatefilter = registrar.templatefilter()
146 147
147 148 configtable = {}
148 149 configitem = registrar.configitem(configtable)
149 150
150 151 configitem('keywordset', 'svn',
151 152 default=False,
152 153 )
153 154 # date like in cvs' $Date
154 155 @templatefilter('utcdate')
155 156 def utcdate(text):
156 157 '''Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
157 158 '''
158 159 return util.datestr((util.parsedate(text)[0], 0), '%Y/%m/%d %H:%M:%S')
159 160 # date like in svn's $Date
160 161 @templatefilter('svnisodate')
161 162 def svnisodate(text):
162 163 '''Date. Returns a date in this format: "2009-08-18 13:00:13
163 164 +0200 (Tue, 18 Aug 2009)".
164 165 '''
165 166 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
166 167 # date like in svn's $Id
167 168 @templatefilter('svnutcdate')
168 169 def svnutcdate(text):
169 170 '''Date. Returns a UTC-date in this format: "2009-08-18
170 171 11:00:13Z".
171 172 '''
172 173 return util.datestr((util.parsedate(text)[0], 0), '%Y-%m-%d %H:%M:%SZ')
173 174
174 175 # make keyword tools accessible
175 176 kwtools = {'hgcmd': ''}
176 177
177 178 def _defaultkwmaps(ui):
178 179 '''Returns default keywordmaps according to keywordset configuration.'''
179 180 templates = {
180 181 'Revision': '{node|short}',
181 182 'Author': '{author|user}',
182 183 }
183 184 kwsets = ({
184 185 'Date': '{date|utcdate}',
185 186 'RCSfile': '{file|basename},v',
186 187 'RCSFile': '{file|basename},v', # kept for backwards compatibility
187 188 # with hg-keyword
188 189 'Source': '{root}/{file},v',
189 190 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
190 191 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
191 192 }, {
192 193 'Date': '{date|svnisodate}',
193 194 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
194 195 'LastChangedRevision': '{node|short}',
195 196 'LastChangedBy': '{author|user}',
196 197 'LastChangedDate': '{date|svnisodate}',
197 198 })
198 199 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
199 200 return templates
200 201
201 202 def _shrinktext(text, subfunc):
202 203 '''Helper for keyword expansion removal in text.
203 204 Depending on subfunc, it also returns the number of substitutions.'''
204 205 return subfunc(r'$\1$', text)
205 206
206 207 def _preselect(wstatus, changed):
207 208 '''Retrieves modified and added files from a working directory state
208 209 and returns the subset of each contained in given changed files
209 210 retrieved from a change context.'''
210 211 modified = [f for f in wstatus.modified if f in changed]
211 212 added = [f for f in wstatus.added if f in changed]
212 213 return modified, added
213 214
214 215
215 216 class kwtemplater(object):
216 217 '''
217 218 Sets up keyword templates, corresponding keyword regex, and
218 219 provides keyword substitution functions.
219 220 '''
220 221
221 222 def __init__(self, ui, repo, inc, exc):
222 223 self.ui = ui
223 224 self._repo = weakref.ref(repo)
224 225 self.match = match.match(repo.root, '', [], inc, exc)
225 226 self.restrict = kwtools['hgcmd'] in restricted.split()
226 227 self.postcommit = False
227 228
228 229 kwmaps = self.ui.configitems('keywordmaps')
229 230 if kwmaps: # override default templates
230 231 self.templates = dict(kwmaps)
231 232 else:
232 233 self.templates = _defaultkwmaps(self.ui)
233 234
234 235 @property
235 236 def repo(self):
236 237 return self._repo()
237 238
238 239 @util.propertycache
239 240 def escape(self):
240 241 '''Returns bar-separated and escaped keywords.'''
241 242 return '|'.join(map(re.escape, self.templates.keys()))
242 243
243 244 @util.propertycache
244 245 def rekw(self):
245 246 '''Returns regex for unexpanded keywords.'''
246 247 return re.compile(r'\$(%s)\$' % self.escape)
247 248
248 249 @util.propertycache
249 250 def rekwexp(self):
250 251 '''Returns regex for expanded keywords.'''
251 252 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
252 253
253 254 def substitute(self, data, path, ctx, subfunc):
254 255 '''Replaces keywords in data with expanded template.'''
255 256 def kwsub(mobj):
256 257 kw = mobj.group(1)
257 ct = cmdutil.makelogtemplater(self.ui, self.repo,
258 ct = logcmdutil.maketemplater(self.ui, self.repo,
258 259 self.templates[kw])
259 260 self.ui.pushbuffer()
260 261 ct.show(ctx, root=self.repo.root, file=path)
261 262 ekw = templatefilters.firstline(self.ui.popbuffer())
262 263 return '$%s: %s $' % (kw, ekw)
263 264 return subfunc(kwsub, data)
264 265
265 266 def linkctx(self, path, fileid):
266 267 '''Similar to filelog.linkrev, but returns a changectx.'''
267 268 return self.repo.filectx(path, fileid=fileid).changectx()
268 269
269 270 def expand(self, path, node, data):
270 271 '''Returns data with keywords expanded.'''
271 272 if not self.restrict and self.match(path) and not util.binary(data):
272 273 ctx = self.linkctx(path, node)
273 274 return self.substitute(data, path, ctx, self.rekw.sub)
274 275 return data
275 276
276 277 def iskwfile(self, cand, ctx):
277 278 '''Returns subset of candidates which are configured for keyword
278 279 expansion but are not symbolic links.'''
279 280 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
280 281
281 282 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
282 283 '''Overwrites selected files expanding/shrinking keywords.'''
283 284 if self.restrict or lookup or self.postcommit: # exclude kw_copy
284 285 candidates = self.iskwfile(candidates, ctx)
285 286 if not candidates:
286 287 return
287 288 kwcmd = self.restrict and lookup # kwexpand/kwshrink
288 289 if self.restrict or expand and lookup:
289 290 mf = ctx.manifest()
290 291 if self.restrict or rekw:
291 292 re_kw = self.rekw
292 293 else:
293 294 re_kw = self.rekwexp
294 295 if expand:
295 296 msg = _('overwriting %s expanding keywords\n')
296 297 else:
297 298 msg = _('overwriting %s shrinking keywords\n')
298 299 for f in candidates:
299 300 if self.restrict:
300 301 data = self.repo.file(f).read(mf[f])
301 302 else:
302 303 data = self.repo.wread(f)
303 304 if util.binary(data):
304 305 continue
305 306 if expand:
306 307 parents = ctx.parents()
307 308 if lookup:
308 309 ctx = self.linkctx(f, mf[f])
309 310 elif self.restrict and len(parents) > 1:
310 311 # merge commit
311 312 # in case of conflict f is in modified state during
312 313 # merge, even if f does not differ from f in parent
313 314 for p in parents:
314 315 if f in p and not p[f].cmp(ctx[f]):
315 316 ctx = p[f].changectx()
316 317 break
317 318 data, found = self.substitute(data, f, ctx, re_kw.subn)
318 319 elif self.restrict:
319 320 found = re_kw.search(data)
320 321 else:
321 322 data, found = _shrinktext(data, re_kw.subn)
322 323 if found:
323 324 self.ui.note(msg % f)
324 325 fp = self.repo.wvfs(f, "wb", atomictemp=True)
325 326 fp.write(data)
326 327 fp.close()
327 328 if kwcmd:
328 329 self.repo.dirstate.normal(f)
329 330 elif self.postcommit:
330 331 self.repo.dirstate.normallookup(f)
331 332
332 333 def shrink(self, fname, text):
333 334 '''Returns text with all keyword substitutions removed.'''
334 335 if self.match(fname) and not util.binary(text):
335 336 return _shrinktext(text, self.rekwexp.sub)
336 337 return text
337 338
338 339 def shrinklines(self, fname, lines):
339 340 '''Returns lines with keyword substitutions removed.'''
340 341 if self.match(fname):
341 342 text = ''.join(lines)
342 343 if not util.binary(text):
343 344 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
344 345 return lines
345 346
346 347 def wread(self, fname, data):
347 348 '''If in restricted mode returns data read from wdir with
348 349 keyword substitutions removed.'''
349 350 if self.restrict:
350 351 return self.shrink(fname, data)
351 352 return data
352 353
353 354 class kwfilelog(filelog.filelog):
354 355 '''
355 356 Subclass of filelog to hook into its read, add, cmp methods.
356 357 Keywords are "stored" unexpanded, and processed on reading.
357 358 '''
358 359 def __init__(self, opener, kwt, path):
359 360 super(kwfilelog, self).__init__(opener, path)
360 361 self.kwt = kwt
361 362 self.path = path
362 363
363 364 def read(self, node):
364 365 '''Expands keywords when reading filelog.'''
365 366 data = super(kwfilelog, self).read(node)
366 367 if self.renamed(node):
367 368 return data
368 369 return self.kwt.expand(self.path, node, data)
369 370
370 371 def add(self, text, meta, tr, link, p1=None, p2=None):
371 372 '''Removes keyword substitutions when adding to filelog.'''
372 373 text = self.kwt.shrink(self.path, text)
373 374 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
374 375
375 376 def cmp(self, node, text):
376 377 '''Removes keyword substitutions for comparison.'''
377 378 text = self.kwt.shrink(self.path, text)
378 379 return super(kwfilelog, self).cmp(node, text)
379 380
380 381 def _status(ui, repo, wctx, kwt, *pats, **opts):
381 382 '''Bails out if [keyword] configuration is not active.
382 383 Returns status of working directory.'''
383 384 if kwt:
384 385 opts = pycompat.byteskwargs(opts)
385 386 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
386 387 unknown=opts.get('unknown') or opts.get('all'))
387 388 if ui.configitems('keyword'):
388 389 raise error.Abort(_('[keyword] patterns cannot match'))
389 390 raise error.Abort(_('no [keyword] patterns configured'))
390 391
391 392 def _kwfwrite(ui, repo, expand, *pats, **opts):
392 393 '''Selects files and passes them to kwtemplater.overwrite.'''
393 394 wctx = repo[None]
394 395 if len(wctx.parents()) > 1:
395 396 raise error.Abort(_('outstanding uncommitted merge'))
396 397 kwt = getattr(repo, '_keywordkwt', None)
397 398 with repo.wlock():
398 399 status = _status(ui, repo, wctx, kwt, *pats, **opts)
399 400 if status.modified or status.added or status.removed or status.deleted:
400 401 raise error.Abort(_('outstanding uncommitted changes'))
401 402 kwt.overwrite(wctx, status.clean, True, expand)
402 403
403 404 @command('kwdemo',
404 405 [('d', 'default', None, _('show default keyword template maps')),
405 406 ('f', 'rcfile', '',
406 407 _('read maps from rcfile'), _('FILE'))],
407 408 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'),
408 409 optionalrepo=True)
409 410 def demo(ui, repo, *args, **opts):
410 411 '''print [keywordmaps] configuration and an expansion example
411 412
412 413 Show current, custom, or default keyword template maps and their
413 414 expansions.
414 415
415 416 Extend the current configuration by specifying maps as arguments
416 417 and using -f/--rcfile to source an external hgrc file.
417 418
418 419 Use -d/--default to disable current configuration.
419 420
420 421 See :hg:`help templates` for information on templates and filters.
421 422 '''
422 423 def demoitems(section, items):
423 424 ui.write('[%s]\n' % section)
424 425 for k, v in sorted(items):
425 426 ui.write('%s = %s\n' % (k, v))
426 427
427 428 fn = 'demo.txt'
428 429 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
429 430 ui.note(_('creating temporary repository at %s\n') % tmpdir)
430 431 if repo is None:
431 432 baseui = ui
432 433 else:
433 434 baseui = repo.baseui
434 435 repo = localrepo.localrepository(baseui, tmpdir, True)
435 436 ui.setconfig('keyword', fn, '', 'keyword')
436 437 svn = ui.configbool('keywordset', 'svn')
437 438 # explicitly set keywordset for demo output
438 439 ui.setconfig('keywordset', 'svn', svn, 'keyword')
439 440
440 441 uikwmaps = ui.configitems('keywordmaps')
441 442 if args or opts.get(r'rcfile'):
442 443 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
443 444 if uikwmaps:
444 445 ui.status(_('\textending current template maps\n'))
445 446 if opts.get(r'default') or not uikwmaps:
446 447 if svn:
447 448 ui.status(_('\toverriding default svn keywordset\n'))
448 449 else:
449 450 ui.status(_('\toverriding default cvs keywordset\n'))
450 451 if opts.get(r'rcfile'):
451 452 ui.readconfig(opts.get('rcfile'))
452 453 if args:
453 454 # simulate hgrc parsing
454 455 rcmaps = '[keywordmaps]\n%s\n' % '\n'.join(args)
455 456 repo.vfs.write('hgrc', rcmaps)
456 457 ui.readconfig(repo.vfs.join('hgrc'))
457 458 kwmaps = dict(ui.configitems('keywordmaps'))
458 459 elif opts.get(r'default'):
459 460 if svn:
460 461 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
461 462 else:
462 463 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
463 464 kwmaps = _defaultkwmaps(ui)
464 465 if uikwmaps:
465 466 ui.status(_('\tdisabling current template maps\n'))
466 467 for k, v in kwmaps.iteritems():
467 468 ui.setconfig('keywordmaps', k, v, 'keyword')
468 469 else:
469 470 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
470 471 if uikwmaps:
471 472 kwmaps = dict(uikwmaps)
472 473 else:
473 474 kwmaps = _defaultkwmaps(ui)
474 475
475 476 uisetup(ui)
476 477 reposetup(ui, repo)
477 478 ui.write(('[extensions]\nkeyword =\n'))
478 479 demoitems('keyword', ui.configitems('keyword'))
479 480 demoitems('keywordset', ui.configitems('keywordset'))
480 481 demoitems('keywordmaps', kwmaps.iteritems())
481 482 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
482 483 repo.wvfs.write(fn, keywords)
483 484 repo[None].add([fn])
484 485 ui.note(_('\nkeywords written to %s:\n') % fn)
485 486 ui.note(keywords)
486 487 with repo.wlock():
487 488 repo.dirstate.setbranch('demobranch')
488 489 for name, cmd in ui.configitems('hooks'):
489 490 if name.split('.', 1)[0].find('commit') > -1:
490 491 repo.ui.setconfig('hooks', name, '', 'keyword')
491 492 msg = _('hg keyword configuration and expansion example')
492 493 ui.note(("hg ci -m '%s'\n" % msg))
493 494 repo.commit(text=msg)
494 495 ui.status(_('\n\tkeywords expanded\n'))
495 496 ui.write(repo.wread(fn))
496 497 repo.wvfs.rmtree(repo.root)
497 498
498 499 @command('kwexpand',
499 500 cmdutil.walkopts,
500 501 _('hg kwexpand [OPTION]... [FILE]...'),
501 502 inferrepo=True)
502 503 def expand(ui, repo, *pats, **opts):
503 504 '''expand keywords in the working directory
504 505
505 506 Run after (re)enabling keyword expansion.
506 507
507 508 kwexpand refuses to run if given files contain local changes.
508 509 '''
509 510 # 3rd argument sets expansion to True
510 511 _kwfwrite(ui, repo, True, *pats, **opts)
511 512
512 513 @command('kwfiles',
513 514 [('A', 'all', None, _('show keyword status flags of all files')),
514 515 ('i', 'ignore', None, _('show files excluded from expansion')),
515 516 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
516 517 ] + cmdutil.walkopts,
517 518 _('hg kwfiles [OPTION]... [FILE]...'),
518 519 inferrepo=True)
519 520 def files(ui, repo, *pats, **opts):
520 521 '''show files configured for keyword expansion
521 522
522 523 List which files in the working directory are matched by the
523 524 [keyword] configuration patterns.
524 525
525 526 Useful to prevent inadvertent keyword expansion and to speed up
526 527 execution by including only files that are actual candidates for
527 528 expansion.
528 529
529 530 See :hg:`help keyword` on how to construct patterns both for
530 531 inclusion and exclusion of files.
531 532
532 533 With -A/--all and -v/--verbose the codes used to show the status
533 534 of files are::
534 535
535 536 K = keyword expansion candidate
536 537 k = keyword expansion candidate (not tracked)
537 538 I = ignored
538 539 i = ignored (not tracked)
539 540 '''
540 541 kwt = getattr(repo, '_keywordkwt', None)
541 542 wctx = repo[None]
542 543 status = _status(ui, repo, wctx, kwt, *pats, **opts)
543 544 if pats:
544 545 cwd = repo.getcwd()
545 546 else:
546 547 cwd = ''
547 548 files = []
548 549 opts = pycompat.byteskwargs(opts)
549 550 if not opts.get('unknown') or opts.get('all'):
550 551 files = sorted(status.modified + status.added + status.clean)
551 552 kwfiles = kwt.iskwfile(files, wctx)
552 553 kwdeleted = kwt.iskwfile(status.deleted, wctx)
553 554 kwunknown = kwt.iskwfile(status.unknown, wctx)
554 555 if not opts.get('ignore') or opts.get('all'):
555 556 showfiles = kwfiles, kwdeleted, kwunknown
556 557 else:
557 558 showfiles = [], [], []
558 559 if opts.get('all') or opts.get('ignore'):
559 560 showfiles += ([f for f in files if f not in kwfiles],
560 561 [f for f in status.unknown if f not in kwunknown])
561 562 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
562 563 kwstates = zip(kwlabels, 'K!kIi', showfiles)
563 564 fm = ui.formatter('kwfiles', opts)
564 565 fmt = '%.0s%s\n'
565 566 if opts.get('all') or ui.verbose:
566 567 fmt = '%s %s\n'
567 568 for kwstate, char, filenames in kwstates:
568 569 label = 'kwfiles.' + kwstate
569 570 for f in filenames:
570 571 fm.startitem()
571 572 fm.write('kwstatus path', fmt, char,
572 573 repo.pathto(f, cwd), label=label)
573 574 fm.end()
574 575
575 576 @command('kwshrink',
576 577 cmdutil.walkopts,
577 578 _('hg kwshrink [OPTION]... [FILE]...'),
578 579 inferrepo=True)
579 580 def shrink(ui, repo, *pats, **opts):
580 581 '''revert expanded keywords in the working directory
581 582
582 583 Must be run before changing/disabling active keywords.
583 584
584 585 kwshrink refuses to run if given files contain local changes.
585 586 '''
586 587 # 3rd argument sets expansion to False
587 588 _kwfwrite(ui, repo, False, *pats, **opts)
588 589
589 590 # monkeypatches
590 591
591 592 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
592 593 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
593 594 rejects or conflicts due to expanded keywords in working dir.'''
594 595 orig(self, ui, gp, backend, store, eolmode)
595 596 kwt = getattr(getattr(backend, 'repo', None), '_keywordkwt', None)
596 597 if kwt:
597 598 # shrink keywords read from working dir
598 599 self.lines = kwt.shrinklines(self.fname, self.lines)
599 600
600 601 def kwdiff(orig, repo, *args, **kwargs):
601 602 '''Monkeypatch patch.diff to avoid expansion.'''
602 603 kwt = getattr(repo, '_keywordkwt', None)
603 604 if kwt:
604 605 restrict = kwt.restrict
605 606 kwt.restrict = True
606 607 try:
607 608 for chunk in orig(repo, *args, **kwargs):
608 609 yield chunk
609 610 finally:
610 611 if kwt:
611 612 kwt.restrict = restrict
612 613
613 614 def kwweb_skip(orig, web, req, tmpl):
614 615 '''Wraps webcommands.x turning off keyword expansion.'''
615 616 kwt = getattr(web.repo, '_keywordkwt', None)
616 617 if kwt:
617 618 origmatch = kwt.match
618 619 kwt.match = util.never
619 620 try:
620 621 for chunk in orig(web, req, tmpl):
621 622 yield chunk
622 623 finally:
623 624 if kwt:
624 625 kwt.match = origmatch
625 626
626 627 def kw_amend(orig, ui, repo, old, extra, pats, opts):
627 628 '''Wraps cmdutil.amend expanding keywords after amend.'''
628 629 kwt = getattr(repo, '_keywordkwt', None)
629 630 if kwt is None:
630 631 return orig(ui, repo, old, extra, pats, opts)
631 632 with repo.wlock():
632 633 kwt.postcommit = True
633 634 newid = orig(ui, repo, old, extra, pats, opts)
634 635 if newid != old.node():
635 636 ctx = repo[newid]
636 637 kwt.restrict = True
637 638 kwt.overwrite(ctx, ctx.files(), False, True)
638 639 kwt.restrict = False
639 640 return newid
640 641
641 642 def kw_copy(orig, ui, repo, pats, opts, rename=False):
642 643 '''Wraps cmdutil.copy so that copy/rename destinations do not
643 644 contain expanded keywords.
644 645 Note that the source of a regular file destination may also be a
645 646 symlink:
646 647 hg cp sym x -> x is symlink
647 648 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
648 649 For the latter we have to follow the symlink to find out whether its
649 650 target is configured for expansion and we therefore must unexpand the
650 651 keywords in the destination.'''
651 652 kwt = getattr(repo, '_keywordkwt', None)
652 653 if kwt is None:
653 654 return orig(ui, repo, pats, opts, rename)
654 655 with repo.wlock():
655 656 orig(ui, repo, pats, opts, rename)
656 657 if opts.get('dry_run'):
657 658 return
658 659 wctx = repo[None]
659 660 cwd = repo.getcwd()
660 661
661 662 def haskwsource(dest):
662 663 '''Returns true if dest is a regular file and configured for
663 664 expansion or a symlink which points to a file configured for
664 665 expansion. '''
665 666 source = repo.dirstate.copied(dest)
666 667 if 'l' in wctx.flags(source):
667 668 source = pathutil.canonpath(repo.root, cwd,
668 669 os.path.realpath(source))
669 670 return kwt.match(source)
670 671
671 672 candidates = [f for f in repo.dirstate.copies() if
672 673 'l' not in wctx.flags(f) and haskwsource(f)]
673 674 kwt.overwrite(wctx, candidates, False, False)
674 675
675 676 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
676 677 '''Wraps record.dorecord expanding keywords after recording.'''
677 678 kwt = getattr(repo, '_keywordkwt', None)
678 679 if kwt is None:
679 680 return orig(ui, repo, commitfunc, *pats, **opts)
680 681 with repo.wlock():
681 682 # record returns 0 even when nothing has changed
682 683 # therefore compare nodes before and after
683 684 kwt.postcommit = True
684 685 ctx = repo['.']
685 686 wstatus = ctx.status()
686 687 ret = orig(ui, repo, commitfunc, *pats, **opts)
687 688 recctx = repo['.']
688 689 if ctx != recctx:
689 690 modified, added = _preselect(wstatus, recctx.files())
690 691 kwt.restrict = False
691 692 kwt.overwrite(recctx, modified, False, True)
692 693 kwt.overwrite(recctx, added, False, True, True)
693 694 kwt.restrict = True
694 695 return ret
695 696
696 697 def kwfilectx_cmp(orig, self, fctx):
697 698 if fctx._customcmp:
698 699 return fctx.cmp(self)
699 700 kwt = getattr(self._repo, '_keywordkwt', None)
700 701 if kwt is None:
701 702 return orig(self, fctx)
702 703 # keyword affects data size, comparing wdir and filelog size does
703 704 # not make sense
704 705 if (fctx._filenode is None and
705 706 (self._repo._encodefilterpats or
706 707 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
707 708 self.size() - 4 == fctx.size()) or
708 709 self.size() == fctx.size()):
709 710 return self._filelog.cmp(self._filenode, fctx.data())
710 711 return True
711 712
712 713 def uisetup(ui):
713 714 ''' Monkeypatches dispatch._parse to retrieve user command.
714 715 Overrides file method to return kwfilelog instead of filelog
715 716 if file matches user configuration.
716 717 Wraps commit to overwrite configured files with updated
717 718 keyword substitutions.
718 719 Monkeypatches patch and webcommands.'''
719 720
720 721 def kwdispatch_parse(orig, ui, args):
721 722 '''Monkeypatch dispatch._parse to obtain running hg command.'''
722 723 cmd, func, args, options, cmdoptions = orig(ui, args)
723 724 kwtools['hgcmd'] = cmd
724 725 return cmd, func, args, options, cmdoptions
725 726
726 727 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
727 728
728 729 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
729 730 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
730 731 extensions.wrapfunction(patch, 'diff', kwdiff)
731 732 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
732 733 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
733 734 extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
734 735 for c in nokwwebcommands.split():
735 736 extensions.wrapfunction(webcommands, c, kwweb_skip)
736 737
737 738 def reposetup(ui, repo):
738 739 '''Sets up repo as kwrepo for keyword substitution.'''
739 740
740 741 try:
741 742 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
742 743 or '.hg' in util.splitpath(repo.root)
743 744 or repo._url.startswith('bundle:')):
744 745 return
745 746 except AttributeError:
746 747 pass
747 748
748 749 inc, exc = [], ['.hg*']
749 750 for pat, opt in ui.configitems('keyword'):
750 751 if opt != 'ignore':
751 752 inc.append(pat)
752 753 else:
753 754 exc.append(pat)
754 755 if not inc:
755 756 return
756 757
757 758 kwt = kwtemplater(ui, repo, inc, exc)
758 759
759 760 class kwrepo(repo.__class__):
760 761 def file(self, f):
761 762 if f[0] == '/':
762 763 f = f[1:]
763 764 return kwfilelog(self.svfs, kwt, f)
764 765
765 766 def wread(self, filename):
766 767 data = super(kwrepo, self).wread(filename)
767 768 return kwt.wread(filename, data)
768 769
769 770 def commit(self, *args, **opts):
770 771 # use custom commitctx for user commands
771 772 # other extensions can still wrap repo.commitctx directly
772 773 self.commitctx = self.kwcommitctx
773 774 try:
774 775 return super(kwrepo, self).commit(*args, **opts)
775 776 finally:
776 777 del self.commitctx
777 778
778 779 def kwcommitctx(self, ctx, error=False):
779 780 n = super(kwrepo, self).commitctx(ctx, error)
780 781 # no lock needed, only called from repo.commit() which already locks
781 782 if not kwt.postcommit:
782 783 restrict = kwt.restrict
783 784 kwt.restrict = True
784 785 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
785 786 False, True)
786 787 kwt.restrict = restrict
787 788 return n
788 789
789 790 def rollback(self, dryrun=False, force=False):
790 791 with self.wlock():
791 792 origrestrict = kwt.restrict
792 793 try:
793 794 if not dryrun:
794 795 changed = self['.'].files()
795 796 ret = super(kwrepo, self).rollback(dryrun, force)
796 797 if not dryrun:
797 798 ctx = self['.']
798 799 modified, added = _preselect(ctx.status(), changed)
799 800 kwt.restrict = False
800 801 kwt.overwrite(ctx, modified, True, True)
801 802 kwt.overwrite(ctx, added, True, False)
802 803 return ret
803 804 finally:
804 805 kwt.restrict = origrestrict
805 806
806 807 repo.__class__ = kwrepo
807 808 repo._keywordkwt = kwt
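
keyword.py needs only the templater half of the rename: cmdutil.makelogtemplater becomes logcmdutil.maketemplater, with the same (ui, repo, template) signature, as the kwsub helper above shows. A minimal sketch of that expansion step in isolation; expandone and its default template string are illustrative, while the calls inside it are the ones kwsub makes::

    from mercurial import logcmdutil, templatefilters

    def expandone(ui, repo, ctx, path, tmpl='{node|short} {author|user}'):
        """render one keyword template for ctx/path (sketch of kwsub above)"""
        ct = logcmdutil.maketemplater(ui, repo, tmpl)  # was cmdutil.makelogtemplater
        ui.pushbuffer()
        ct.show(ctx, root=repo.root, file=path)
        return templatefilters.firstline(ui.popbuffer())

The rendering itself is unchanged; only the module the helper lives in differs.
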
@@ -1,3655 +1,3656 b''
1 1 # mq.py - patch queues for mercurial
2 2 #
3 3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''manage a stack of patches
9 9
10 10 This extension lets you work with a stack of patches in a Mercurial
11 11 repository. It manages two stacks of patches - all known patches, and
12 12 applied patches (subset of known patches).
13 13
14 14 Known patches are represented as patch files in the .hg/patches
15 15 directory. Applied patches are both patch files and changesets.
16 16
17 17 Common tasks (use :hg:`help COMMAND` for more details)::
18 18
19 19 create new patch qnew
20 20 import existing patch qimport
21 21
22 22 print patch series qseries
23 23 print applied patches qapplied
24 24
25 25 add known patch to applied stack qpush
26 26 remove patch from applied stack qpop
27 27 refresh contents of top applied patch qrefresh
28 28
29 29 By default, mq will automatically use git patches when required to
30 30 avoid losing file mode changes, copy records, binary files or empty
31 31 file creations or deletions. This behavior can be configured with::
32 32
33 33 [mq]
34 34 git = auto/keep/yes/no
35 35
36 36 If set to 'keep', mq will obey the [diff] section configuration while
37 37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 38 'no', mq will override the [diff] section and always generate git or
39 39 regular patches, possibly losing data in the second case.
40 40
41 41 It may be desirable for mq changesets to be kept in the secret phase (see
42 42 :hg:`help phases`), which can be enabled with the following setting::
43 43
44 44 [mq]
45 45 secret = True
46 46
47 47 You will by default be managing a patch queue named "patches". You can
48 48 create other, independent patch queues with the :hg:`qqueue` command.
49 49
50 50 If the working directory contains uncommitted files, qpush, qpop and
51 51 qgoto abort immediately. If -f/--force is used, the changes are
52 52 discarded. Setting::
53 53
54 54 [mq]
55 55 keepchanges = True
56 56
57 57 makes them behave as if --keep-changes were passed, and non-conflicting
58 58 local changes will be tolerated and preserved. If incompatible options
59 59 such as -f/--force or --exact are passed, this setting is ignored.
60 60
61 61 This extension used to provide a strip command. This command now lives
62 62 in the strip extension.
63 63 '''
64 64
65 65 from __future__ import absolute_import, print_function
66 66
67 67 import errno
68 68 import os
69 69 import re
70 70 import shutil
71 71 from mercurial.i18n import _
72 72 from mercurial.node import (
73 73 bin,
74 74 hex,
75 75 nullid,
76 76 nullrev,
77 77 short,
78 78 )
79 79 from mercurial import (
80 80 cmdutil,
81 81 commands,
82 82 dirstateguard,
83 83 encoding,
84 84 error,
85 85 extensions,
86 86 hg,
87 87 localrepo,
88 88 lock as lockmod,
89 logcmdutil,
89 90 patch as patchmod,
90 91 phases,
91 92 pycompat,
92 93 registrar,
93 94 revsetlang,
94 95 scmutil,
95 96 smartset,
96 97 subrepo,
97 98 util,
98 99 vfs as vfsmod,
99 100 )
100 101
101 102 release = lockmod.release
102 103 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
103 104
104 105 cmdtable = {}
105 106 command = registrar.command(cmdtable)
106 107 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
107 108 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
108 109 # be specifying the version(s) of Mercurial they are tested with, or
109 110 # leave the attribute unspecified.
110 111 testedwith = 'ships-with-hg-core'
111 112
112 113 configtable = {}
113 114 configitem = registrar.configitem(configtable)
114 115
115 116 configitem('mq', 'git',
116 117 default='auto',
117 118 )
118 119 configitem('mq', 'keepchanges',
119 120 default=False,
120 121 )
121 122 configitem('mq', 'plain',
122 123 default=False,
123 124 )
124 125 configitem('mq', 'secret',
125 126 default=False,
126 127 )
127 128
128 129 # force load strip extension formerly included in mq and import some utility
129 130 try:
130 131 stripext = extensions.find('strip')
131 132 except KeyError:
132 133 # note: load is lazy so we could avoid the try-except,
133 134 # but I (marmoute) prefer this explicit code.
134 135 class dummyui(object):
135 136 def debug(self, msg):
136 137 pass
137 138 stripext = extensions.load(dummyui(), 'strip', '')
138 139
139 140 strip = stripext.strip
140 141 checksubstate = stripext.checksubstate
141 142 checklocalchanges = stripext.checklocalchanges
142 143
143 144
144 145 # Patch names look like unix file names.
145 146 # They must be joinable with the queue directory and result in the patch path.
146 147 normname = util.normpath
147 148
148 149 class statusentry(object):
149 150 def __init__(self, node, name):
150 151 self.node, self.name = node, name
151 152
152 153 def __bytes__(self):
153 154 return hex(self.node) + ':' + self.name
154 155
155 156 __str__ = encoding.strmethod(__bytes__)
156 157 __repr__ = encoding.strmethod(__bytes__)
157 158
158 159 # The order of the headers in 'hg export' HG patches:
159 160 HGHEADERS = [
160 161 # '# HG changeset patch',
161 162 '# User ',
162 163 '# Date ',
163 164 '# ',
164 165 '# Branch ',
165 166 '# Node ID ',
166 167 '# Parent ', # can occur twice for merges - but that is not relevant for mq
167 168 ]
168 169 # The order of headers in plain 'mail style' patches:
169 170 PLAINHEADERS = {
170 171 'from': 0,
171 172 'date': 1,
172 173 'subject': 2,
173 174 }
174 175
175 176 def inserthgheader(lines, header, value):
176 177 """Assuming lines contains a HG patch header, add a header line with value.
177 178 >>> try: inserthgheader([], b'# Date ', b'z')
178 179 ... except ValueError as inst: print("oops")
179 180 oops
180 181 >>> inserthgheader([b'# HG changeset patch'], b'# Date ', b'z')
181 182 ['# HG changeset patch', '# Date z']
182 183 >>> inserthgheader([b'# HG changeset patch', b''], b'# Date ', b'z')
183 184 ['# HG changeset patch', '# Date z', '']
184 185 >>> inserthgheader([b'# HG changeset patch', b'# User y'], b'# Date ', b'z')
185 186 ['# HG changeset patch', '# User y', '# Date z']
186 187 >>> inserthgheader([b'# HG changeset patch', b'# Date x', b'# User y'],
187 188 ... b'# User ', b'z')
188 189 ['# HG changeset patch', '# Date x', '# User z']
189 190 >>> inserthgheader([b'# HG changeset patch', b'# Date y'], b'# Date ', b'z')
190 191 ['# HG changeset patch', '# Date z']
191 192 >>> inserthgheader([b'# HG changeset patch', b'', b'# Date y'],
192 193 ... b'# Date ', b'z')
193 194 ['# HG changeset patch', '# Date z', '', '# Date y']
194 195 >>> inserthgheader([b'# HG changeset patch', b'# Parent y'],
195 196 ... b'# Date ', b'z')
196 197 ['# HG changeset patch', '# Date z', '# Parent y']
197 198 """
198 199 start = lines.index('# HG changeset patch') + 1
199 200 newindex = HGHEADERS.index(header)
200 201 bestpos = len(lines)
201 202 for i in range(start, len(lines)):
202 203 line = lines[i]
203 204 if not line.startswith('# '):
204 205 bestpos = min(bestpos, i)
205 206 break
206 207 for lineindex, h in enumerate(HGHEADERS):
207 208 if line.startswith(h):
208 209 if lineindex == newindex:
209 210 lines[i] = header + value
210 211 return lines
211 212 if lineindex > newindex:
212 213 bestpos = min(bestpos, i)
213 214 break # next line
214 215 lines.insert(bestpos, header + value)
215 216 return lines
216 217
217 218 def insertplainheader(lines, header, value):
218 219 """For lines containing a plain patch header, add a header line with value.
219 220 >>> insertplainheader([], b'Date', b'z')
220 221 ['Date: z']
221 222 >>> insertplainheader([b''], b'Date', b'z')
222 223 ['Date: z', '']
223 224 >>> insertplainheader([b'x'], b'Date', b'z')
224 225 ['Date: z', '', 'x']
225 226 >>> insertplainheader([b'From: y', b'x'], b'Date', b'z')
226 227 ['From: y', 'Date: z', '', 'x']
227 228 >>> insertplainheader([b' date : x', b' from : y', b''], b'From', b'z')
228 229 [' date : x', 'From: z', '']
229 230 >>> insertplainheader([b'', b'Date: y'], b'Date', b'z')
230 231 ['Date: z', '', 'Date: y']
231 232 >>> insertplainheader([b'foo: bar', b'DATE: z', b'x'], b'From', b'y')
232 233 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
233 234 """
234 235 newprio = PLAINHEADERS[header.lower()]
235 236 bestpos = len(lines)
236 237 for i, line in enumerate(lines):
237 238 if ':' in line:
238 239 lheader = line.split(':', 1)[0].strip().lower()
239 240 lprio = PLAINHEADERS.get(lheader, newprio + 1)
240 241 if lprio == newprio:
241 242 lines[i] = '%s: %s' % (header, value)
242 243 return lines
243 244 if lprio > newprio and i < bestpos:
244 245 bestpos = i
245 246 else:
246 247 if line:
247 248 lines.insert(i, '')
248 249 if i < bestpos:
249 250 bestpos = i
250 251 break
251 252 lines.insert(bestpos, '%s: %s' % (header, value))
252 253 return lines
253 254
254 255 class patchheader(object):
255 256 def __init__(self, pf, plainmode=False):
256 257 def eatdiff(lines):
257 258 while lines:
258 259 l = lines[-1]
259 260 if (l.startswith("diff -") or
260 261 l.startswith("Index:") or
261 262 l.startswith("===========")):
262 263 del lines[-1]
263 264 else:
264 265 break
265 266 def eatempty(lines):
266 267 while lines:
267 268 if not lines[-1].strip():
268 269 del lines[-1]
269 270 else:
270 271 break
271 272
272 273 message = []
273 274 comments = []
274 275 user = None
275 276 date = None
276 277 parent = None
277 278 format = None
278 279 subject = None
279 280 branch = None
280 281 nodeid = None
281 282 diffstart = 0
282 283
283 284 for line in file(pf):
284 285 line = line.rstrip()
285 286 if (line.startswith('diff --git')
286 287 or (diffstart and line.startswith('+++ '))):
287 288 diffstart = 2
288 289 break
289 290 diffstart = 0 # reset
290 291 if line.startswith("--- "):
291 292 diffstart = 1
292 293 continue
293 294 elif format == "hgpatch":
294 295 # parse values when importing the result of an hg export
295 296 if line.startswith("# User "):
296 297 user = line[7:]
297 298 elif line.startswith("# Date "):
298 299 date = line[7:]
299 300 elif line.startswith("# Parent "):
300 301 parent = line[9:].lstrip() # handle double trailing space
301 302 elif line.startswith("# Branch "):
302 303 branch = line[9:]
303 304 elif line.startswith("# Node ID "):
304 305 nodeid = line[10:]
305 306 elif not line.startswith("# ") and line:
306 307 message.append(line)
307 308 format = None
308 309 elif line == '# HG changeset patch':
309 310 message = []
310 311 format = "hgpatch"
311 312 elif (format != "tagdone" and (line.startswith("Subject: ") or
312 313 line.startswith("subject: "))):
313 314 subject = line[9:]
314 315 format = "tag"
315 316 elif (format != "tagdone" and (line.startswith("From: ") or
316 317 line.startswith("from: "))):
317 318 user = line[6:]
318 319 format = "tag"
319 320 elif (format != "tagdone" and (line.startswith("Date: ") or
320 321 line.startswith("date: "))):
321 322 date = line[6:]
322 323 format = "tag"
323 324 elif format == "tag" and line == "":
324 325                 # when looking for tags (Subject:, From:, etc.), they
325 326                 # end once a blank line is found in the source
326 327 format = "tagdone"
327 328 elif message or line:
328 329 message.append(line)
329 330 comments.append(line)
330 331
331 332 eatdiff(message)
332 333 eatdiff(comments)
333 334 # Remember the exact starting line of the patch diffs before consuming
334 335 # empty lines, for external use by TortoiseHg and others
335 336 self.diffstartline = len(comments)
336 337 eatempty(message)
337 338 eatempty(comments)
338 339
339 340 # make sure message isn't empty
340 341 if format and format.startswith("tag") and subject:
341 342 message.insert(0, subject)
342 343
343 344 self.message = message
344 345 self.comments = comments
345 346 self.user = user
346 347 self.date = date
347 348 self.parent = parent
348 349 # nodeid and branch are for external use by TortoiseHg and others
349 350 self.nodeid = nodeid
350 351 self.branch = branch
351 352 self.haspatch = diffstart > 1
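        # Note: plain-header mode is used when explicitly requested, or when
        # the patch carries no '# HG changeset patch' header but does have
        # plain 'Date:'/'From:' headers ('and' binds tighter than 'or' below).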
352 353 self.plainmode = (plainmode or
353 354 '# HG changeset patch' not in self.comments and
354 355 any(c.startswith('Date: ') or
355 356 c.startswith('From: ')
356 357 for c in self.comments))
357 358
358 359 def setuser(self, user):
359 360 try:
360 361 inserthgheader(self.comments, '# User ', user)
361 362 except ValueError:
362 363 if self.plainmode:
363 364 insertplainheader(self.comments, 'From', user)
364 365 else:
365 366 tmp = ['# HG changeset patch', '# User ' + user]
366 367 self.comments = tmp + self.comments
367 368 self.user = user
368 369
369 370 def setdate(self, date):
370 371 try:
371 372 inserthgheader(self.comments, '# Date ', date)
372 373 except ValueError:
373 374 if self.plainmode:
374 375 insertplainheader(self.comments, 'Date', date)
375 376 else:
376 377 tmp = ['# HG changeset patch', '# Date ' + date]
377 378 self.comments = tmp + self.comments
378 379 self.date = date
379 380
380 381 def setparent(self, parent):
381 382 try:
382 383 inserthgheader(self.comments, '# Parent ', parent)
383 384 except ValueError:
384 385 if not self.plainmode:
385 386 tmp = ['# HG changeset patch', '# Parent ' + parent]
386 387 self.comments = tmp + self.comments
387 388 self.parent = parent
388 389
389 390 def setmessage(self, message):
390 391 if self.comments:
391 392 self._delmsg()
392 393 self.message = [message]
393 394 if message:
394 395 if self.plainmode and self.comments and self.comments[-1]:
395 396 self.comments.append('')
396 397 self.comments.append(message)
397 398
398 399 def __str__(self):
399 400 s = '\n'.join(self.comments).rstrip()
400 401 if not s:
401 402 return ''
402 403 return s + '\n\n'
403 404
404 405 def _delmsg(self):
405 406         '''Remove the existing message, keeping the rest of the comments fields.
406 407         If comments contains a 'subject: ' line, the message is assumed to
407 408         start with that field followed by a blank line.'''
408 409 if self.message:
409 410 subj = 'subject: ' + self.message[0].lower()
410 411 for i in xrange(len(self.comments)):
411 412 if subj == self.comments[i].lower():
412 413 del self.comments[i]
413 414 self.message = self.message[2:]
414 415 break
415 416 ci = 0
416 417 for mi in self.message:
417 418 while mi != self.comments[ci]:
418 419 ci += 1
419 420 del self.comments[ci]
420 421
421 422 def newcommit(repo, phase, *args, **kwargs):
422 423     """helper dedicated to ensuring a commit respects the mq.secret setting
423 424 
424 425     It should be used instead of repo.commit inside the mq source for operations
425 426     creating new changesets.
426 427 """
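    # Illustrative behaviour: with '[mq] secret = True' and no explicit phase
    # argument, new mq changesets are created in the secret phase; an explicit
    # phase argument always wins over the configuration.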
427 428 repo = repo.unfiltered()
428 429 if phase is None:
429 430 if repo.ui.configbool('mq', 'secret'):
430 431 phase = phases.secret
431 432 overrides = {('ui', 'allowemptycommit'): True}
432 433 if phase is not None:
433 434 overrides[('phases', 'new-commit')] = phase
434 435 with repo.ui.configoverride(overrides, 'mq'):
435 436 repo.ui.setconfig('ui', 'allowemptycommit', True)
436 437 return repo.commit(*args, **kwargs)
437 438
438 439 class AbortNoCleanup(error.Abort):
439 440 pass
440 441
441 442 class queue(object):
442 443 def __init__(self, ui, baseui, path, patchdir=None):
443 444 self.basepath = path
444 445 try:
445 446 fh = open(os.path.join(path, 'patches.queue'))
446 447 cur = fh.read().rstrip()
447 448 fh.close()
448 449 if not cur:
449 450 curpath = os.path.join(path, 'patches')
450 451 else:
451 452 curpath = os.path.join(path, 'patches-' + cur)
452 453 except IOError:
453 454 curpath = os.path.join(path, 'patches')
454 455 self.path = patchdir or curpath
455 456 self.opener = vfsmod.vfs(self.path)
456 457 self.ui = ui
457 458 self.baseui = baseui
458 459 self.applieddirty = False
459 460 self.seriesdirty = False
460 461 self.added = []
461 462 self.seriespath = "series"
462 463 self.statuspath = "status"
463 464 self.guardspath = "guards"
464 465 self.activeguards = None
465 466 self.guardsdirty = False
466 467 # Handle mq.git as a bool with extended values
467 468 gitmode = ui.config('mq', 'git').lower()
468 469 boolmode = util.parsebool(gitmode)
469 470 if boolmode is not None:
470 471 if boolmode:
471 472 gitmode = 'yes'
472 473 else:
473 474 gitmode = 'no'
474 475 self.gitmode = gitmode
475 476 # deprecated config: mq.plain
476 477 self.plainmode = ui.configbool('mq', 'plain')
477 478 self.checkapplied = True
478 479
479 480 @util.propertycache
480 481 def applied(self):
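        # Each non-empty line of the status file is '<hex node>:<patch name>';
        # the split below is on the first ':' only, so patch names containing
        # colons are preserved intact.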
481 482 def parselines(lines):
482 483 for l in lines:
483 484 entry = l.split(':', 1)
484 485 if len(entry) > 1:
485 486 n, name = entry
486 487 yield statusentry(bin(n), name)
487 488 elif l.strip():
488 489                     self.ui.warn(_('malformed mq status line: %s\n') % entry)
489 490 # else we ignore empty lines
490 491 try:
491 492 lines = self.opener.read(self.statuspath).splitlines()
492 493 return list(parselines(lines))
493 494 except IOError as e:
494 495 if e.errno == errno.ENOENT:
495 496 return []
496 497 raise
497 498
498 499 @util.propertycache
499 500 def fullseries(self):
500 501 try:
501 502 return self.opener.read(self.seriespath).splitlines()
502 503 except IOError as e:
503 504 if e.errno == errno.ENOENT:
504 505 return []
505 506 raise
506 507
507 508 @util.propertycache
508 509 def series(self):
509 510 self.parseseries()
510 511 return self.series
511 512
512 513 @util.propertycache
513 514 def seriesguards(self):
514 515 self.parseseries()
515 516 return self.seriesguards
516 517
517 518 def invalidate(self):
518 519 for a in 'applied fullseries series seriesguards'.split():
519 520 if a in self.__dict__:
520 521 delattr(self, a)
521 522 self.applieddirty = False
522 523 self.seriesdirty = False
523 524 self.guardsdirty = False
524 525 self.activeguards = None
525 526
526 527 def diffopts(self, opts=None, patchfn=None, plain=False):
527 528 """Return diff options tweaked for this mq use, possibly upgrading to
528 529 git format, and possibly plain and without lossy options."""
529 530 diffopts = patchmod.difffeatureopts(self.ui, opts,
530 531 git=True, whitespace=not plain, formatchanging=not plain)
531 532 if self.gitmode == 'auto':
532 533 diffopts.upgrade = True
533 534 elif self.gitmode == 'keep':
534 535 pass
535 536 elif self.gitmode in ('yes', 'no'):
536 537 diffopts.git = self.gitmode == 'yes'
537 538 else:
538 539 raise error.Abort(_('mq.git option can be auto/keep/yes/no'
539 540                                     ', got %s') % self.gitmode)
540 541 if patchfn:
541 542 diffopts = self.patchopts(diffopts, patchfn)
542 543 return diffopts
543 544
544 545 def patchopts(self, diffopts, *patches):
545 546 """Return a copy of input diff options with git set to true if
546 547 referenced patch is a git patch and should be preserved as such.
547 548 """
548 549 diffopts = diffopts.copy()
549 550 if not diffopts.git and self.gitmode == 'keep':
550 551 for patchfn in patches:
551 552 patchf = self.opener(patchfn, 'r')
552 553 # if the patch was a git patch, refresh it as a git patch
553 554 for line in patchf:
554 555 if line.startswith('diff --git'):
555 556 diffopts.git = True
556 557 break
557 558 patchf.close()
558 559 return diffopts
559 560
560 561 def join(self, *p):
561 562 return os.path.join(self.path, *p)
562 563
563 564 def findseries(self, patch):
564 565 def matchpatch(l):
565 566 l = l.split('#', 1)[0]
566 567 return l.strip() == patch
567 568 for index, l in enumerate(self.fullseries):
568 569 if matchpatch(l):
569 570 return index
570 571 return None
571 572
572 573 guard_re = re.compile(br'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
573 574
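    # A series file entry may carry guards after '#' markers, e.g. a
    # (hypothetical) line 'my-fix.patch #+stable #-experimental'; guard_re
    # extracts ['+stable', '-experimental'] from the comment part.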
574 575 def parseseries(self):
575 576 self.series = []
576 577 self.seriesguards = []
577 578 for l in self.fullseries:
578 579 h = l.find('#')
579 580 if h == -1:
580 581 patch = l
581 582 comment = ''
582 583 elif h == 0:
583 584 continue
584 585 else:
585 586 patch = l[:h]
586 587 comment = l[h:]
587 588 patch = patch.strip()
588 589 if patch:
589 590 if patch in self.series:
590 591 raise error.Abort(_('%s appears more than once in %s') %
591 592 (patch, self.join(self.seriespath)))
592 593 self.series.append(patch)
593 594 self.seriesguards.append(self.guard_re.findall(comment))
594 595
595 596 def checkguard(self, guard):
596 597 if not guard:
597 598 return _('guard cannot be an empty string')
598 599 bad_chars = '# \t\r\n\f'
599 600 first = guard[0]
600 601 if first in '-+':
601 602 return (_('guard %r starts with invalid character: %r') %
602 603 (guard, first))
603 604 for c in bad_chars:
604 605 if c in guard:
605 606 return _('invalid character in guard %r: %r') % (guard, c)
606 607
607 608 def setactive(self, guards):
608 609 for guard in guards:
609 610 bad = self.checkguard(guard)
610 611 if bad:
611 612 raise error.Abort(bad)
612 613 guards = sorted(set(guards))
613 614 self.ui.debug('active guards: %s\n' % ' '.join(guards))
614 615 self.activeguards = guards
615 616 self.guardsdirty = True
616 617
617 618 def active(self):
618 619 if self.activeguards is None:
619 620 self.activeguards = []
620 621 try:
621 622 guards = self.opener.read(self.guardspath).split()
622 623 except IOError as err:
623 624 if err.errno != errno.ENOENT:
624 625 raise
625 626 guards = []
626 627 for i, guard in enumerate(guards):
627 628 bad = self.checkguard(guard)
628 629 if bad:
629 630 self.ui.warn('%s:%d: %s\n' %
630 631 (self.join(self.guardspath), i + 1, bad))
631 632 else:
632 633 self.activeguards.append(guard)
633 634 return self.activeguards
634 635
635 636 def setguards(self, idx, guards):
636 637 for g in guards:
637 638 if len(g) < 2:
638 639 raise error.Abort(_('guard %r too short') % g)
639 640 if g[0] not in '-+':
640 641 raise error.Abort(_('guard %r starts with invalid char') % g)
641 642 bad = self.checkguard(g[1:])
642 643 if bad:
643 644 raise error.Abort(bad)
644 645 drop = self.guard_re.sub('', self.fullseries[idx])
645 646 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
646 647 self.parseseries()
647 648 self.seriesdirty = True
648 649
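    # Guard semantics, in brief: a patch with positive guards ('+foo') is
    # pushable only when at least one of them is active (see setactive); a
    # negative guard ('-bar') makes it unpushable while 'bar' is active;
    # unguarded patches are always pushable.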
649 650 def pushable(self, idx):
650 651 if isinstance(idx, str):
651 652 idx = self.series.index(idx)
652 653 patchguards = self.seriesguards[idx]
653 654 if not patchguards:
654 655 return True, None
655 656 guards = self.active()
656 657 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
657 658 if exactneg:
658 659 return False, repr(exactneg[0])
659 660 pos = [g for g in patchguards if g[0] == '+']
660 661 exactpos = [g for g in pos if g[1:] in guards]
661 662 if pos:
662 663 if exactpos:
663 664 return True, repr(exactpos[0])
664 665 return False, ' '.join(map(repr, pos))
665 666 return True, ''
666 667
667 668 def explainpushable(self, idx, all_patches=False):
668 669 if all_patches:
669 670 write = self.ui.write
670 671 else:
671 672 write = self.ui.warn
672 673
673 674 if all_patches or self.ui.verbose:
674 675 if isinstance(idx, str):
675 676 idx = self.series.index(idx)
676 677 pushable, why = self.pushable(idx)
677 678 if all_patches and pushable:
678 679 if why is None:
679 680 write(_('allowing %s - no guards in effect\n') %
680 681 self.series[idx])
681 682 else:
682 683 if not why:
683 684 write(_('allowing %s - no matching negative guards\n') %
684 685 self.series[idx])
685 686 else:
686 687 write(_('allowing %s - guarded by %s\n') %
687 688 (self.series[idx], why))
688 689 if not pushable:
689 690 if why:
690 691 write(_('skipping %s - guarded by %s\n') %
691 692 (self.series[idx], why))
692 693 else:
693 694 write(_('skipping %s - no matching guards\n') %
694 695 self.series[idx])
695 696
696 697 def savedirty(self):
697 698 def writelist(items, path):
698 699 fp = self.opener(path, 'wb')
699 700 for i in items:
700 701 fp.write("%s\n" % i)
701 702 fp.close()
702 703 if self.applieddirty:
703 704 writelist(map(bytes, self.applied), self.statuspath)
704 705 self.applieddirty = False
705 706 if self.seriesdirty:
706 707 writelist(self.fullseries, self.seriespath)
707 708 self.seriesdirty = False
708 709 if self.guardsdirty:
709 710 writelist(self.activeguards, self.guardspath)
710 711 self.guardsdirty = False
711 712 if self.added:
712 713 qrepo = self.qrepo()
713 714 if qrepo:
714 715 qrepo[None].add(f for f in self.added if f not in qrepo[None])
715 716 self.added = []
716 717
717 718 def removeundo(self, repo):
718 719 undo = repo.sjoin('undo')
719 720 if not os.path.exists(undo):
720 721 return
721 722 try:
722 723 os.unlink(undo)
723 724 except OSError as inst:
724 725 self.ui.warn(_('error removing undo: %s\n') % str(inst))
725 726
726 727 def backup(self, repo, files, copy=False):
727 728 # backup local changes in --force case
728 729 for f in sorted(files):
729 730 absf = repo.wjoin(f)
730 731 if os.path.lexists(absf):
731 732 self.ui.note(_('saving current version of %s as %s\n') %
732 733 (f, scmutil.origpath(self.ui, repo, f)))
733 734
734 735 absorig = scmutil.origpath(self.ui, repo, absf)
735 736 if copy:
736 737 util.copyfile(absf, absorig)
737 738 else:
738 739 util.rename(absf, absorig)
739 740
740 741 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
741 742 fp=None, changes=None, opts=None):
742 743 if opts is None:
743 744 opts = {}
744 745 stat = opts.get('stat')
745 746 m = scmutil.match(repo[node1], files, opts)
746 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
747 changes, stat, fp)
747 logcmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
748 changes, stat, fp)
748 749
749 750 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
750 751 # first try just applying the patch
751 752 (err, n) = self.apply(repo, [patch], update_status=False,
752 753 strict=True, merge=rev)
753 754
754 755 if err == 0:
755 756 return (err, n)
756 757
757 758 if n is None:
758 759 raise error.Abort(_("apply failed for patch %s") % patch)
759 760
760 761 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
761 762
762 763 # apply failed, strip away that rev and merge.
763 764 hg.clean(repo, head)
764 765 strip(self.ui, repo, [n], update=False, backup=False)
765 766
766 767 ctx = repo[rev]
767 768 ret = hg.merge(repo, rev)
768 769 if ret:
769 770 raise error.Abort(_("update returned %d") % ret)
770 771 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
771 772 if n is None:
772 773 raise error.Abort(_("repo commit failed"))
773 774 try:
774 775 ph = patchheader(mergeq.join(patch), self.plainmode)
775 776 except Exception:
776 777 raise error.Abort(_("unable to read %s") % patch)
777 778
778 779 diffopts = self.patchopts(diffopts, patch)
779 780 patchf = self.opener(patch, "w")
780 781 comments = str(ph)
781 782 if comments:
782 783 patchf.write(comments)
783 784 self.printdiff(repo, diffopts, head, n, fp=patchf)
784 785 patchf.close()
785 786 self.removeundo(repo)
786 787 return (0, n)
787 788
788 789 def qparents(self, repo, rev=None):
789 790         """return the mq-handled parent or p1
790 791 
791 792         In some cases where mq ends up being the parent of a merge, the
792 793         appropriate parent may be p2
793 794         (e.g. an in-progress merge started with mq disabled).
794 795 
795 796         If no parents are managed by mq, p1 is returned.
796 797 """
797 798 if rev is None:
798 799 (p1, p2) = repo.dirstate.parents()
799 800 if p2 == nullid:
800 801 return p1
801 802 if not self.applied:
802 803 return None
803 804 return self.applied[-1].node
804 805 p1, p2 = repo.changelog.parents(rev)
805 806 if p2 != nullid and p2 in [x.node for x in self.applied]:
806 807 return p2
807 808 return p1
808 809
809 810 def mergepatch(self, repo, mergeq, series, diffopts):
810 811 if not self.applied:
811 812 # each of the patches merged in will have two parents. This
812 813 # can confuse the qrefresh, qdiff, and strip code because it
813 814 # needs to know which parent is actually in the patch queue.
814 815             # So we insert a merge marker with only one parent; this way
815 816             # the first patch in the queue is never a merge patch.
816 817 #
817 818 pname = ".hg.patches.merge.marker"
818 819 n = newcommit(repo, None, '[mq]: merge marker', force=True)
819 820 self.removeundo(repo)
820 821 self.applied.append(statusentry(n, pname))
821 822 self.applieddirty = True
822 823
823 824 head = self.qparents(repo)
824 825
825 826 for patch in series:
826 827 patch = mergeq.lookup(patch, strict=True)
827 828 if not patch:
828 829 self.ui.warn(_("patch %s does not exist\n") % patch)
829 830 return (1, None)
830 831 pushable, reason = self.pushable(patch)
831 832 if not pushable:
832 833 self.explainpushable(patch, all_patches=True)
833 834 continue
834 835 info = mergeq.isapplied(patch)
835 836 if not info:
836 837 self.ui.warn(_("patch %s is not applied\n") % patch)
837 838 return (1, None)
838 839 rev = info[1]
839 840 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
840 841 if head:
841 842 self.applied.append(statusentry(head, patch))
842 843 self.applieddirty = True
843 844 if err:
844 845 return (err, head)
845 846 self.savedirty()
846 847 return (0, head)
847 848
848 849 def patch(self, repo, patchfile):
849 850 '''Apply patchfile to the working directory.
850 851 patchfile: name of patch file'''
851 852 files = set()
852 853 try:
853 854 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
854 855 files=files, eolmode=None)
855 856 return (True, list(files), fuzz)
856 857 except Exception as inst:
857 858 self.ui.note(str(inst) + '\n')
858 859 if not self.ui.verbose:
859 860 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
860 861 self.ui.traceback()
861 862 return (False, list(files), False)
862 863
863 864 def apply(self, repo, series, list=False, update_status=True,
864 865 strict=False, patchdir=None, merge=None, all_files=None,
865 866 tobackup=None, keepchanges=False):
866 867 wlock = lock = tr = None
867 868 try:
868 869 wlock = repo.wlock()
869 870 lock = repo.lock()
870 871 tr = repo.transaction("qpush")
871 872 try:
872 873 ret = self._apply(repo, series, list, update_status,
873 874 strict, patchdir, merge, all_files=all_files,
874 875 tobackup=tobackup, keepchanges=keepchanges)
875 876 tr.close()
876 877 self.savedirty()
877 878 return ret
878 879 except AbortNoCleanup:
879 880 tr.close()
880 881 self.savedirty()
881 882 raise
882 883 except: # re-raises
883 884 try:
884 885 tr.abort()
885 886 finally:
886 887 self.invalidate()
887 888 raise
888 889 finally:
889 890 release(tr, lock, wlock)
890 891 self.removeundo(repo)
891 892
892 893 def _apply(self, repo, series, list=False, update_status=True,
893 894 strict=False, patchdir=None, merge=None, all_files=None,
894 895 tobackup=None, keepchanges=False):
895 896         """returns (error, hash)
896 897 
897 898         error = 1 if the patch cannot be read, 2 if patching failed, 3 if the
898 899         patch applied with fuzz. tobackup is None or a set of files to back up
899 900         before they are modified by a patch.
900 901 """
901 902 # TODO unify with commands.py
902 903 if not patchdir:
903 904 patchdir = self.path
904 905 err = 0
905 906 n = None
906 907 for patchname in series:
907 908 pushable, reason = self.pushable(patchname)
908 909 if not pushable:
909 910 self.explainpushable(patchname, all_patches=True)
910 911 continue
911 912 self.ui.status(_("applying %s\n") % patchname)
912 913 pf = os.path.join(patchdir, patchname)
913 914
914 915 try:
915 916 ph = patchheader(self.join(patchname), self.plainmode)
916 917 except IOError:
917 918 self.ui.warn(_("unable to read %s\n") % patchname)
918 919 err = 1
919 920 break
920 921
921 922 message = ph.message
922 923 if not message:
923 924 # The commit message should not be translated
924 925 message = "imported patch %s\n" % patchname
925 926 else:
926 927 if list:
927 928 # The commit message should not be translated
928 929 message.append("\nimported patch %s" % patchname)
929 930 message = '\n'.join(message)
930 931
931 932 if ph.haspatch:
932 933 if tobackup:
933 934 touched = patchmod.changedfiles(self.ui, repo, pf)
934 935 touched = set(touched) & tobackup
935 936 if touched and keepchanges:
936 937 raise AbortNoCleanup(
937 938 _("conflicting local changes found"),
938 939 hint=_("did you forget to qrefresh?"))
939 940 self.backup(repo, touched, copy=True)
940 941 tobackup = tobackup - touched
941 942 (patcherr, files, fuzz) = self.patch(repo, pf)
942 943 if all_files is not None:
943 944 all_files.update(files)
944 945 patcherr = not patcherr
945 946 else:
946 947 self.ui.warn(_("patch %s is empty\n") % patchname)
947 948 patcherr, files, fuzz = 0, [], 0
948 949
949 950 if merge and files:
950 951 # Mark as removed/merged and update dirstate parent info
951 952 removed = []
952 953 merged = []
953 954 for f in files:
954 955 if os.path.lexists(repo.wjoin(f)):
955 956 merged.append(f)
956 957 else:
957 958 removed.append(f)
958 959 with repo.dirstate.parentchange():
959 960 for f in removed:
960 961 repo.dirstate.remove(f)
961 962 for f in merged:
962 963 repo.dirstate.merge(f)
963 964 p1, p2 = repo.dirstate.parents()
964 965 repo.setparents(p1, merge)
965 966
966 967 if all_files and '.hgsubstate' in all_files:
967 968 wctx = repo[None]
968 969 pctx = repo['.']
969 970 overwrite = False
970 971 mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
971 972 overwrite)
972 973 files += mergedsubstate.keys()
973 974
974 975 match = scmutil.matchfiles(repo, files or [])
975 976 oldtip = repo['tip']
976 977 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
977 978 force=True)
978 979 if repo['tip'] == oldtip:
979 980 raise error.Abort(_("qpush exactly duplicates child changeset"))
980 981 if n is None:
981 982 raise error.Abort(_("repository commit failed"))
982 983
983 984 if update_status:
984 985 self.applied.append(statusentry(n, patchname))
985 986
986 987 if patcherr:
987 988 self.ui.warn(_("patch failed, rejects left in working "
988 989 "directory\n"))
989 990 err = 2
990 991 break
991 992
992 993 if fuzz and strict:
993 994 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
994 995 err = 3
995 996 break
996 997 return (err, n)
997 998
998 999 def _cleanup(self, patches, numrevs, keep=False):
999 1000 if not keep:
1000 1001 r = self.qrepo()
1001 1002 if r:
1002 1003 r[None].forget(patches)
1003 1004 for p in patches:
1004 1005 try:
1005 1006 os.unlink(self.join(p))
1006 1007 except OSError as inst:
1007 1008 if inst.errno != errno.ENOENT:
1008 1009 raise
1009 1010
1010 1011 qfinished = []
1011 1012 if numrevs:
1012 1013 qfinished = self.applied[:numrevs]
1013 1014 del self.applied[:numrevs]
1014 1015 self.applieddirty = True
1015 1016
1016 1017 unknown = []
1017 1018
1018 1019 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
1019 1020 reverse=True):
1020 1021 if i is not None:
1021 1022 del self.fullseries[i]
1022 1023 else:
1023 1024 unknown.append(p)
1024 1025
1025 1026 if unknown:
1026 1027 if numrevs:
1027 1028 rev = dict((entry.name, entry.node) for entry in qfinished)
1028 1029 for p in unknown:
1029 1030 msg = _('revision %s refers to unknown patches: %s\n')
1030 1031 self.ui.warn(msg % (short(rev[p]), p))
1031 1032 else:
1032 1033 msg = _('unknown patches: %s\n')
1033 1034 raise error.Abort(''.join(msg % p for p in unknown))
1034 1035
1035 1036 self.parseseries()
1036 1037 self.seriesdirty = True
1037 1038 return [entry.node for entry in qfinished]
1038 1039
1039 1040 def _revpatches(self, repo, revs):
1040 1041 firstrev = repo[self.applied[0].node].rev()
1041 1042 patches = []
1042 1043 for i, rev in enumerate(revs):
1043 1044
1044 1045 if rev < firstrev:
1045 1046 raise error.Abort(_('revision %d is not managed') % rev)
1046 1047
1047 1048 ctx = repo[rev]
1048 1049 base = self.applied[i].node
1049 1050 if ctx.node() != base:
1050 1051 msg = _('cannot delete revision %d above applied patches')
1051 1052 raise error.Abort(msg % rev)
1052 1053
1053 1054 patch = self.applied[i].name
1054 1055 for fmt in ('[mq]: %s', 'imported patch %s'):
1055 1056 if ctx.description() == fmt % patch:
1056 1057 msg = _('patch %s finalized without changeset message\n')
1057 1058 repo.ui.status(msg % patch)
1058 1059 break
1059 1060
1060 1061 patches.append(patch)
1061 1062 return patches
1062 1063
1063 1064 def finish(self, repo, revs):
1064 1065 # Manually trigger phase computation to ensure phasedefaults is
1065 1066 # executed before we remove the patches.
1066 1067 repo._phasecache
1067 1068 patches = self._revpatches(repo, sorted(revs))
1068 1069 qfinished = self._cleanup(patches, len(patches))
1069 1070 if qfinished and repo.ui.configbool('mq', 'secret'):
1070 1071 # only use this logic when the secret option is added
1071 1072 oldqbase = repo[qfinished[0]]
1072 1073 tphase = phases.newcommitphase(repo.ui)
1073 1074 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
1074 1075 with repo.transaction('qfinish') as tr:
1075 1076 phases.advanceboundary(repo, tr, tphase, qfinished)
1076 1077
1077 1078 def delete(self, repo, patches, opts):
1078 1079 if not patches and not opts.get('rev'):
1079 1080 raise error.Abort(_('qdelete requires at least one revision or '
1080 1081 'patch name'))
1081 1082
1082 1083 realpatches = []
1083 1084 for patch in patches:
1084 1085 patch = self.lookup(patch, strict=True)
1085 1086 info = self.isapplied(patch)
1086 1087 if info:
1087 1088 raise error.Abort(_("cannot delete applied patch %s") % patch)
1088 1089 if patch not in self.series:
1089 1090 raise error.Abort(_("patch %s not in series file") % patch)
1090 1091 if patch not in realpatches:
1091 1092 realpatches.append(patch)
1092 1093
1093 1094 numrevs = 0
1094 1095 if opts.get('rev'):
1095 1096 if not self.applied:
1096 1097 raise error.Abort(_('no patches applied'))
1097 1098 revs = scmutil.revrange(repo, opts.get('rev'))
1098 1099 revs.sort()
1099 1100 revpatches = self._revpatches(repo, revs)
1100 1101 realpatches += revpatches
1101 1102 numrevs = len(revpatches)
1102 1103
1103 1104 self._cleanup(realpatches, numrevs, opts.get('keep'))
1104 1105
1105 1106 def checktoppatch(self, repo):
1106 1107 '''check that working directory is at qtip'''
1107 1108 if self.applied:
1108 1109 top = self.applied[-1].node
1109 1110 patch = self.applied[-1].name
1110 1111 if repo.dirstate.p1() != top:
1111 1112 raise error.Abort(_("working directory revision is not qtip"))
1112 1113 return top, patch
1113 1114 return None, None
1114 1115
1115 1116 def putsubstate2changes(self, substatestate, changes):
1116 1117 for files in changes[:3]:
1117 1118 if '.hgsubstate' in files:
1118 1119                 return # already listed
1119 1120         # not yet listed
1120 1121 if substatestate in 'a?':
1121 1122 changes[1].append('.hgsubstate')
1122 1123 elif substatestate in 'r':
1123 1124 changes[2].append('.hgsubstate')
1124 1125 else: # modified
1125 1126 changes[0].append('.hgsubstate')
1126 1127
1127 1128 def checklocalchanges(self, repo, force=False, refresh=True):
1128 1129 excsuffix = ''
1129 1130 if refresh:
1130 1131 excsuffix = ', qrefresh first'
1131 1132 # plain versions for i18n tool to detect them
1132 1133 _("local changes found, qrefresh first")
1133 1134 _("local changed subrepos found, qrefresh first")
1134 1135 return checklocalchanges(repo, force, excsuffix)
1135 1136
1136 1137 _reserved = ('series', 'status', 'guards', '.', '..')
1137 1138 def checkreservedname(self, name):
1138 1139 if name in self._reserved:
1139 1140 raise error.Abort(_('"%s" cannot be used as the name of a patch')
1140 1141 % name)
1141 1142 if name != name.strip():
1142 1143 # whitespace is stripped by parseseries()
1143 1144 raise error.Abort(_('patch name cannot begin or end with '
1144 1145 'whitespace'))
1145 1146 for prefix in ('.hg', '.mq'):
1146 1147 if name.startswith(prefix):
1147 1148 raise error.Abort(_('patch name cannot begin with "%s"')
1148 1149 % prefix)
1149 1150 for c in ('#', ':', '\r', '\n'):
1150 1151 if c in name:
1151 1152 raise error.Abort(_('%r cannot be used in the name of a patch')
1152 1153 % c)
1153 1154
1154 1155 def checkpatchname(self, name, force=False):
1155 1156 self.checkreservedname(name)
1156 1157 if not force and os.path.exists(self.join(name)):
1157 1158 if os.path.isdir(self.join(name)):
1158 1159 raise error.Abort(_('"%s" already exists as a directory')
1159 1160 % name)
1160 1161 else:
1161 1162 raise error.Abort(_('patch "%s" already exists') % name)
1162 1163
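    # Illustrative (hypothetical) example: makepatchname('Fix: foo/bar bug!',
    # 'patch-1') yields 'fix_foo_bar_bug'; the fallback name is used when the
    # sanitized title is empty or reserved, and '__<n>' suffixes are appended
    # until the result is unique in the series.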
1163 1164 def makepatchname(self, title, fallbackname):
1164 1165 """Return a suitable filename for title, adding a suffix to make
1165 1166 it unique in the existing list"""
1166 1167         namebase = re.sub(r'[\s\W_]+', '_', title.lower()).strip('_')
1167 1168 namebase = namebase[:75] # avoid too long name (issue5117)
1168 1169 if namebase:
1169 1170 try:
1170 1171 self.checkreservedname(namebase)
1171 1172 except error.Abort:
1172 1173 namebase = fallbackname
1173 1174 else:
1174 1175 namebase = fallbackname
1175 1176 name = namebase
1176 1177 i = 0
1177 1178 while True:
1178 1179 if name not in self.fullseries:
1179 1180 try:
1180 1181 self.checkpatchname(name)
1181 1182 break
1182 1183 except error.Abort:
1183 1184 pass
1184 1185 i += 1
1185 1186 name = '%s__%s' % (namebase, i)
1186 1187 return name
1187 1188
1188 1189 def checkkeepchanges(self, keepchanges, force):
1189 1190 if force and keepchanges:
1190 1191 raise error.Abort(_('cannot use both --force and --keep-changes'))
1191 1192
1192 1193 def new(self, repo, patchfn, *pats, **opts):
1193 1194 """options:
1194 1195 msg: a string or a no-argument function returning a string
1195 1196 """
1196 1197 msg = opts.get('msg')
1197 1198 edit = opts.get('edit')
1198 1199 editform = opts.get('editform', 'mq.qnew')
1199 1200 user = opts.get('user')
1200 1201 date = opts.get('date')
1201 1202 if date:
1202 1203 date = util.parsedate(date)
1203 1204 diffopts = self.diffopts({'git': opts.get('git')}, plain=True)
1204 1205 if opts.get('checkname', True):
1205 1206 self.checkpatchname(patchfn)
1206 1207 inclsubs = checksubstate(repo)
1207 1208 if inclsubs:
1208 1209 substatestate = repo.dirstate['.hgsubstate']
1209 1210 if opts.get('include') or opts.get('exclude') or pats:
1210 1211 # detect missing files in pats
1211 1212 def badfn(f, msg):
1212 1213 if f != '.hgsubstate': # .hgsubstate is auto-created
1213 1214 raise error.Abort('%s: %s' % (f, msg))
1214 1215 match = scmutil.match(repo[None], pats, opts, badfn=badfn)
1215 1216 changes = repo.status(match=match)
1216 1217 else:
1217 1218 changes = self.checklocalchanges(repo, force=True)
1218 1219 commitfiles = list(inclsubs)
1219 1220 for files in changes[:3]:
1220 1221 commitfiles.extend(files)
1221 1222 match = scmutil.matchfiles(repo, commitfiles)
1222 1223 if len(repo[None].parents()) > 1:
1223 1224 raise error.Abort(_('cannot manage merge changesets'))
1224 1225 self.checktoppatch(repo)
1225 1226 insert = self.fullseriesend()
1226 1227 with repo.wlock():
1227 1228 try:
1228 1229 # if patch file write fails, abort early
1229 1230 p = self.opener(patchfn, "w")
1230 1231 except IOError as e:
1231 1232 raise error.Abort(_('cannot write patch "%s": %s')
1232 1233 % (patchfn, encoding.strtolocal(e.strerror)))
1233 1234 try:
1234 1235 defaultmsg = "[mq]: %s" % patchfn
1235 1236 editor = cmdutil.getcommiteditor(editform=editform)
1236 1237 if edit:
1237 1238 def finishdesc(desc):
1238 1239 if desc.rstrip():
1239 1240 return desc
1240 1241 else:
1241 1242 return defaultmsg
1242 1243 # i18n: this message is shown in editor with "HG: " prefix
1243 1244 extramsg = _('Leave message empty to use default message.')
1244 1245 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1245 1246 extramsg=extramsg,
1246 1247 editform=editform)
1247 1248 commitmsg = msg
1248 1249 else:
1249 1250 commitmsg = msg or defaultmsg
1250 1251
1251 1252 n = newcommit(repo, None, commitmsg, user, date, match=match,
1252 1253 force=True, editor=editor)
1253 1254 if n is None:
1254 1255 raise error.Abort(_("repo commit failed"))
1255 1256 try:
1256 1257 self.fullseries[insert:insert] = [patchfn]
1257 1258 self.applied.append(statusentry(n, patchfn))
1258 1259 self.parseseries()
1259 1260 self.seriesdirty = True
1260 1261 self.applieddirty = True
1261 1262 nctx = repo[n]
1262 1263 ph = patchheader(self.join(patchfn), self.plainmode)
1263 1264 if user:
1264 1265 ph.setuser(user)
1265 1266 if date:
1266 1267 ph.setdate('%s %s' % date)
1267 1268 ph.setparent(hex(nctx.p1().node()))
1268 1269 msg = nctx.description().strip()
1269 1270 if msg == defaultmsg.strip():
1270 1271 msg = ''
1271 1272 ph.setmessage(msg)
1272 1273 p.write(str(ph))
1273 1274 if commitfiles:
1274 1275 parent = self.qparents(repo, n)
1275 1276 if inclsubs:
1276 1277 self.putsubstate2changes(substatestate, changes)
1277 1278 chunks = patchmod.diff(repo, node1=parent, node2=n,
1278 1279 changes=changes, opts=diffopts)
1279 1280 for chunk in chunks:
1280 1281 p.write(chunk)
1281 1282 p.close()
1282 1283 r = self.qrepo()
1283 1284 if r:
1284 1285 r[None].add([patchfn])
1285 1286 except: # re-raises
1286 1287 repo.rollback()
1287 1288 raise
1288 1289 except Exception:
1289 1290 patchpath = self.join(patchfn)
1290 1291 try:
1291 1292 os.unlink(patchpath)
1292 1293 except OSError:
1293 1294 self.ui.warn(_('error unlinking %s\n') % patchpath)
1294 1295 raise
1295 1296 self.removeundo(repo)
1296 1297
1297 1298 def isapplied(self, patch):
1298 1299 """returns (index, rev, patch)"""
1299 1300 for i, a in enumerate(self.applied):
1300 1301 if a.name == patch:
1301 1302 return (i, a.node, a.name)
1302 1303 return None
1303 1304
1304 1305 # if the exact patch name does not exist, we try a few
1305 1306 # variations. If strict is passed, we try only #1
1306 1307 #
1307 1308 # 1) a number (as string) to indicate an offset in the series file
1308 1309     # 2) a unique substring of a patch name
1309 1310 # 3) patchname[-+]num to indicate an offset in the series file
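    # For example, with a (hypothetical) series ['a.patch', 'b.patch',
    # 'c.patch']: lookup('1') -> 'b.patch', lookup('b') -> 'b.patch', and
    # lookup('c.patch-1') -> 'b.patch'.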
1310 1311 def lookup(self, patch, strict=False):
1311 1312 def partialname(s):
1312 1313 if s in self.series:
1313 1314 return s
1314 1315 matches = [x for x in self.series if s in x]
1315 1316 if len(matches) > 1:
1316 1317 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1317 1318 for m in matches:
1318 1319 self.ui.warn(' %s\n' % m)
1319 1320 return None
1320 1321 if matches:
1321 1322 return matches[0]
1322 1323 if self.series and self.applied:
1323 1324 if s == 'qtip':
1324 1325 return self.series[self.seriesend(True) - 1]
1325 1326 if s == 'qbase':
1326 1327 return self.series[0]
1327 1328 return None
1328 1329
1329 1330 if patch in self.series:
1330 1331 return patch
1331 1332
1332 1333 if not os.path.isfile(self.join(patch)):
1333 1334 try:
1334 1335 sno = int(patch)
1335 1336 except (ValueError, OverflowError):
1336 1337 pass
1337 1338 else:
1338 1339 if -len(self.series) <= sno < len(self.series):
1339 1340 return self.series[sno]
1340 1341
1341 1342 if not strict:
1342 1343 res = partialname(patch)
1343 1344 if res:
1344 1345 return res
1345 1346 minus = patch.rfind('-')
1346 1347 if minus >= 0:
1347 1348 res = partialname(patch[:minus])
1348 1349 if res:
1349 1350 i = self.series.index(res)
1350 1351 try:
1351 1352 off = int(patch[minus + 1:] or 1)
1352 1353 except (ValueError, OverflowError):
1353 1354 pass
1354 1355 else:
1355 1356 if i - off >= 0:
1356 1357 return self.series[i - off]
1357 1358 plus = patch.rfind('+')
1358 1359 if plus >= 0:
1359 1360 res = partialname(patch[:plus])
1360 1361 if res:
1361 1362 i = self.series.index(res)
1362 1363 try:
1363 1364 off = int(patch[plus + 1:] or 1)
1364 1365 except (ValueError, OverflowError):
1365 1366 pass
1366 1367 else:
1367 1368 if i + off < len(self.series):
1368 1369 return self.series[i + off]
1369 1370 raise error.Abort(_("patch %s not in series") % patch)
1370 1371
1371 1372 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1372 1373 all=False, move=False, exact=False, nobackup=False,
1373 1374 keepchanges=False):
1374 1375 self.checkkeepchanges(keepchanges, force)
1375 1376 diffopts = self.diffopts()
1376 1377 with repo.wlock():
1377 1378 heads = []
1378 1379 for hs in repo.branchmap().itervalues():
1379 1380 heads.extend(hs)
1380 1381 if not heads:
1381 1382 heads = [nullid]
1382 1383 if repo.dirstate.p1() not in heads and not exact:
1383 1384 self.ui.status(_("(working directory not at a head)\n"))
1384 1385
1385 1386 if not self.series:
1386 1387 self.ui.warn(_('no patches in series\n'))
1387 1388 return 0
1388 1389
1389 1390 # Suppose our series file is: A B C and the current 'top'
1390 1391             # patch is B. qpush C should be performed (moving forward);
1391 1392             # qpush B is a no-op (no change); qpush A is an error (can't
1392 1393             # go backwards with qpush).
1393 1394 if patch:
1394 1395 patch = self.lookup(patch)
1395 1396 info = self.isapplied(patch)
1396 1397 if info and info[0] >= len(self.applied) - 1:
1397 1398 self.ui.warn(
1398 1399 _('qpush: %s is already at the top\n') % patch)
1399 1400 return 0
1400 1401
1401 1402 pushable, reason = self.pushable(patch)
1402 1403 if pushable:
1403 1404 if self.series.index(patch) < self.seriesend():
1404 1405 raise error.Abort(
1405 1406 _("cannot push to a previous patch: %s") % patch)
1406 1407 else:
1407 1408 if reason:
1408 1409 reason = _('guarded by %s') % reason
1409 1410 else:
1410 1411 reason = _('no matching guards')
1411 1412 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1412 1413 return 1
1413 1414 elif all:
1414 1415 patch = self.series[-1]
1415 1416 if self.isapplied(patch):
1416 1417 self.ui.warn(_('all patches are currently applied\n'))
1417 1418 return 0
1418 1419
1419 1420 # Following the above example, starting at 'top' of B:
1420 1421 # qpush should be performed (pushes C), but a subsequent
1421 1422 # qpush without an argument is an error (nothing to
1422 1423 # apply). This allows a loop of "...while hg qpush..." to
1423 1424 # work as it detects an error when done
1424 1425 start = self.seriesend()
1425 1426 if start == len(self.series):
1426 1427 self.ui.warn(_('patch series already fully applied\n'))
1427 1428 return 1
1428 1429 if not force and not keepchanges:
1429 1430 self.checklocalchanges(repo, refresh=self.applied)
1430 1431
1431 1432 if exact:
1432 1433 if keepchanges:
1433 1434 raise error.Abort(
1434 1435 _("cannot use --exact and --keep-changes together"))
1435 1436 if move:
1436 1437 raise error.Abort(_('cannot use --exact and --move '
1437 1438 'together'))
1438 1439 if self.applied:
1439 1440 raise error.Abort(_('cannot push --exact with applied '
1440 1441 'patches'))
1441 1442 root = self.series[start]
1442 1443 target = patchheader(self.join(root), self.plainmode).parent
1443 1444 if not target:
1444 1445 raise error.Abort(
1445 1446 _("%s does not have a parent recorded") % root)
1446 1447 if not repo[target] == repo['.']:
1447 1448 hg.update(repo, target)
1448 1449
1449 1450 if move:
1450 1451 if not patch:
1451 1452 raise error.Abort(_("please specify the patch to move"))
1452 1453 for fullstart, rpn in enumerate(self.fullseries):
1453 1454 # strip markers for patch guards
1454 1455 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1455 1456 break
1456 1457 for i, rpn in enumerate(self.fullseries[fullstart:]):
1457 1458 # strip markers for patch guards
1458 1459 if self.guard_re.split(rpn, 1)[0] == patch:
1459 1460 break
1460 1461 index = fullstart + i
1461 1462 assert index < len(self.fullseries)
1462 1463 fullpatch = self.fullseries[index]
1463 1464 del self.fullseries[index]
1464 1465 self.fullseries.insert(fullstart, fullpatch)
1465 1466 self.parseseries()
1466 1467 self.seriesdirty = True
1467 1468
1468 1469 self.applieddirty = True
1469 1470 if start > 0:
1470 1471 self.checktoppatch(repo)
1471 1472 if not patch:
1472 1473 patch = self.series[start]
1473 1474 end = start + 1
1474 1475 else:
1475 1476 end = self.series.index(patch, start) + 1
1476 1477
1477 1478 tobackup = set()
1478 1479 if (not nobackup and force) or keepchanges:
1479 1480 status = self.checklocalchanges(repo, force=True)
1480 1481 if keepchanges:
1481 1482 tobackup.update(status.modified + status.added +
1482 1483 status.removed + status.deleted)
1483 1484 else:
1484 1485 tobackup.update(status.modified + status.added)
1485 1486
1486 1487 s = self.series[start:end]
1487 1488 all_files = set()
1488 1489 try:
1489 1490 if mergeq:
1490 1491 ret = self.mergepatch(repo, mergeq, s, diffopts)
1491 1492 else:
1492 1493 ret = self.apply(repo, s, list, all_files=all_files,
1493 1494 tobackup=tobackup, keepchanges=keepchanges)
1494 1495 except AbortNoCleanup:
1495 1496 raise
1496 1497 except: # re-raises
1497 1498 self.ui.warn(_('cleaning up working directory...\n'))
1498 1499 cmdutil.revert(self.ui, repo, repo['.'],
1499 1500 repo.dirstate.parents(), no_backup=True)
1500 1501 # only remove unknown files that we know we touched or
1501 1502 # created while patching
1502 1503 for f in all_files:
1503 1504 if f not in repo.dirstate:
1504 1505 repo.wvfs.unlinkpath(f, ignoremissing=True)
1505 1506 self.ui.warn(_('done\n'))
1506 1507 raise
1507 1508
1508 1509 if not self.applied:
1509 1510 return ret[0]
1510 1511 top = self.applied[-1].name
1511 1512 if ret[0] and ret[0] > 1:
1512 1513 msg = _("errors during apply, please fix and qrefresh %s\n")
1513 1514 self.ui.write(msg % top)
1514 1515 else:
1515 1516 self.ui.write(_("now at: %s\n") % top)
1516 1517 return ret[0]
1517 1518
1518 1519 def pop(self, repo, patch=None, force=False, update=True, all=False,
1519 1520 nobackup=False, keepchanges=False):
1520 1521 self.checkkeepchanges(keepchanges, force)
1521 1522 with repo.wlock():
1522 1523 if patch:
1523 1524 # index, rev, patch
1524 1525 info = self.isapplied(patch)
1525 1526 if not info:
1526 1527 patch = self.lookup(patch)
1527 1528 info = self.isapplied(patch)
1528 1529 if not info:
1529 1530 raise error.Abort(_("patch %s is not applied") % patch)
1530 1531
1531 1532 if not self.applied:
1532 1533 # Allow qpop -a to work repeatedly,
1533 1534 # but not qpop without an argument
1534 1535 self.ui.warn(_("no patches applied\n"))
1535 1536 return not all
1536 1537
1537 1538 if all:
1538 1539 start = 0
1539 1540 elif patch:
1540 1541 start = info[0] + 1
1541 1542 else:
1542 1543 start = len(self.applied) - 1
1543 1544
1544 1545 if start >= len(self.applied):
1545 1546 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1546 1547 return
1547 1548
1548 1549 if not update:
1549 1550 parents = repo.dirstate.parents()
1550 1551 rr = [x.node for x in self.applied]
1551 1552 for p in parents:
1552 1553 if p in rr:
1553 1554 self.ui.warn(_("qpop: forcing dirstate update\n"))
1554 1555 update = True
1555 1556 else:
1556 1557 parents = [p.node() for p in repo[None].parents()]
1557 1558 needupdate = False
1558 1559 for entry in self.applied[start:]:
1559 1560 if entry.node in parents:
1560 1561 needupdate = True
1561 1562 break
1562 1563 update = needupdate
1563 1564
1564 1565 tobackup = set()
1565 1566 if update:
1566 1567 s = self.checklocalchanges(repo, force=force or keepchanges)
1567 1568 if force:
1568 1569 if not nobackup:
1569 1570 tobackup.update(s.modified + s.added)
1570 1571 elif keepchanges:
1571 1572 tobackup.update(s.modified + s.added +
1572 1573 s.removed + s.deleted)
1573 1574
1574 1575 self.applieddirty = True
1575 1576 end = len(self.applied)
1576 1577 rev = self.applied[start].node
1577 1578
1578 1579 try:
1579 1580 heads = repo.changelog.heads(rev)
1580 1581 except error.LookupError:
1581 1582 node = short(rev)
1582 1583 raise error.Abort(_('trying to pop unknown node %s') % node)
1583 1584
1584 1585 if heads != [self.applied[-1].node]:
1585 1586 raise error.Abort(_("popping would remove a revision not "
1586 1587 "managed by this patch queue"))
1587 1588 if not repo[self.applied[-1].node].mutable():
1588 1589 raise error.Abort(
1589 1590 _("popping would remove a public revision"),
1590 1591 hint=_("see 'hg help phases' for details"))
1591 1592
1592 1593 # we know there are no local changes, so we can make a simplified
1593 1594 # form of hg.update.
1594 1595 if update:
1595 1596 qp = self.qparents(repo, rev)
1596 1597 ctx = repo[qp]
1597 1598 m, a, r, d = repo.status(qp, '.')[:4]
1598 1599 if d:
1599 1600 raise error.Abort(_("deletions found between repo revs"))
1600 1601
1601 1602 tobackup = set(a + m + r) & tobackup
1602 1603 if keepchanges and tobackup:
1603 1604 raise error.Abort(_("local changes found, qrefresh first"))
1604 1605 self.backup(repo, tobackup)
1605 1606 with repo.dirstate.parentchange():
1606 1607 for f in a:
1607 1608 repo.wvfs.unlinkpath(f, ignoremissing=True)
1608 1609 repo.dirstate.drop(f)
1609 1610 for f in m + r:
1610 1611 fctx = ctx[f]
1611 1612 repo.wwrite(f, fctx.data(), fctx.flags())
1612 1613 repo.dirstate.normal(f)
1613 1614 repo.setparents(qp, nullid)
1614 1615 for patch in reversed(self.applied[start:end]):
1615 1616 self.ui.status(_("popping %s\n") % patch.name)
1616 1617 del self.applied[start:end]
1617 1618 strip(self.ui, repo, [rev], update=False, backup=False)
1618 1619 for s, state in repo['.'].substate.items():
1619 1620 repo['.'].sub(s).get(state)
1620 1621 if self.applied:
1621 1622 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1622 1623 else:
1623 1624 self.ui.write(_("patch queue now empty\n"))
1624 1625
1625 1626 def diff(self, repo, pats, opts):
1626 1627 top, patch = self.checktoppatch(repo)
1627 1628 if not top:
1628 1629 self.ui.write(_("no patches applied\n"))
1629 1630 return
1630 1631 qp = self.qparents(repo, top)
1631 1632 if opts.get('reverse'):
1632 1633 node1, node2 = None, qp
1633 1634 else:
1634 1635 node1, node2 = qp, None
1635 1636 diffopts = self.diffopts(opts, patch)
1636 1637 self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)
1637 1638
1638 1639 def refresh(self, repo, pats=None, **opts):
1639 1640 if not self.applied:
1640 1641 self.ui.write(_("no patches applied\n"))
1641 1642 return 1
1642 1643 msg = opts.get('msg', '').rstrip()
1643 1644 edit = opts.get('edit')
1644 1645 editform = opts.get('editform', 'mq.qrefresh')
1645 1646 newuser = opts.get('user')
1646 1647 newdate = opts.get('date')
1647 1648 if newdate:
1648 1649 newdate = '%d %d' % util.parsedate(newdate)
1649 1650 wlock = repo.wlock()
1650 1651
1651 1652 try:
1652 1653 self.checktoppatch(repo)
1653 1654 (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
1654 1655 if repo.changelog.heads(top) != [top]:
1655 1656 raise error.Abort(_("cannot qrefresh a revision with children"))
1656 1657 if not repo[top].mutable():
1657 1658 raise error.Abort(_("cannot qrefresh public revision"),
1658 1659 hint=_("see 'hg help phases' for details"))
1659 1660
1660 1661 cparents = repo.changelog.parents(top)
1661 1662 patchparent = self.qparents(repo, top)
1662 1663
1663 1664 inclsubs = checksubstate(repo, hex(patchparent))
1664 1665 if inclsubs:
1665 1666 substatestate = repo.dirstate['.hgsubstate']
1666 1667
1667 1668 ph = patchheader(self.join(patchfn), self.plainmode)
1668 1669 diffopts = self.diffopts({'git': opts.get('git')}, patchfn,
1669 1670 plain=True)
1670 1671 if newuser:
1671 1672 ph.setuser(newuser)
1672 1673 if newdate:
1673 1674 ph.setdate(newdate)
1674 1675 ph.setparent(hex(patchparent))
1675 1676
1676 1677 # only commit new patch when write is complete
1677 1678 patchf = self.opener(patchfn, 'w', atomictemp=True)
1678 1679
1679 1680 # update the dirstate in place, strip off the qtip commit
1680 1681 # and then commit.
1681 1682 #
1682 1683 # this should really read:
1683 1684 # mm, dd, aa = repo.status(top, patchparent)[:3]
1684 1685 # but we do it backwards to take advantage of manifest/changelog
1685 1686 # caching against the next repo.status call
1686 1687 mm, aa, dd = repo.status(patchparent, top)[:3]
1687 1688 changes = repo.changelog.read(top)
1688 1689 man = repo.manifestlog[changes[0]].read()
1689 1690 aaa = aa[:]
1690 1691 match1 = scmutil.match(repo[None], pats, opts)
1691 1692 # in short mode, we only diff the files included in the
1692 1693 # patch already plus specified files
1693 1694 if opts.get('short'):
1694 1695 # if amending a patch, we start with existing
1695 1696 # files plus specified files - unfiltered
1696 1697 match = scmutil.matchfiles(repo, mm + aa + dd + match1.files())
1697 1698 # filter with include/exclude options
1698 1699 match1 = scmutil.match(repo[None], opts=opts)
1699 1700 else:
1700 1701 match = scmutil.matchall(repo)
1701 1702 m, a, r, d = repo.status(match=match)[:4]
1702 1703 mm = set(mm)
1703 1704 aa = set(aa)
1704 1705 dd = set(dd)
1705 1706
1706 1707 # we might end up with files that were added between
1707 1708 # qtip and the dirstate parent, but then changed in the
1708 1709 # local dirstate. in this case, we want them to only
1709 1710 # show up in the added section
1710 1711 for x in m:
1711 1712 if x not in aa:
1712 1713 mm.add(x)
1713 1714 # we might end up with files added by the local dirstate that
1714 1715 # were deleted by the patch. In this case, they should only
1715 1716 # show up in the changed section.
1716 1717 for x in a:
1717 1718 if x in dd:
1718 1719 dd.remove(x)
1719 1720 mm.add(x)
1720 1721 else:
1721 1722 aa.add(x)
1722 1723 # make sure any files deleted in the local dirstate
1723 1724 # are not in the add or change column of the patch
1724 1725 forget = []
1725 1726 for x in d + r:
1726 1727 if x in aa:
1727 1728 aa.remove(x)
1728 1729 forget.append(x)
1729 1730 continue
1730 1731 else:
1731 1732 mm.discard(x)
1732 1733 dd.add(x)
1733 1734
1734 1735 m = list(mm)
1735 1736 r = list(dd)
1736 1737 a = list(aa)
1737 1738
1738 1739 # create 'match' that includes the files to be recommitted.
1739 1740 # apply match1 via repo.status to ensure correct case handling.
1740 1741 cm, ca, cr, cd = repo.status(patchparent, match=match1)[:4]
1741 1742 allmatches = set(cm + ca + cr + cd)
1742 1743 refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]
1743 1744
1744 1745 files = set(inclsubs)
1745 1746 for x in refreshchanges:
1746 1747 files.update(x)
1747 1748 match = scmutil.matchfiles(repo, files)
1748 1749
1749 1750 bmlist = repo[top].bookmarks()
1750 1751
1751 1752 dsguard = None
1752 1753 try:
1753 1754 dsguard = dirstateguard.dirstateguard(repo, 'mq.refresh')
1754 1755 if diffopts.git or diffopts.upgrade:
1755 1756 copies = {}
1756 1757 for dst in a:
1757 1758 src = repo.dirstate.copied(dst)
1758 1759 # during qfold, the source file for copies may
1759 1760 # be removed. Treat this as a simple add.
1760 1761 if src is not None and src in repo.dirstate:
1761 1762 copies.setdefault(src, []).append(dst)
1762 1763 repo.dirstate.add(dst)
1763 1764 # remember the copies between patchparent and qtip
1764 1765 for dst in aaa:
1765 1766 f = repo.file(dst)
1766 1767 src = f.renamed(man[dst])
1767 1768 if src:
1768 1769 copies.setdefault(src[0], []).extend(
1769 1770 copies.get(dst, []))
1770 1771 if dst in a:
1771 1772 copies[src[0]].append(dst)
1772 1773 # we can't copy a file created by the patch itself
1773 1774 if dst in copies:
1774 1775 del copies[dst]
1775 1776 for src, dsts in copies.iteritems():
1776 1777 for dst in dsts:
1777 1778 repo.dirstate.copy(src, dst)
1778 1779 else:
1779 1780 for dst in a:
1780 1781 repo.dirstate.add(dst)
1781 1782 # Drop useless copy information
1782 1783 for f in list(repo.dirstate.copies()):
1783 1784 repo.dirstate.copy(None, f)
1784 1785 for f in r:
1785 1786 repo.dirstate.remove(f)
1786 1787 # if the patch excludes a modified file, mark that
1787 1788 # file with mtime=0 so status can see it.
1788 1789 mm = []
1789 1790 for i in xrange(len(m) - 1, -1, -1):
1790 1791 if not match1(m[i]):
1791 1792 mm.append(m[i])
1792 1793 del m[i]
1793 1794 for f in m:
1794 1795 repo.dirstate.normal(f)
1795 1796 for f in mm:
1796 1797 repo.dirstate.normallookup(f)
1797 1798 for f in forget:
1798 1799 repo.dirstate.drop(f)
1799 1800
1800 1801 user = ph.user or changes[1]
1801 1802
1802 1803 oldphase = repo[top].phase()
1803 1804
1804 1805 # assumes strip can roll itself back if interrupted
1805 1806 repo.setparents(*cparents)
1806 1807 self.applied.pop()
1807 1808 self.applieddirty = True
1808 1809 strip(self.ui, repo, [top], update=False, backup=False)
1809 1810 dsguard.close()
1810 1811 finally:
1811 1812 release(dsguard)
1812 1813
1813 1814 try:
1814 1815 # might be nice to attempt to roll back strip after this
1815 1816
1816 1817 defaultmsg = "[mq]: %s" % patchfn
1817 1818 editor = cmdutil.getcommiteditor(editform=editform)
1818 1819 if edit:
1819 1820 def finishdesc(desc):
1820 1821 if desc.rstrip():
1821 1822 ph.setmessage(desc)
1822 1823 return desc
1823 1824 return defaultmsg
1824 1825 # i18n: this message is shown in editor with "HG: " prefix
1825 1826 extramsg = _('Leave message empty to use default message.')
1826 1827 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1827 1828 extramsg=extramsg,
1828 1829 editform=editform)
1829 1830 message = msg or "\n".join(ph.message)
1830 1831 elif not msg:
1831 1832 if not ph.message:
1832 1833 message = defaultmsg
1833 1834 else:
1834 1835 message = "\n".join(ph.message)
1835 1836 else:
1836 1837 message = msg
1837 1838 ph.setmessage(msg)
1838 1839
1839 1840 # Ensure we create a new changeset in the same phase than
1840 1841 # the old one.
1841 1842 lock = tr = None
1842 1843 try:
1843 1844 lock = repo.lock()
1844 1845 tr = repo.transaction('mq')
1845 1846 n = newcommit(repo, oldphase, message, user, ph.date,
1846 1847 match=match, force=True, editor=editor)
1847 1848 # only write patch after a successful commit
1848 1849 c = [list(x) for x in refreshchanges]
1849 1850 if inclsubs:
1850 1851 self.putsubstate2changes(substatestate, c)
1851 1852 chunks = patchmod.diff(repo, patchparent,
1852 1853 changes=c, opts=diffopts)
1853 1854 comments = str(ph)
1854 1855 if comments:
1855 1856 patchf.write(comments)
1856 1857 for chunk in chunks:
1857 1858 patchf.write(chunk)
1858 1859 patchf.close()
1859 1860
1860 1861 marks = repo._bookmarks
1861 1862 marks.applychanges(repo, tr, [(bm, n) for bm in bmlist])
1862 1863 tr.close()
1863 1864
1864 1865 self.applied.append(statusentry(n, patchfn))
1865 1866 finally:
1866 1867 lockmod.release(tr, lock)
1867 1868 except: # re-raises
1868 1869 ctx = repo[cparents[0]]
1869 1870 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1870 1871 self.savedirty()
1871 1872 self.ui.warn(_('qrefresh interrupted while patch was popped! '
1872 1873 '(revert --all, qpush to recover)\n'))
1873 1874 raise
1874 1875 finally:
1875 1876 wlock.release()
1876 1877 self.removeundo(repo)
1877 1878
1878 1879 def init(self, repo, create=False):
1879 1880 if not create and os.path.isdir(self.path):
1880 1881 raise error.Abort(_("patch queue directory already exists"))
1881 1882 try:
1882 1883 os.mkdir(self.path)
1883 1884 except OSError as inst:
1884 1885 if inst.errno != errno.EEXIST or not create:
1885 1886 raise
1886 1887 if create:
1887 1888 return self.qrepo(create=True)
1888 1889
1889 1890 def unapplied(self, repo, patch=None):
1890 1891 if patch and patch not in self.series:
1891 1892 raise error.Abort(_("patch %s is not in series file") % patch)
1892 1893 if not patch:
1893 1894 start = self.seriesend()
1894 1895 else:
1895 1896 start = self.series.index(patch) + 1
1896 1897 unapplied = []
1897 1898 for i in xrange(start, len(self.series)):
1898 1899 pushable, reason = self.pushable(i)
1899 1900 if pushable:
1900 1901 unapplied.append((i, self.series[i]))
1901 1902 self.explainpushable(i)
1902 1903 return unapplied
1903 1904
1904 1905 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1905 1906 summary=False):
1906 1907 def displayname(pfx, patchname, state):
1907 1908 if pfx:
1908 1909 self.ui.write(pfx)
1909 1910 if summary:
1910 1911 ph = patchheader(self.join(patchname), self.plainmode)
1911 1912 if ph.message:
1912 1913 msg = ph.message[0]
1913 1914 else:
1914 1915 msg = ''
1915 1916
1916 1917 if self.ui.formatted():
1917 1918 width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
1918 1919 if width > 0:
1919 1920 msg = util.ellipsis(msg, width)
1920 1921 else:
1921 1922 msg = ''
1922 1923 self.ui.write(patchname, label='qseries.' + state)
1923 1924 self.ui.write(': ')
1924 1925 self.ui.write(msg, label='qseries.message.' + state)
1925 1926 else:
1926 1927 self.ui.write(patchname, label='qseries.' + state)
1927 1928 self.ui.write('\n')
1928 1929
1929 1930 applied = set([p.name for p in self.applied])
1930 1931 if length is None:
1931 1932 length = len(self.series) - start
1932 1933 if not missing:
1933 1934 if self.ui.verbose:
1934 1935 idxwidth = len(str(start + length - 1))
1935 1936 for i in xrange(start, start + length):
1936 1937 patch = self.series[i]
1937 1938 if patch in applied:
1938 1939 char, state = 'A', 'applied'
1939 1940 elif self.pushable(i)[0]:
1940 1941 char, state = 'U', 'unapplied'
1941 1942 else:
1942 1943 char, state = 'G', 'guarded'
1943 1944 pfx = ''
1944 1945 if self.ui.verbose:
1945 1946 pfx = '%*d %s ' % (idxwidth, i, char)
1946 1947 elif status and status != char:
1947 1948 continue
1948 1949 displayname(pfx, patch, state)
1949 1950 else:
1950 1951 msng_list = []
1951 1952 for root, dirs, files in os.walk(self.path):
1952 1953 d = root[len(self.path) + 1:]
1953 1954 for f in files:
1954 1955 fl = os.path.join(d, f)
1955 1956 if (fl not in self.series and
1956 1957 fl not in (self.statuspath, self.seriespath,
1957 1958 self.guardspath)
1958 1959 and not fl.startswith('.')):
1959 1960 msng_list.append(fl)
1960 1961 for x in sorted(msng_list):
1961 1962 pfx = self.ui.verbose and ('D ') or ''
1962 1963 displayname(pfx, x, 'missing')
1963 1964
1964 1965 def issaveline(self, l):
1965 1966 if l.name == '.hg.patches.save.line':
1966 1967 return True
1967 1968
1968 1969 def qrepo(self, create=False):
1969 1970 ui = self.baseui.copy()
1970 1971 # copy back attributes set by ui.pager()
1971 1972 if self.ui.pageractive and not ui.pageractive:
1972 1973 ui.pageractive = self.ui.pageractive
1973 1974 # internal config: ui.formatted
1974 1975 ui.setconfig('ui', 'formatted',
1975 1976 self.ui.config('ui', 'formatted'), 'mqpager')
1976 1977 ui.setconfig('ui', 'interactive',
1977 1978 self.ui.config('ui', 'interactive'), 'mqpager')
1978 1979 if create or os.path.isdir(self.join(".hg")):
1979 1980 return hg.repository(ui, path=self.path, create=create)
1980 1981
1981 1982 def restore(self, repo, rev, delete=None, qupdate=None):
1982 1983 desc = repo[rev].description().strip()
1983 1984 lines = desc.splitlines()
1984 1985 i = 0
1985 1986 datastart = None
1986 1987 series = []
1987 1988 applied = []
1988 1989 qpp = None
1989 1990 for i, line in enumerate(lines):
1990 1991 if line == 'Patch Data:':
1991 1992 datastart = i + 1
1992 1993 elif line.startswith('Dirstate:'):
1993 1994 l = line.rstrip()
1994 1995 l = l[10:].split(' ')
1995 1996 qpp = [bin(x) for x in l]
1996 1997 elif datastart is not None:
1997 1998 l = line.rstrip()
1998 1999 n, name = l.split(':', 1)
1999 2000 if n:
2000 2001 applied.append(statusentry(bin(n), name))
2001 2002 else:
2002 2003 series.append(l)
2003 2004 if datastart is None:
2004 2005 self.ui.warn(_("no saved patch data found\n"))
2005 2006 return 1
2006 2007 self.ui.warn(_("restoring status: %s\n") % lines[0])
2007 2008 self.fullseries = series
2008 2009 self.applied = applied
2009 2010 self.parseseries()
2010 2011 self.seriesdirty = True
2011 2012 self.applieddirty = True
2012 2013 heads = repo.changelog.heads()
2013 2014 if delete:
2014 2015 if rev not in heads:
2015 2016 self.ui.warn(_("save entry has children, leaving it alone\n"))
2016 2017 else:
2017 2018 self.ui.warn(_("removing save entry %s\n") % short(rev))
2018 2019 pp = repo.dirstate.parents()
2019 2020 if rev in pp:
2020 2021 update = True
2021 2022 else:
2022 2023 update = False
2023 2024 strip(self.ui, repo, [rev], update=update, backup=False)
2024 2025 if qpp:
2025 2026 self.ui.warn(_("saved queue repository parents: %s %s\n") %
2026 2027 (short(qpp[0]), short(qpp[1])))
2027 2028 if qupdate:
2028 2029 self.ui.status(_("updating queue directory\n"))
2029 2030 r = self.qrepo()
2030 2031 if not r:
2031 2032 self.ui.warn(_("unable to load queue repository\n"))
2032 2033 return 1
2033 2034 hg.clean(r, qpp[0])
2034 2035
2035 2036 def save(self, repo, msg=None):
2036 2037 if not self.applied:
2037 2038 self.ui.warn(_("save: no patches applied, exiting\n"))
2038 2039 return 1
2039 2040 if self.issaveline(self.applied[-1]):
2040 2041 self.ui.warn(_("status is already saved\n"))
2041 2042 return 1
2042 2043
2043 2044 if not msg:
2044 2045 msg = _("hg patches saved state")
2045 2046 else:
2046 2047 msg = "hg patches: " + msg.rstrip('\r\n')
2047 2048 r = self.qrepo()
2048 2049 if r:
2049 2050 pp = r.dirstate.parents()
2050 2051 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
2051 2052 msg += "\n\nPatch Data:\n"
2052 2053 msg += ''.join('%s\n' % x for x in self.applied)
2053 2054 msg += ''.join(':%s\n' % x for x in self.fullseries)
2054 2055 n = repo.commit(msg, force=True)
2055 2056 if not n:
2056 2057 self.ui.warn(_("repo commit failed\n"))
2057 2058 return 1
2058 2059 self.applied.append(statusentry(n, '.hg.patches.save.line'))
2059 2060 self.applieddirty = True
2060 2061 self.removeundo(repo)
2061 2062
2062 2063 def fullseriesend(self):
2063 2064 if self.applied:
2064 2065 p = self.applied[-1].name
2065 2066 end = self.findseries(p)
2066 2067 if end is None:
2067 2068 return len(self.fullseries)
2068 2069 return end + 1
2069 2070 return 0
2070 2071
2071 2072 def seriesend(self, all_patches=False):
2072 2073 """If all_patches is False, return the index of the next pushable patch
2073 2074 in the series, or the series length. If all_patches is True, return the
2074 2075 index of the first patch past the last applied one.
2075 2076 """
2076 2077 end = 0
2077 2078 def nextpatch(start):
2078 2079 if all_patches or start >= len(self.series):
2079 2080 return start
2080 2081 for i in xrange(start, len(self.series)):
2081 2082 p, reason = self.pushable(i)
2082 2083 if p:
2083 2084 return i
2084 2085 self.explainpushable(i)
2085 2086 return len(self.series)
2086 2087 if self.applied:
2087 2088 p = self.applied[-1].name
2088 2089 try:
2089 2090 end = self.series.index(p)
2090 2091 except ValueError:
2091 2092 return 0
2092 2093 return nextpatch(end + 1)
2093 2094 return nextpatch(end)
2094 2095
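# Illustrative sketch (not in the original source) of what seriesend() returns,
# assuming a series of three patches "a", "b", "c" with only "a" applied:
#
#   q.seriesend()                  # index of the next pushable patch: 1 if "b" is
#                                  # unguarded, 2 if only "c" is, or 3 if none is
#   q.seriesend(all_patches=True)  # always 1: first index past the last applied patch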
2095 2096 def appliedname(self, index):
2096 2097 pname = self.applied[index].name
2097 2098 if not self.ui.verbose:
2098 2099 p = pname
2099 2100 else:
2100 2101 p = str(self.series.index(pname)) + " " + pname
2101 2102 return p
2102 2103
2103 2104 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
2104 2105 force=None, git=False):
2105 2106 def checkseries(patchname):
2106 2107 if patchname in self.series:
2107 2108 raise error.Abort(_('patch %s is already in the series file')
2108 2109 % patchname)
2109 2110
2110 2111 if rev:
2111 2112 if files:
2112 2113 raise error.Abort(_('option "-r" not valid when importing '
2113 2114 'files'))
2114 2115 rev = scmutil.revrange(repo, rev)
2115 2116 rev.sort(reverse=True)
2116 2117 elif not files:
2117 2118 raise error.Abort(_('no files or revisions specified'))
2118 2119 if (len(files) > 1 or len(rev) > 1) and patchname:
2119 2120 raise error.Abort(_('option "-n" not valid when importing multiple '
2120 2121 'patches'))
2121 2122 imported = []
2122 2123 if rev:
2123 2124 # If mq patches are applied, we can only import revisions
2124 2125 # that form a linear path to qbase.
2125 2126 # Otherwise, they should form a linear path to a head.
2126 2127 heads = repo.changelog.heads(repo.changelog.node(rev.first()))
2127 2128 if len(heads) > 1:
2128 2129 raise error.Abort(_('revision %d is the root of more than one '
2129 2130 'branch') % rev.last())
2130 2131 if self.applied:
2131 2132 base = repo.changelog.node(rev.first())
2132 2133 if base in [n.node for n in self.applied]:
2133 2134 raise error.Abort(_('revision %d is already managed')
2134 2135 % rev.first())
2135 2136 if heads != [self.applied[-1].node]:
2136 2137 raise error.Abort(_('revision %d is not the parent of '
2137 2138 'the queue') % rev.first())
2138 2139 base = repo.changelog.rev(self.applied[0].node)
2139 2140 lastparent = repo.changelog.parentrevs(base)[0]
2140 2141 else:
2141 2142 if heads != [repo.changelog.node(rev.first())]:
2142 2143 raise error.Abort(_('revision %d has unmanaged children')
2143 2144 % rev.first())
2144 2145 lastparent = None
2145 2146
2146 2147 diffopts = self.diffopts({'git': git})
2147 2148 with repo.transaction('qimport') as tr:
2148 2149 for r in rev:
2149 2150 if not repo[r].mutable():
2150 2151 raise error.Abort(_('revision %d is not mutable') % r,
2151 2152 hint=_("see 'hg help phases' "
2152 2153 'for details'))
2153 2154 p1, p2 = repo.changelog.parentrevs(r)
2154 2155 n = repo.changelog.node(r)
2155 2156 if p2 != nullrev:
2156 2157 raise error.Abort(_('cannot import merge revision %d')
2157 2158 % r)
2158 2159 if lastparent and lastparent != r:
2159 2160 raise error.Abort(_('revision %d is not the parent of '
2160 2161 '%d')
2161 2162 % (r, lastparent))
2162 2163 lastparent = p1
2163 2164
2164 2165 if not patchname:
2165 2166 patchname = self.makepatchname(
2166 2167 repo[r].description().split('\n', 1)[0],
2167 2168 '%d.diff' % r)
2168 2169 checkseries(patchname)
2169 2170 self.checkpatchname(patchname, force)
2170 2171 self.fullseries.insert(0, patchname)
2171 2172
2172 2173 patchf = self.opener(patchname, "w")
2173 2174 cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
2174 2175 patchf.close()
2175 2176
2176 2177 se = statusentry(n, patchname)
2177 2178 self.applied.insert(0, se)
2178 2179
2179 2180 self.added.append(patchname)
2180 2181 imported.append(patchname)
2181 2182 patchname = None
2182 2183 if rev and repo.ui.configbool('mq', 'secret'):
2183 2184 # if we added anything with --rev, move the secret root
2184 2185 phases.retractboundary(repo, tr, phases.secret, [n])
2185 2186 self.parseseries()
2186 2187 self.applieddirty = True
2187 2188 self.seriesdirty = True
2188 2189
2189 2190 for i, filename in enumerate(files):
2190 2191 if existing:
2191 2192 if filename == '-':
2192 2193 raise error.Abort(_('-e is incompatible with import from -')
2193 2194 )
2194 2195 filename = normname(filename)
2195 2196 self.checkreservedname(filename)
2196 2197 if util.url(filename).islocal():
2197 2198 originpath = self.join(filename)
2198 2199 if not os.path.isfile(originpath):
2199 2200 raise error.Abort(
2200 2201 _("patch %s does not exist") % filename)
2201 2202
2202 2203 if patchname:
2203 2204 self.checkpatchname(patchname, force)
2204 2205
2205 2206 self.ui.write(_('renaming %s to %s\n')
2206 2207 % (filename, patchname))
2207 2208 util.rename(originpath, self.join(patchname))
2208 2209 else:
2209 2210 patchname = filename
2210 2211
2211 2212 else:
2212 2213 if filename == '-' and not patchname:
2213 2214 raise error.Abort(_('need --name to import a patch from -'))
2214 2215 elif not patchname:
2215 2216 patchname = normname(os.path.basename(filename.rstrip('/')))
2216 2217 self.checkpatchname(patchname, force)
2217 2218 try:
2218 2219 if filename == '-':
2219 2220 text = self.ui.fin.read()
2220 2221 else:
2221 2222 fp = hg.openpath(self.ui, filename)
2222 2223 text = fp.read()
2223 2224 fp.close()
2224 2225 except (OSError, IOError):
2225 2226 raise error.Abort(_("unable to read file %s") % filename)
2226 2227 patchf = self.opener(patchname, "w")
2227 2228 patchf.write(text)
2228 2229 patchf.close()
2229 2230 if not force:
2230 2231 checkseries(patchname)
2231 2232 if patchname not in self.series:
2232 2233 index = self.fullseriesend() + i
2233 2234 self.fullseries[index:index] = [patchname]
2234 2235 self.parseseries()
2235 2236 self.seriesdirty = True
2236 2237 self.ui.warn(_("adding %s to series file\n") % patchname)
2237 2238 self.added.append(patchname)
2238 2239 imported.append(patchname)
2239 2240 patchname = None
2240 2241
2241 2242 self.removeundo(repo)
2242 2243 return imported
2243 2244
2244 2245 def fixkeepchangesopts(ui, opts):
2245 2246 if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
2246 2247 or opts.get('exact')):
2247 2248 return opts
2248 2249 opts = dict(opts)
2249 2250 opts['keep_changes'] = True
2250 2251 return opts
2251 2252
2252 2253 @command("qdelete|qremove|qrm",
2253 2254 [('k', 'keep', None, _('keep patch file')),
2254 2255 ('r', 'rev', [],
2255 2256 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2256 2257 _('hg qdelete [-k] [PATCH]...'))
2257 2258 def delete(ui, repo, *patches, **opts):
2258 2259 """remove patches from queue
2259 2260
2260 2261 The patches must not be applied, and at least one patch is required. Exact
2261 2262 patch identifiers must be given. With -k/--keep, the patch files are
2262 2263 preserved in the patch directory.
2263 2264
2264 2265 To stop managing a patch and move it into permanent history,
2265 2266 use the :hg:`qfinish` command."""
2266 2267 q = repo.mq
2267 2268 q.delete(repo, patches, opts)
2268 2269 q.savedirty()
2269 2270 return 0
2270 2271
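# Illustrative qdelete usage (patch names are made up; flags follow the help text above):
#
#   hg qdelete obsolete.patch        # remove an unapplied patch and its file
#   hg qdelete -k experiment.patch   # forget the patch but keep its file on disk
#   hg qfinish qbase                 # by contrast, move the bottom applied patch into history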
2271 2272 @command("qapplied",
2272 2273 [('1', 'last', None, _('show only the preceding applied patch'))
2273 2274 ] + seriesopts,
2274 2275 _('hg qapplied [-1] [-s] [PATCH]'))
2275 2276 def applied(ui, repo, patch=None, **opts):
2276 2277 """print the patches already applied
2277 2278
2278 2279 Returns 0 on success."""
2279 2280
2280 2281 q = repo.mq
2281 2282 opts = pycompat.byteskwargs(opts)
2282 2283
2283 2284 if patch:
2284 2285 if patch not in q.series:
2285 2286 raise error.Abort(_("patch %s is not in series file") % patch)
2286 2287 end = q.series.index(patch) + 1
2287 2288 else:
2288 2289 end = q.seriesend(True)
2289 2290
2290 2291 if opts.get('last') and not end:
2291 2292 ui.write(_("no patches applied\n"))
2292 2293 return 1
2293 2294 elif opts.get('last') and end == 1:
2294 2295 ui.write(_("only one patch applied\n"))
2295 2296 return 1
2296 2297 elif opts.get('last'):
2297 2298 start = end - 2
2298 2299 end = 1
2299 2300 else:
2300 2301 start = 0
2301 2302
2302 2303 q.qseries(repo, length=end, start=start, status='A',
2303 2304 summary=opts.get('summary'))
2304 2305
2305 2306
2306 2307 @command("qunapplied",
2307 2308 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2308 2309 _('hg qunapplied [-1] [-s] [PATCH]'))
2309 2310 def unapplied(ui, repo, patch=None, **opts):
2310 2311 """print the patches not yet applied
2311 2312
2312 2313 Returns 0 on success."""
2313 2314
2314 2315 q = repo.mq
2315 2316 opts = pycompat.byteskwargs(opts)
2316 2317 if patch:
2317 2318 if patch not in q.series:
2318 2319 raise error.Abort(_("patch %s is not in series file") % patch)
2319 2320 start = q.series.index(patch) + 1
2320 2321 else:
2321 2322 start = q.seriesend(True)
2322 2323
2323 2324 if start == len(q.series) and opts.get('first'):
2324 2325 ui.write(_("all patches applied\n"))
2325 2326 return 1
2326 2327
2327 2328 if opts.get('first'):
2328 2329 length = 1
2329 2330 else:
2330 2331 length = None
2331 2332 q.qseries(repo, start=start, length=length, status='U',
2332 2333 summary=opts.get('summary'))
2333 2334
2334 2335 @command("qimport",
2335 2336 [('e', 'existing', None, _('import file in patch directory')),
2336 2337 ('n', 'name', '',
2337 2338 _('name of patch file'), _('NAME')),
2338 2339 ('f', 'force', None, _('overwrite existing files')),
2339 2340 ('r', 'rev', [],
2340 2341 _('place existing revisions under mq control'), _('REV')),
2341 2342 ('g', 'git', None, _('use git extended diff format')),
2342 2343 ('P', 'push', None, _('qpush after importing'))],
2343 2344 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
2344 2345 def qimport(ui, repo, *filename, **opts):
2345 2346 """import a patch or existing changeset
2346 2347
2347 2348 The patch is inserted into the series after the last applied
2348 2349 patch. If no patches have been applied, qimport prepends the patch
2349 2350 to the series.
2350 2351
2351 2352 The patch will have the same name as its source file unless you
2352 2353 give it a new one with -n/--name.
2353 2354
2354 2355 You can register an existing patch inside the patch directory with
2355 2356 the -e/--existing flag.
2356 2357
2357 2358 With -f/--force, an existing patch of the same name will be
2358 2359 overwritten.
2359 2360
2360 2361 An existing changeset may be placed under mq control with -r/--rev
2361 2362 (e.g. qimport --rev . -n patch will place the current revision
2362 2363 under mq control). With -g/--git, patches imported with --rev will
2363 2364 use the git diff format. See the diffs help topic for information
2364 2365 on why this is important for preserving rename/copy information
2365 2366 and permission changes. Use :hg:`qfinish` to remove changesets
2366 2367 from mq control.
2367 2368
2368 2369 To import a patch from standard input, pass - as the patch file.
2369 2370 When importing from standard input, a patch name must be specified
2370 2371 using the --name flag.
2371 2372
2372 2373 To import an existing patch while renaming it::
2373 2374
2374 2375 hg qimport -e existing-patch -n new-name
2375 2376
2376 2377 Returns 0 if import succeeded.
2377 2378 """
2378 2379 opts = pycompat.byteskwargs(opts)
2379 2380 with repo.lock(): # because this may move phases
2380 2381 q = repo.mq
2381 2382 try:
2382 2383 imported = q.qimport(
2383 2384 repo, filename, patchname=opts.get('name'),
2384 2385 existing=opts.get('existing'), force=opts.get('force'),
2385 2386 rev=opts.get('rev'), git=opts.get('git'))
2386 2387 finally:
2387 2388 q.savedirty()
2388 2389
2389 2390 if imported and opts.get('push') and not opts.get('rev'):
2390 2391 return q.push(repo, imported[-1])
2391 2392 return 0
2392 2393
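# Illustrative qimport usage (patch and file names are made up; the flags are the
# ones documented in the help text above):
#
#   hg qimport ../fixes/overflow.patch         # import a patch file into the series
#   hg qimport -e old.patch -n renamed.patch   # register a patch already in the patch directory
#   hg qimport --rev . -n wip.patch            # place the working-directory parent under mq control
#   cat hotfix.patch | hg qimport -n hotfix.patch -   # read the patch from standard input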
2393 2394 def qinit(ui, repo, create):
2394 2395 """initialize a new queue repository
2395 2396
2396 2397 This command also creates a series file for ordering patches, and
2397 2398 an mq-specific .hgignore file in the queue repository, to exclude
2398 2399 the status and guards files (these contain mostly transient state).
2399 2400
2400 2401 Returns 0 if initialization succeeded."""
2401 2402 q = repo.mq
2402 2403 r = q.init(repo, create)
2403 2404 q.savedirty()
2404 2405 if r:
2405 2406 if not os.path.exists(r.wjoin('.hgignore')):
2406 2407 fp = r.wvfs('.hgignore', 'w')
2407 2408 fp.write('^\\.hg\n')
2408 2409 fp.write('^\\.mq\n')
2409 2410 fp.write('syntax: glob\n')
2410 2411 fp.write('status\n')
2411 2412 fp.write('guards\n')
2412 2413 fp.close()
2413 2414 if not os.path.exists(r.wjoin('series')):
2414 2415 r.wvfs('series', 'w').close()
2415 2416 r[None].add(['.hgignore', 'series'])
2416 2417 commands.add(ui, r)
2417 2418 return 0
2418 2419
2419 2420 @command("^qinit",
2420 2421 [('c', 'create-repo', None, _('create queue repository'))],
2421 2422 _('hg qinit [-c]'))
2422 2423 def init(ui, repo, **opts):
2423 2424 """init a new queue repository (DEPRECATED)
2424 2425
2425 2426 The queue repository is unversioned by default. If
2426 2427 -c/--create-repo is specified, qinit will create a separate nested
2427 2428 repository for patches (qinit -c may also be run later to convert
2428 2429 an unversioned patch repository into a versioned one). You can use
2429 2430 qcommit to commit changes to this queue repository.
2430 2431
2431 2432 This command is deprecated. Without -c, it's implied by other relevant
2432 2433 commands. With -c, use :hg:`init --mq` instead."""
2433 2434 return qinit(ui, repo, create=opts.get(r'create_repo'))
2434 2435
2435 2436 @command("qclone",
2436 2437 [('', 'pull', None, _('use pull protocol to copy metadata')),
2437 2438 ('U', 'noupdate', None,
2438 2439 _('do not update the new working directories')),
2439 2440 ('', 'uncompressed', None,
2440 2441 _('use uncompressed transfer (fast over LAN)')),
2441 2442 ('p', 'patches', '',
2442 2443 _('location of source patch repository'), _('REPO')),
2443 2444 ] + cmdutil.remoteopts,
2444 2445 _('hg qclone [OPTION]... SOURCE [DEST]'),
2445 2446 norepo=True)
2446 2447 def clone(ui, source, dest=None, **opts):
2447 2448 '''clone main and patch repository at same time
2448 2449
2449 2450 If source is local, destination will have no patches applied. If
2450 2451 source is remote, this command cannot check whether patches are
2451 2452 applied in source, so it cannot guarantee that patches are not
2452 2453 applied in destination. If you clone a remote repository, make sure
2453 2454 it has no patches applied before you clone.
2454 2455
2455 2456 Source patch repository is looked for in <src>/.hg/patches by
2456 2457 default. Use -p <url> to change.
2457 2458
2458 2459 The patch directory must be a nested Mercurial repository, as
2459 2460 would be created by :hg:`init --mq`.
2460 2461
2461 2462 Return 0 on success.
2462 2463 '''
2463 2464 opts = pycompat.byteskwargs(opts)
2464 2465 def patchdir(repo):
2465 2466 """compute a patch repo url from a repo object"""
2466 2467 url = repo.url()
2467 2468 if url.endswith('/'):
2468 2469 url = url[:-1]
2469 2470 return url + '/.hg/patches'
2470 2471
2471 2472 # main repo (destination and sources)
2472 2473 if dest is None:
2473 2474 dest = hg.defaultdest(source)
2474 2475 sr = hg.peer(ui, opts, ui.expandpath(source))
2475 2476
2476 2477 # patches repo (source only)
2477 2478 if opts.get('patches'):
2478 2479 patchespath = ui.expandpath(opts.get('patches'))
2479 2480 else:
2480 2481 patchespath = patchdir(sr)
2481 2482 try:
2482 2483 hg.peer(ui, opts, patchespath)
2483 2484 except error.RepoError:
2484 2485 raise error.Abort(_('versioned patch repository not found'
2485 2486 ' (see init --mq)'))
2486 2487 qbase, destrev = None, None
2487 2488 if sr.local():
2488 2489 repo = sr.local()
2489 2490 if repo.mq.applied and repo[qbase].phase() != phases.secret:
2490 2491 qbase = repo.mq.applied[0].node
2491 2492 if not hg.islocal(dest):
2492 2493 heads = set(repo.heads())
2493 2494 destrev = list(heads.difference(repo.heads(qbase)))
2494 2495 destrev.append(repo.changelog.parents(qbase)[0])
2495 2496 elif sr.capable('lookup'):
2496 2497 try:
2497 2498 qbase = sr.lookup('qbase')
2498 2499 except error.RepoError:
2499 2500 pass
2500 2501
2501 2502 ui.note(_('cloning main repository\n'))
2502 2503 sr, dr = hg.clone(ui, opts, sr.url(), dest,
2503 2504 pull=opts.get('pull'),
2504 2505 rev=destrev,
2505 2506 update=False,
2506 2507 stream=opts.get('uncompressed'))
2507 2508
2508 2509 ui.note(_('cloning patch repository\n'))
2509 2510 hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
2510 2511 pull=opts.get('pull'), update=not opts.get('noupdate'),
2511 2512 stream=opts.get('uncompressed'))
2512 2513
2513 2514 if dr.local():
2514 2515 repo = dr.local()
2515 2516 if qbase:
2516 2517 ui.note(_('stripping applied patches from destination '
2517 2518 'repository\n'))
2518 2519 strip(ui, repo, [qbase], update=False, backup=None)
2519 2520 if not opts.get('noupdate'):
2520 2521 ui.note(_('updating destination repository\n'))
2521 2522 hg.update(repo, repo.changelog.tip())
2522 2523
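# Illustrative qclone usage (URLs and paths are placeholders):
#
#   hg qclone ssh://host/repo repo-copy               # clone repo and repo/.hg/patches together
#   hg qclone -p ../shared-patches ../upstream copy   # take the patch repo from another location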
2523 2524 @command("qcommit|qci",
2524 2525 commands.table["^commit|ci"][1],
2525 2526 _('hg qcommit [OPTION]... [FILE]...'),
2526 2527 inferrepo=True)
2527 2528 def commit(ui, repo, *pats, **opts):
2528 2529 """commit changes in the queue repository (DEPRECATED)
2529 2530
2530 2531 This command is deprecated; use :hg:`commit --mq` instead."""
2531 2532 q = repo.mq
2532 2533 r = q.qrepo()
2533 2534 if not r:
2534 2535 raise error.Abort('no queue repository')
2535 2536 commands.commit(r.ui, r, *pats, **opts)
2536 2537
2537 2538 @command("qseries",
2538 2539 [('m', 'missing', None, _('print patches not in series')),
2539 2540 ] + seriesopts,
2540 2541 _('hg qseries [-ms]'))
2541 2542 def series(ui, repo, **opts):
2542 2543 """print the entire series file
2543 2544
2544 2545 Returns 0 on success."""
2545 2546 repo.mq.qseries(repo, missing=opts.get(r'missing'),
2546 2547 summary=opts.get(r'summary'))
2547 2548 return 0
2548 2549
2549 2550 @command("qtop", seriesopts, _('hg qtop [-s]'))
2550 2551 def top(ui, repo, **opts):
2551 2552 """print the name of the current patch
2552 2553
2553 2554 Returns 0 on success."""
2554 2555 q = repo.mq
2555 2556 if q.applied:
2556 2557 t = q.seriesend(True)
2557 2558 else:
2558 2559 t = 0
2559 2560
2560 2561 if t:
2561 2562 q.qseries(repo, start=t - 1, length=1, status='A',
2562 2563 summary=opts.get(r'summary'))
2563 2564 else:
2564 2565 ui.write(_("no patches applied\n"))
2565 2566 return 1
2566 2567
2567 2568 @command("qnext", seriesopts, _('hg qnext [-s]'))
2568 2569 def next(ui, repo, **opts):
2569 2570 """print the name of the next pushable patch
2570 2571
2571 2572 Returns 0 on success."""
2572 2573 q = repo.mq
2573 2574 end = q.seriesend()
2574 2575 if end == len(q.series):
2575 2576 ui.write(_("all patches applied\n"))
2576 2577 return 1
2577 2578 q.qseries(repo, start=end, length=1, summary=opts.get(r'summary'))
2578 2579
2579 2580 @command("qprev", seriesopts, _('hg qprev [-s]'))
2580 2581 def prev(ui, repo, **opts):
2581 2582 """print the name of the preceding applied patch
2582 2583
2583 2584 Returns 0 on success."""
2584 2585 q = repo.mq
2585 2586 l = len(q.applied)
2586 2587 if l == 1:
2587 2588 ui.write(_("only one patch applied\n"))
2588 2589 return 1
2589 2590 if not l:
2590 2591 ui.write(_("no patches applied\n"))
2591 2592 return 1
2592 2593 idx = q.series.index(q.applied[-2].name)
2593 2594 q.qseries(repo, start=idx, length=1, status='A',
2594 2595 summary=opts.get(r'summary'))
2595 2596
2596 2597 def setupheaderopts(ui, opts):
2597 2598 if not opts.get('user') and opts.get('currentuser'):
2598 2599 opts['user'] = ui.username()
2599 2600 if not opts.get('date') and opts.get('currentdate'):
2600 2601 opts['date'] = "%d %d" % util.makedate()
2601 2602
2602 2603 @command("^qnew",
2603 2604 [('e', 'edit', None, _('invoke editor on commit messages')),
2604 2605 ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
2605 2606 ('g', 'git', None, _('use git extended diff format')),
2606 2607 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2607 2608 ('u', 'user', '',
2608 2609 _('add "From: <USER>" to patch'), _('USER')),
2609 2610 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2610 2611 ('d', 'date', '',
2611 2612 _('add "Date: <DATE>" to patch'), _('DATE'))
2612 2613 ] + cmdutil.walkopts + cmdutil.commitopts,
2613 2614 _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
2614 2615 inferrepo=True)
2615 2616 def new(ui, repo, patch, *args, **opts):
2616 2617 """create a new patch
2617 2618
2618 2619 qnew creates a new patch on top of the currently-applied patch (if
2619 2620 any). The patch will be initialized with any outstanding changes
2620 2621 in the working directory. You may also use -I/--include,
2621 2622 -X/--exclude, and/or a list of files after the patch name to add
2622 2623 only changes to matching files to the new patch, leaving the rest
2623 2624 as uncommitted modifications.
2624 2625
2625 2626 -u/--user and -d/--date can be used to set the (given) user and
2626 2627 date, respectively. -U/--currentuser and -D/--currentdate set user
2627 2628 to current user and date to current date.
2628 2629
2629 2630 -e/--edit, -m/--message or -l/--logfile set the patch header as
2630 2631 well as the commit message. If none is specified, the header is
2631 2632 empty and the commit message is '[mq]: PATCH'.
2632 2633
2633 2634 Use the -g/--git option to keep the patch in the git extended diff
2634 2635 format. Read the diffs help topic for more information on why this
2635 2636 is important for preserving permission changes and copy/rename
2636 2637 information.
2637 2638
2638 2639 Returns 0 on successful creation of a new patch.
2639 2640 """
2640 2641 opts = pycompat.byteskwargs(opts)
2641 2642 msg = cmdutil.logmessage(ui, opts)
2642 2643 q = repo.mq
2643 2644 opts['msg'] = msg
2644 2645 setupheaderopts(ui, opts)
2645 2646 q.new(repo, patch, *args, **pycompat.strkwargs(opts))
2646 2647 q.savedirty()
2647 2648 return 0
2648 2649
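# Illustrative qnew usage (patch and file names are made up):
#
#   hg qnew fix-overflow.patch                         # new patch from all outstanding changes
#   hg qnew -m "bar: fix crash" bar.patch src/bar.c    # include only src/bar.c in the patch
#   hg qnew -e -U -D api-change.patch                  # edit the message, record current user and date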
2649 2650 @command("^qrefresh",
2650 2651 [('e', 'edit', None, _('invoke editor on commit messages')),
2651 2652 ('g', 'git', None, _('use git extended diff format')),
2652 2653 ('s', 'short', None,
2653 2654 _('refresh only files already in the patch and specified files')),
2654 2655 ('U', 'currentuser', None,
2655 2656 _('add/update author field in patch with current user')),
2656 2657 ('u', 'user', '',
2657 2658 _('add/update author field in patch with given user'), _('USER')),
2658 2659 ('D', 'currentdate', None,
2659 2660 _('add/update date field in patch with current date')),
2660 2661 ('d', 'date', '',
2661 2662 _('add/update date field in patch with given date'), _('DATE'))
2662 2663 ] + cmdutil.walkopts + cmdutil.commitopts,
2663 2664 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
2664 2665 inferrepo=True)
2665 2666 def refresh(ui, repo, *pats, **opts):
2666 2667 """update the current patch
2667 2668
2668 2669 If any file patterns are provided, the refreshed patch will
2669 2670 contain only the modifications that match those patterns; the
2670 2671 remaining modifications will remain in the working directory.
2671 2672
2672 2673 If -s/--short is specified, files currently included in the patch
2673 2674 will be refreshed just like matched files and remain in the patch.
2674 2675
2675 2676 If -e/--edit is specified, Mercurial will start your configured editor for
2676 2677 you to enter a message. In case qrefresh fails, you will find a backup of
2677 2678 your message in ``.hg/last-message.txt``.
2678 2679
2679 2680 hg add/remove/copy/rename work as usual, though you might want to
2680 2681 use git-style patches (-g/--git or [diff] git=1) to track copies
2681 2682 and renames. See the diffs help topic for more information on the
2682 2683 git diff format.
2683 2684
2684 2685 Returns 0 on success.
2685 2686 """
2686 2687 opts = pycompat.byteskwargs(opts)
2687 2688 q = repo.mq
2688 2689 message = cmdutil.logmessage(ui, opts)
2689 2690 setupheaderopts(ui, opts)
2690 2691 with repo.wlock():
2691 2692 ret = q.refresh(repo, pats, msg=message, **pycompat.strkwargs(opts))
2692 2693 q.savedirty()
2693 2694 return ret
2694 2695
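# Illustrative qrefresh usage (include/api.h is a made-up example path):
#
#   hg qrefresh                    # fold current working-directory changes into the top patch
#   hg qrefresh -e                 # same, but reopen the editor on the patch message
#   hg qrefresh -s include/api.h   # refresh files already in the patch plus include/api.h only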
2695 2696 @command("^qdiff",
2696 2697 cmdutil.diffopts + cmdutil.diffopts2 + cmdutil.walkopts,
2697 2698 _('hg qdiff [OPTION]... [FILE]...'),
2698 2699 inferrepo=True)
2699 2700 def diff(ui, repo, *pats, **opts):
2700 2701 """diff of the current patch and subsequent modifications
2701 2702
2702 2703 Shows a diff which includes the current patch as well as any
2703 2704 changes which have been made in the working directory since the
2704 2705 last refresh (thus showing what the current patch would become
2705 2706 after a qrefresh).
2706 2707
2707 2708 Use :hg:`diff` if you only want to see the changes made since the
2708 2709 last qrefresh, or :hg:`export qtip` if you want to see changes
2709 2710 made by the current patch without including changes made since the
2710 2711 qrefresh.
2711 2712
2712 2713 Returns 0 on success.
2713 2714 """
2714 2715 ui.pager('qdiff')
2715 2716 repo.mq.diff(repo, pats, pycompat.byteskwargs(opts))
2716 2717 return 0
2717 2718
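# How the three diff views described above relate (illustrative summary):
#
#   hg qdiff          # current patch plus changes made since the last qrefresh
#   hg diff           # only the changes made since the last qrefresh
#   hg export qtip    # only what the current patch already records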
2718 2719 @command('qfold',
2719 2720 [('e', 'edit', None, _('invoke editor on commit messages')),
2720 2721 ('k', 'keep', None, _('keep folded patch files')),
2721 2722 ] + cmdutil.commitopts,
2722 2723 _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
2723 2724 def fold(ui, repo, *files, **opts):
2724 2725 """fold the named patches into the current patch
2725 2726
2726 2727 Patches must not yet be applied. Each patch will be successively
2727 2728 applied to the current patch in the order given. If all the
2728 2729 patches apply successfully, the current patch will be refreshed
2729 2730 with the new cumulative patch, and the folded patches will be
2730 2731 deleted. With -k/--keep, the folded patch files will not be
2731 2732 removed afterwards.
2732 2733
2733 2734 The header for each folded patch will be concatenated with the
2734 2735 current patch header, separated by a line of ``* * *``.
2735 2736
2736 2737 Returns 0 on success."""
2737 2738 opts = pycompat.byteskwargs(opts)
2738 2739 q = repo.mq
2739 2740 if not files:
2740 2741 raise error.Abort(_('qfold requires at least one patch name'))
2741 2742 if not q.checktoppatch(repo)[0]:
2742 2743 raise error.Abort(_('no patches applied'))
2743 2744 q.checklocalchanges(repo)
2744 2745
2745 2746 message = cmdutil.logmessage(ui, opts)
2746 2747
2747 2748 parent = q.lookup('qtip')
2748 2749 patches = []
2749 2750 messages = []
2750 2751 for f in files:
2751 2752 p = q.lookup(f)
2752 2753 if p in patches or p == parent:
2753 2754 ui.warn(_('skipping already folded patch %s\n') % p)
2754 2755 if q.isapplied(p):
2755 2756 raise error.Abort(_('qfold cannot fold already applied patch %s')
2756 2757 % p)
2757 2758 patches.append(p)
2758 2759
2759 2760 for p in patches:
2760 2761 if not message:
2761 2762 ph = patchheader(q.join(p), q.plainmode)
2762 2763 if ph.message:
2763 2764 messages.append(ph.message)
2764 2765 pf = q.join(p)
2765 2766 (patchsuccess, files, fuzz) = q.patch(repo, pf)
2766 2767 if not patchsuccess:
2767 2768 raise error.Abort(_('error folding patch %s') % p)
2768 2769
2769 2770 if not message:
2770 2771 ph = patchheader(q.join(parent), q.plainmode)
2771 2772 message = ph.message
2772 2773 for msg in messages:
2773 2774 if msg:
2774 2775 if message:
2775 2776 message.append('* * *')
2776 2777 message.extend(msg)
2777 2778 message = '\n'.join(message)
2778 2779
2779 2780 diffopts = q.patchopts(q.diffopts(), *patches)
2780 2781 with repo.wlock():
2781 2782 q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
2782 2783 editform='mq.qfold')
2783 2784 q.delete(repo, patches, opts)
2784 2785 q.savedirty()
2785 2786
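# Illustrative qfold usage (patch names are made up):
#
#   hg qfold second.patch third.patch   # apply both onto the current patch, refresh it with the
#                                       # combined diff, and delete the folded patch files
#   hg qfold -e -k cleanup.patch        # same for one patch, but edit the message and keep the file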
2786 2787 @command("qgoto",
2787 2788 [('', 'keep-changes', None,
2788 2789 _('tolerate non-conflicting local changes')),
2789 2790 ('f', 'force', None, _('overwrite any local changes')),
2790 2791 ('', 'no-backup', None, _('do not save backup copies of files'))],
2791 2792 _('hg qgoto [OPTION]... PATCH'))
2792 2793 def goto(ui, repo, patch, **opts):
2793 2794 '''push or pop patches until named patch is at top of stack
2794 2795
2795 2796 Returns 0 on success.'''
2796 2797 opts = pycompat.byteskwargs(opts)
2797 2798 opts = fixkeepchangesopts(ui, opts)
2798 2799 q = repo.mq
2799 2800 patch = q.lookup(patch)
2800 2801 nobackup = opts.get('no_backup')
2801 2802 keepchanges = opts.get('keep_changes')
2802 2803 if q.isapplied(patch):
2803 2804 ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
2804 2805 keepchanges=keepchanges)
2805 2806 else:
2806 2807 ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
2807 2808 keepchanges=keepchanges)
2808 2809 q.savedirty()
2809 2810 return ret
2810 2811
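# Illustrative qgoto usage (patch names are made up):
#
#   hg qgoto locking-fix.patch               # push or pop until locking-fix.patch is the top patch
#   hg qgoto --keep-changes baseline.patch   # same, aborting only if local changes overlap patched files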
2811 2812 @command("qguard",
2812 2813 [('l', 'list', None, _('list all patches and guards')),
2813 2814 ('n', 'none', None, _('drop all guards'))],
2814 2815 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2815 2816 def guard(ui, repo, *args, **opts):
2816 2817 '''set or print guards for a patch
2817 2818
2818 2819 Guards control whether a patch can be pushed. A patch with no
2819 2820 guards is always pushed. A patch with a positive guard ("+foo") is
2820 2821 pushed only if the :hg:`qselect` command has activated it. A patch with
2821 2822 a negative guard ("-foo") is never pushed if the :hg:`qselect` command
2822 2823 has activated it.
2823 2824
2824 2825 With no arguments, print the currently active guards.
2825 2826 With arguments, set guards for the named patch.
2826 2827
2827 2828 .. note::
2828 2829
2829 2830 Specifying negative guards now requires '--'.
2830 2831
2831 2832 To set guards on another patch::
2832 2833
2833 2834 hg qguard other.patch -- +2.6.17 -stable
2834 2835
2835 2836 Returns 0 on success.
2836 2837 '''
2837 2838 def status(idx):
2838 2839 guards = q.seriesguards[idx] or ['unguarded']
2839 2840 if q.series[idx] in applied:
2840 2841 state = 'applied'
2841 2842 elif q.pushable(idx)[0]:
2842 2843 state = 'unapplied'
2843 2844 else:
2844 2845 state = 'guarded'
2845 2846 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2846 2847 ui.write('%s: ' % ui.label(q.series[idx], label))
2847 2848
2848 2849 for i, guard in enumerate(guards):
2849 2850 if guard.startswith('+'):
2850 2851 ui.write(guard, label='qguard.positive')
2851 2852 elif guard.startswith('-'):
2852 2853 ui.write(guard, label='qguard.negative')
2853 2854 else:
2854 2855 ui.write(guard, label='qguard.unguarded')
2855 2856 if i != len(guards) - 1:
2856 2857 ui.write(' ')
2857 2858 ui.write('\n')
2858 2859 q = repo.mq
2859 2860 applied = set(p.name for p in q.applied)
2860 2861 patch = None
2861 2862 args = list(args)
2862 2863 if opts.get(r'list'):
2863 2864 if args or opts.get('none'):
2864 2865 raise error.Abort(_('cannot mix -l/--list with options or '
2865 2866 'arguments'))
2866 2867 for i in xrange(len(q.series)):
2867 2868 status(i)
2868 2869 return
2869 2870 if not args or args[0][0:1] in '-+':
2870 2871 if not q.applied:
2871 2872 raise error.Abort(_('no patches applied'))
2872 2873 patch = q.applied[-1].name
2873 2874 if patch is None and args[0][0:1] not in '-+':
2874 2875 patch = args.pop(0)
2875 2876 if patch is None:
2876 2877 raise error.Abort(_('no patch to work with'))
2877 2878 if args or opts.get('none'):
2878 2879 idx = q.findseries(patch)
2879 2880 if idx is None:
2880 2881 raise error.Abort(_('no patch named %s') % patch)
2881 2882 q.setguards(idx, args)
2882 2883 q.savedirty()
2883 2884 else:
2884 2885 status(q.series.index(q.lookup(patch)))
2885 2886
2886 2887 @command("qheader", [], _('hg qheader [PATCH]'))
2887 2888 def header(ui, repo, patch=None):
2888 2889 """print the header of the topmost or specified patch
2889 2890
2890 2891 Returns 0 on success."""
2891 2892 q = repo.mq
2892 2893
2893 2894 if patch:
2894 2895 patch = q.lookup(patch)
2895 2896 else:
2896 2897 if not q.applied:
2897 2898 ui.write(_('no patches applied\n'))
2898 2899 return 1
2899 2900 patch = q.lookup('qtip')
2900 2901 ph = patchheader(q.join(patch), q.plainmode)
2901 2902
2902 2903 ui.write('\n'.join(ph.message) + '\n')
2903 2904
2904 2905 def lastsavename(path):
2905 2906 (directory, base) = os.path.split(path)
2906 2907 names = os.listdir(directory)
2907 2908 namere = re.compile("%s.([0-9]+)" % base)
2908 2909 maxindex = None
2909 2910 maxname = None
2910 2911 for f in names:
2911 2912 m = namere.match(f)
2912 2913 if m:
2913 2914 index = int(m.group(1))
2914 2915 if maxindex is None or index > maxindex:
2915 2916 maxindex = index
2916 2917 maxname = f
2917 2918 if maxname:
2918 2919 return (os.path.join(directory, maxname), maxindex)
2919 2920 return (None, None)
2920 2921
2921 2922 def savename(path):
2922 2923 (last, index) = lastsavename(path)
2923 2924 if last is None:
2924 2925 index = 0
2925 2926 newpath = path + ".%d" % (index + 1)
2926 2927 return newpath
2927 2928
2928 2929 @command("^qpush",
2929 2930 [('', 'keep-changes', None,
2930 2931 _('tolerate non-conflicting local changes')),
2931 2932 ('f', 'force', None, _('apply on top of local changes')),
2932 2933 ('e', 'exact', None,
2933 2934 _('apply the target patch to its recorded parent')),
2934 2935 ('l', 'list', None, _('list patch name in commit text')),
2935 2936 ('a', 'all', None, _('apply all patches')),
2936 2937 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2937 2938 ('n', 'name', '',
2938 2939 _('merge queue name (DEPRECATED)'), _('NAME')),
2939 2940 ('', 'move', None,
2940 2941 _('reorder patch series and apply only the patch')),
2941 2942 ('', 'no-backup', None, _('do not save backup copies of files'))],
2942 2943 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2943 2944 def push(ui, repo, patch=None, **opts):
2944 2945 """push the next patch onto the stack
2945 2946
2946 2947 By default, abort if the working directory contains uncommitted
2947 2948 changes. With --keep-changes, abort only if the uncommitted files
2948 2949 overlap with patched files. With -f/--force, backup and patch over
2949 2950 uncommitted changes.
2950 2951
2951 2952 Return 0 on success.
2952 2953 """
2953 2954 q = repo.mq
2954 2955 mergeq = None
2955 2956
2956 2957 opts = pycompat.byteskwargs(opts)
2957 2958 opts = fixkeepchangesopts(ui, opts)
2958 2959 if opts.get('merge'):
2959 2960 if opts.get('name'):
2960 2961 newpath = repo.vfs.join(opts.get('name'))
2961 2962 else:
2962 2963 newpath, i = lastsavename(q.path)
2963 2964 if not newpath:
2964 2965 ui.warn(_("no saved queues found, please use -n\n"))
2965 2966 return 1
2966 2967 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2967 2968 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2968 2969 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2969 2970 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2970 2971 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2971 2972 keepchanges=opts.get('keep_changes'))
2972 2973 return ret
2973 2974
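# Illustrative qpush usage (widget.patch is a made-up name):
#
#   hg qpush                       # apply the next pushable patch in the series
#   hg qpush -a                    # apply all remaining patches
#   hg qpush --move widget.patch   # reorder the series so widget.patch comes next, then apply only it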
2974 2975 @command("^qpop",
2975 2976 [('a', 'all', None, _('pop all patches')),
2976 2977 ('n', 'name', '',
2977 2978 _('queue name to pop (DEPRECATED)'), _('NAME')),
2978 2979 ('', 'keep-changes', None,
2979 2980 _('tolerate non-conflicting local changes')),
2980 2981 ('f', 'force', None, _('forget any local changes to patched files')),
2981 2982 ('', 'no-backup', None, _('do not save backup copies of files'))],
2982 2983 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2983 2984 def pop(ui, repo, patch=None, **opts):
2984 2985 """pop the current patch off the stack
2985 2986
2986 2987 Without argument, pops off the top of the patch stack. If given a
2987 2988 patch name, keeps popping off patches until the named patch is at
2988 2989 the top of the stack.
2989 2990
2990 2991 By default, abort if the working directory contains uncommitted
2991 2992 changes. With --keep-changes, abort only if the uncommitted files
2992 2993 overlap with patched files. With -f/--force, backup and discard
2993 2994 changes made to such files.
2994 2995
2995 2996 Return 0 on success.
2996 2997 """
2997 2998 opts = pycompat.byteskwargs(opts)
2998 2999 opts = fixkeepchangesopts(ui, opts)
2999 3000 localupdate = True
3000 3001 if opts.get('name'):
3001 3002 q = queue(ui, repo.baseui, repo.path, repo.vfs.join(opts.get('name')))
3002 3003 ui.warn(_('using patch queue: %s\n') % q.path)
3003 3004 localupdate = False
3004 3005 else:
3005 3006 q = repo.mq
3006 3007 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
3007 3008 all=opts.get('all'), nobackup=opts.get('no_backup'),
3008 3009 keepchanges=opts.get('keep_changes'))
3009 3010 q.savedirty()
3010 3011 return ret
3011 3012
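# Illustrative qpop usage (base.patch is a made-up name):
#
#   hg qpop              # pop the top patch off the stack
#   hg qpop -a           # pop every applied patch
#   hg qpop base.patch   # keep popping until base.patch is the top applied patch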
3012 3013 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
3013 3014 def rename(ui, repo, patch, name=None, **opts):
3014 3015 """rename a patch
3015 3016
3016 3017 With one argument, renames the current patch to PATCH1.
3017 3018 With two arguments, renames PATCH1 to PATCH2.
3018 3019
3019 3020 Returns 0 on success."""
3020 3021 q = repo.mq
3021 3022 if not name:
3022 3023 name = patch
3023 3024 patch = None
3024 3025
3025 3026 if patch:
3026 3027 patch = q.lookup(patch)
3027 3028 else:
3028 3029 if not q.applied:
3029 3030 ui.write(_('no patches applied\n'))
3030 3031 return
3031 3032 patch = q.lookup('qtip')
3032 3033 absdest = q.join(name)
3033 3034 if os.path.isdir(absdest):
3034 3035 name = normname(os.path.join(name, os.path.basename(patch)))
3035 3036 absdest = q.join(name)
3036 3037 q.checkpatchname(name)
3037 3038
3038 3039 ui.note(_('renaming %s to %s\n') % (patch, name))
3039 3040 i = q.findseries(patch)
3040 3041 guards = q.guard_re.findall(q.fullseries[i])
3041 3042 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
3042 3043 q.parseseries()
3043 3044 q.seriesdirty = True
3044 3045
3045 3046 info = q.isapplied(patch)
3046 3047 if info:
3047 3048 q.applied[info[0]] = statusentry(info[1], name)
3048 3049 q.applieddirty = True
3049 3050
3050 3051 destdir = os.path.dirname(absdest)
3051 3052 if not os.path.isdir(destdir):
3052 3053 os.makedirs(destdir)
3053 3054 util.rename(q.join(patch), absdest)
3054 3055 r = q.qrepo()
3055 3056 if r and patch in r.dirstate:
3056 3057 wctx = r[None]
3057 3058 with r.wlock():
3058 3059 if r.dirstate[patch] == 'a':
3059 3060 r.dirstate.drop(patch)
3060 3061 r.dirstate.add(name)
3061 3062 else:
3062 3063 wctx.copy(patch, name)
3063 3064 wctx.forget([patch])
3064 3065
3065 3066 q.savedirty()
3066 3067
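# Illustrative qrename usage (patch names are made up):
#
#   hg qrename better-name.patch               # rename the current (top) patch
#   hg qrename old-name.patch new-name.patch   # rename a specific patch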
3067 3068 @command("qrestore",
3068 3069 [('d', 'delete', None, _('delete save entry')),
3069 3070 ('u', 'update', None, _('update queue working directory'))],
3070 3071 _('hg qrestore [-d] [-u] REV'))
3071 3072 def restore(ui, repo, rev, **opts):
3072 3073 """restore the queue state saved by a revision (DEPRECATED)
3073 3074
3074 3075 This command is deprecated, use :hg:`rebase` instead."""
3075 3076 rev = repo.lookup(rev)
3076 3077 q = repo.mq
3077 3078 q.restore(repo, rev, delete=opts.get(r'delete'),
3078 3079 qupdate=opts.get(r'update'))
3079 3080 q.savedirty()
3080 3081 return 0
3081 3082
3082 3083 @command("qsave",
3083 3084 [('c', 'copy', None, _('copy patch directory')),
3084 3085 ('n', 'name', '',
3085 3086 _('copy directory name'), _('NAME')),
3086 3087 ('e', 'empty', None, _('clear queue status file')),
3087 3088 ('f', 'force', None, _('force copy'))] + cmdutil.commitopts,
3088 3089 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
3089 3090 def save(ui, repo, **opts):
3090 3091 """save current queue state (DEPRECATED)
3091 3092
3092 3093 This command is deprecated, use :hg:`rebase` instead."""
3093 3094 q = repo.mq
3094 3095 opts = pycompat.byteskwargs(opts)
3095 3096 message = cmdutil.logmessage(ui, opts)
3096 3097 ret = q.save(repo, msg=message)
3097 3098 if ret:
3098 3099 return ret
3099 3100 q.savedirty() # save to .hg/patches before copying
3100 3101 if opts.get('copy'):
3101 3102 path = q.path
3102 3103 if opts.get('name'):
3103 3104 newpath = os.path.join(q.basepath, opts.get('name'))
3104 3105 if os.path.exists(newpath):
3105 3106 if not os.path.isdir(newpath):
3106 3107 raise error.Abort(_('destination %s exists and is not '
3107 3108 'a directory') % newpath)
3108 3109 if not opts.get('force'):
3109 3110 raise error.Abort(_('destination %s exists, '
3110 3111 'use -f to force') % newpath)
3111 3112 else:
3112 3113 newpath = savename(path)
3113 3114 ui.warn(_("copy %s to %s\n") % (path, newpath))
3114 3115 util.copyfiles(path, newpath)
3115 3116 if opts.get('empty'):
3116 3117 del q.applied[:]
3117 3118 q.applieddirty = True
3118 3119 q.savedirty()
3119 3120 return 0
3120 3121
3121 3122
3122 3123 @command("qselect",
3123 3124 [('n', 'none', None, _('disable all guards')),
3124 3125 ('s', 'series', None, _('list all guards in series file')),
3125 3126 ('', 'pop', None, _('pop to before first guarded applied patch')),
3126 3127 ('', 'reapply', None, _('pop, then reapply patches'))],
3127 3128 _('hg qselect [OPTION]... [GUARD]...'))
3128 3129 def select(ui, repo, *args, **opts):
3129 3130 '''set or print guarded patches to push
3130 3131
3131 3132 Use the :hg:`qguard` command to set or print guards on a patch, then use
3132 3133 qselect to tell mq which guards to use. A patch will be pushed if
3133 3134 it has no guards or any positive guards match the currently
3134 3135 selected guard, but will not be pushed if any negative guards
3135 3136 match the current guard. For example::
3136 3137
3137 3138 qguard foo.patch -- -stable (negative guard)
3138 3139 qguard bar.patch +stable (positive guard)
3139 3140 qselect stable
3140 3141
3141 3142 This activates the "stable" guard. mq will skip foo.patch (because
3142 3143 it has a negative match) but push bar.patch (because it has a
3143 3144 positive match).
3144 3145
3145 3146 With no arguments, prints the currently active guards.
3146 3147 With one argument, sets the active guard.
3147 3148
3148 3149 Use -n/--none to deactivate guards (no other arguments needed).
3149 3150 When no guards are active, patches with positive guards are
3150 3151 skipped and patches with negative guards are pushed.
3151 3152
3152 3153 qselect can change the guards on applied patches. It does not pop
3153 3154 guarded patches by default. Use --pop to pop back to the last
3154 3155 applied patch that is not guarded. Use --reapply (which implies
3155 3156 --pop) to push back to the current patch afterwards, but skip
3156 3157 guarded patches.
3157 3158
3158 3159 Use -s/--series to print a list of all guards in the series file
3159 3160 (no other arguments needed). Use -v for more information.
3160 3161
3161 3162 Returns 0 on success.'''
3162 3163
3163 3164 q = repo.mq
3164 3165 opts = pycompat.byteskwargs(opts)
3165 3166 guards = q.active()
3166 3167 pushable = lambda i: q.pushable(q.applied[i].name)[0]
3167 3168 if args or opts.get('none'):
3168 3169 old_unapplied = q.unapplied(repo)
3169 3170 old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3170 3171 q.setactive(args)
3171 3172 q.savedirty()
3172 3173 if not args:
3173 3174 ui.status(_('guards deactivated\n'))
3174 3175 if not opts.get('pop') and not opts.get('reapply'):
3175 3176 unapplied = q.unapplied(repo)
3176 3177 guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3177 3178 if len(unapplied) != len(old_unapplied):
3178 3179 ui.status(_('number of unguarded, unapplied patches has '
3179 3180 'changed from %d to %d\n') %
3180 3181 (len(old_unapplied), len(unapplied)))
3181 3182 if len(guarded) != len(old_guarded):
3182 3183 ui.status(_('number of guarded, applied patches has changed '
3183 3184 'from %d to %d\n') %
3184 3185 (len(old_guarded), len(guarded)))
3185 3186 elif opts.get('series'):
3186 3187 guards = {}
3187 3188 noguards = 0
3188 3189 for gs in q.seriesguards:
3189 3190 if not gs:
3190 3191 noguards += 1
3191 3192 for g in gs:
3192 3193 guards.setdefault(g, 0)
3193 3194 guards[g] += 1
3194 3195 if ui.verbose:
3195 3196 guards['NONE'] = noguards
3196 3197 guards = guards.items()
3197 3198 guards.sort(key=lambda x: x[0][1:])
3198 3199 if guards:
3199 3200 ui.note(_('guards in series file:\n'))
3200 3201 for guard, count in guards:
3201 3202 ui.note('%2d ' % count)
3202 3203 ui.write(guard, '\n')
3203 3204 else:
3204 3205 ui.note(_('no guards in series file\n'))
3205 3206 else:
3206 3207 if guards:
3207 3208 ui.note(_('active guards:\n'))
3208 3209 for g in guards:
3209 3210 ui.write(g, '\n')
3210 3211 else:
3211 3212 ui.write(_('no active guards\n'))
3212 3213 reapply = opts.get('reapply') and q.applied and q.applied[-1].name
3213 3214 popped = False
3214 3215 if opts.get('pop') or opts.get('reapply'):
3215 3216 for i in xrange(len(q.applied)):
3216 3217 if not pushable(i):
3217 3218 ui.status(_('popping guarded patches\n'))
3218 3219 popped = True
3219 3220 if i == 0:
3220 3221 q.pop(repo, all=True)
3221 3222 else:
3222 3223 q.pop(repo, q.applied[i - 1].name)
3223 3224 break
3224 3225 if popped:
3225 3226 try:
3226 3227 if reapply:
3227 3228 ui.status(_('reapplying unguarded patches\n'))
3228 3229 q.push(repo, reapply)
3229 3230 finally:
3230 3231 q.savedirty()
3231 3232
3232 3233 @command("qfinish",
3233 3234 [('a', 'applied', None, _('finish all applied changesets'))],
3234 3235 _('hg qfinish [-a] [REV]...'))
3235 3236 def finish(ui, repo, *revrange, **opts):
3236 3237 """move applied patches into repository history
3237 3238
3238 3239 Finishes the specified revisions (corresponding to applied
3239 3240 patches) by moving them out of mq control into regular repository
3240 3241 history.
3241 3242
3242 3243 Accepts a revision range or the -a/--applied option. If --applied
3243 3244 is specified, all applied mq revisions are removed from mq
3244 3245 control. Otherwise, the given revisions must be at the base of the
3245 3246 stack of applied patches.
3246 3247
3247 3248 This can be especially useful if your changes have been applied to
3248 3249 an upstream repository, or if you are about to push your changes
3249 3250 to upstream.
3250 3251
3251 3252 Returns 0 on success.
3252 3253 """
3253 3254 if not opts.get(r'applied') and not revrange:
3254 3255 raise error.Abort(_('no revisions specified'))
3255 3256 elif opts.get(r'applied'):
3256 3257 revrange = ('qbase::qtip',) + revrange
3257 3258
3258 3259 q = repo.mq
3259 3260 if not q.applied:
3260 3261 ui.status(_('no patches applied\n'))
3261 3262 return 0
3262 3263
3263 3264 revs = scmutil.revrange(repo, revrange)
3264 3265 if repo['.'].rev() in revs and repo[None].files():
3265 3266 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3266 3267 # queue.finish may change phases but leaves the responsibility of locking the
3267 3268 # repo to the caller, to avoid a deadlock with wlock. This command code is
3268 3269 # responsible for this locking.
3269 3270 with repo.lock():
3270 3271 q.finish(repo, revs)
3271 3272 q.savedirty()
3272 3273 return 0
3273 3274
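# Illustrative qfinish usage:
#
#   hg qfinish qbase   # move the bottom applied patch into regular history
#   hg qfinish -a      # finish every applied patch (equivalent to the qbase::qtip range)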
3274 3275 @command("qqueue",
3275 3276 [('l', 'list', False, _('list all available queues')),
3276 3277 ('', 'active', False, _('print name of active queue')),
3277 3278 ('c', 'create', False, _('create new queue')),
3278 3279 ('', 'rename', False, _('rename active queue')),
3279 3280 ('', 'delete', False, _('delete reference to queue')),
3280 3281 ('', 'purge', False, _('delete queue, and remove patch dir')),
3281 3282 ],
3282 3283 _('[OPTION] [QUEUE]'))
3283 3284 def qqueue(ui, repo, name=None, **opts):
3284 3285 '''manage multiple patch queues
3285 3286
3286 3287 Supports switching between different patch queues, as well as creating
3287 3288 new patch queues and deleting existing ones.
3288 3289
3289 3290 Omitting a queue name or specifying -l/--list will show you the registered
3290 3291 queues - by default the "normal" patches queue is registered. The currently
3291 3292 active queue will be marked with "(active)". Specifying --active will print
3292 3293 only the name of the active queue.
3293 3294
3294 3295 To create a new queue, use -c/--create. The queue is automatically made
3295 3296 active, except when there are applied patches from the currently
3296 3297 active queue in the repository. In that case the queue will only be
3297 3298 created, and switching to it will fail.
3298 3299
3299 3300 To delete an existing queue, use --delete. You cannot delete the currently
3300 3301 active queue.
3301 3302
3302 3303 Returns 0 on success.
3303 3304 '''
3304 3305 q = repo.mq
3305 3306 _defaultqueue = 'patches'
3306 3307 _allqueues = 'patches.queues'
3307 3308 _activequeue = 'patches.queue'
3308 3309
3309 3310 def _getcurrent():
3310 3311 cur = os.path.basename(q.path)
3311 3312 if cur.startswith('patches-'):
3312 3313 cur = cur[8:]
3313 3314 return cur
3314 3315
3315 3316 def _noqueues():
3316 3317 try:
3317 3318 fh = repo.vfs(_allqueues, 'r')
3318 3319 fh.close()
3319 3320 except IOError:
3320 3321 return True
3321 3322
3322 3323 return False
3323 3324
3324 3325 def _getqueues():
3325 3326 current = _getcurrent()
3326 3327
3327 3328 try:
3328 3329 fh = repo.vfs(_allqueues, 'r')
3329 3330 queues = [queue.strip() for queue in fh if queue.strip()]
3330 3331 fh.close()
3331 3332 if current not in queues:
3332 3333 queues.append(current)
3333 3334 except IOError:
3334 3335 queues = [_defaultqueue]
3335 3336
3336 3337 return sorted(queues)
3337 3338
3338 3339 def _setactive(name):
3339 3340 if q.applied:
3340 3341 raise error.Abort(_('new queue created, but cannot make active '
3341 3342 'as patches are applied'))
3342 3343 _setactivenocheck(name)
3343 3344
3344 3345 def _setactivenocheck(name):
3345 3346 fh = repo.vfs(_activequeue, 'w')
3346 3347 if name != 'patches':
3347 3348 fh.write(name)
3348 3349 fh.close()
3349 3350
3350 3351 def _addqueue(name):
3351 3352 fh = repo.vfs(_allqueues, 'a')
3352 3353 fh.write('%s\n' % (name,))
3353 3354 fh.close()
3354 3355
3355 3356 def _queuedir(name):
3356 3357 if name == 'patches':
3357 3358 return repo.vfs.join('patches')
3358 3359 else:
3359 3360 return repo.vfs.join('patches-' + name)
3360 3361
3361 3362 def _validname(name):
3362 3363 for n in name:
3363 3364 if n in ':\\/.':
3364 3365 return False
3365 3366 return True
3366 3367
3367 3368 def _delete(name):
3368 3369 if name not in existing:
3369 3370 raise error.Abort(_('cannot delete queue that does not exist'))
3370 3371
3371 3372 current = _getcurrent()
3372 3373
3373 3374 if name == current:
3374 3375 raise error.Abort(_('cannot delete currently active queue'))
3375 3376
3376 3377 fh = repo.vfs('patches.queues.new', 'w')
3377 3378 for queue in existing:
3378 3379 if queue == name:
3379 3380 continue
3380 3381 fh.write('%s\n' % (queue,))
3381 3382 fh.close()
3382 3383 repo.vfs.rename('patches.queues.new', _allqueues)
3383 3384
3384 3385 opts = pycompat.byteskwargs(opts)
3385 3386 if not name or opts.get('list') or opts.get('active'):
3386 3387 current = _getcurrent()
3387 3388 if opts.get('active'):
3388 3389 ui.write('%s\n' % (current,))
3389 3390 return
3390 3391 for queue in _getqueues():
3391 3392 ui.write('%s' % (queue,))
3392 3393 if queue == current and not ui.quiet:
3393 3394 ui.write(_(' (active)\n'))
3394 3395 else:
3395 3396 ui.write('\n')
3396 3397 return
3397 3398
3398 3399 if not _validname(name):
3399 3400 raise error.Abort(
3400 3401 _('invalid queue name, may not contain the characters ":\\/."'))
3401 3402
3402 3403 with repo.wlock():
3403 3404 existing = _getqueues()
3404 3405
3405 3406 if opts.get('create'):
3406 3407 if name in existing:
3407 3408 raise error.Abort(_('queue "%s" already exists') % name)
3408 3409 if _noqueues():
3409 3410 _addqueue(_defaultqueue)
3410 3411 _addqueue(name)
3411 3412 _setactive(name)
3412 3413 elif opts.get('rename'):
3413 3414 current = _getcurrent()
3414 3415 if name == current:
3415 3416 raise error.Abort(_('can\'t rename "%s" to its current name')
3416 3417 % name)
3417 3418 if name in existing:
3418 3419 raise error.Abort(_('queue "%s" already exists') % name)
3419 3420
3420 3421 olddir = _queuedir(current)
3421 3422 newdir = _queuedir(name)
3422 3423
3423 3424 if os.path.exists(newdir):
3424 3425 raise error.Abort(_('non-queue directory "%s" already exists') %
3425 3426 newdir)
3426 3427
3427 3428 fh = repo.vfs('patches.queues.new', 'w')
3428 3429 for queue in existing:
3429 3430 if queue == current:
3430 3431 fh.write('%s\n' % (name,))
3431 3432 if os.path.exists(olddir):
3432 3433 util.rename(olddir, newdir)
3433 3434 else:
3434 3435 fh.write('%s\n' % (queue,))
3435 3436 fh.close()
3436 3437 repo.vfs.rename('patches.queues.new', _allqueues)
3437 3438 _setactivenocheck(name)
3438 3439 elif opts.get('delete'):
3439 3440 _delete(name)
3440 3441 elif opts.get('purge'):
3441 3442 if name in existing:
3442 3443 _delete(name)
3443 3444 qdir = _queuedir(name)
3444 3445 if os.path.exists(qdir):
3445 3446 shutil.rmtree(qdir)
3446 3447 else:
3447 3448 if name not in existing:
3448 3449 raise error.Abort(_('use --create to create a new queue'))
3449 3450 _setactive(name)
3450 3451
3451 3452 def mqphasedefaults(repo, roots):
3452 3453 """callback used to set mq changeset as secret when no phase data exists"""
3453 3454 if repo.mq.applied:
3454 3455 if repo.ui.configbool('mq', 'secret'):
3455 3456 mqphase = phases.secret
3456 3457 else:
3457 3458 mqphase = phases.draft
3458 3459 qbase = repo[repo.mq.applied[0].node]
3459 3460 roots[mqphase].add(qbase.node())
3460 3461 return roots
3461 3462
3462 3463 def reposetup(ui, repo):
3463 3464 class mqrepo(repo.__class__):
3464 3465 @localrepo.unfilteredpropertycache
3465 3466 def mq(self):
3466 3467 return queue(self.ui, self.baseui, self.path)
3467 3468
3468 3469 def invalidateall(self):
3469 3470 super(mqrepo, self).invalidateall()
3470 3471 if localrepo.hasunfilteredcache(self, 'mq'):
3471 3472 # recreate mq in case queue path was changed
3472 3473 delattr(self.unfiltered(), 'mq')
3473 3474
3474 3475 def abortifwdirpatched(self, errmsg, force=False):
3475 3476 if self.mq.applied and self.mq.checkapplied and not force:
3476 3477 parents = self.dirstate.parents()
3477 3478 patches = [s.node for s in self.mq.applied]
3478 3479 if parents[0] in patches or parents[1] in patches:
3479 3480 raise error.Abort(errmsg)
3480 3481
3481 3482 def commit(self, text="", user=None, date=None, match=None,
3482 3483 force=False, editor=False, extra=None):
3483 3484 if extra is None:
3484 3485 extra = {}
3485 3486 self.abortifwdirpatched(
3486 3487 _('cannot commit over an applied mq patch'),
3487 3488 force)
3488 3489
3489 3490 return super(mqrepo, self).commit(text, user, date, match, force,
3490 3491 editor, extra)
3491 3492
3492 3493 def checkpush(self, pushop):
3493 3494 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3494 3495 outapplied = [e.node for e in self.mq.applied]
3495 3496 if pushop.revs:
3496 3497 # Assume applied patches have no non-patch descendants and
3497 3498 # are not already on the remote. Filter out any changeset
3498 3499 # that is not being pushed.
3499 3500 heads = set(pushop.revs)
3500 3501 for node in reversed(outapplied):
3501 3502 if node in heads:
3502 3503 break
3503 3504 else:
3504 3505 outapplied.pop()
3505 3506 # looking for pushed and shared changeset
3506 3507 for node in outapplied:
3507 3508 if self[node].phase() < phases.secret:
3508 3509 raise error.Abort(_('source has mq patches applied'))
3509 3510 # no non-secret patches pushed
3510 3511 super(mqrepo, self).checkpush(pushop)
3511 3512
3512 3513 def _findtags(self):
3513 3514 '''augment tags from base class with patch tags'''
3514 3515 result = super(mqrepo, self)._findtags()
3515 3516
3516 3517 q = self.mq
3517 3518 if not q.applied:
3518 3519 return result
3519 3520
3520 3521 mqtags = [(patch.node, patch.name) for patch in q.applied]
3521 3522
3522 3523 try:
3523 3524 # for now ignore filtering business
3524 3525 self.unfiltered().changelog.rev(mqtags[-1][0])
3525 3526 except error.LookupError:
3526 3527 self.ui.warn(_('mq status file refers to unknown node %s\n')
3527 3528 % short(mqtags[-1][0]))
3528 3529 return result
3529 3530
3530 3531 # do not add fake tags for filtered revisions
3531 3532 included = self.changelog.hasnode
3532 3533 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3533 3534 if not mqtags:
3534 3535 return result
3535 3536
3536 3537 mqtags.append((mqtags[-1][0], 'qtip'))
3537 3538 mqtags.append((mqtags[0][0], 'qbase'))
3538 3539 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3539 3540 tags = result[0]
3540 3541 for patch in mqtags:
3541 3542 if patch[1] in tags:
3542 3543 self.ui.warn(_('tag %s overrides mq patch of the same '
3543 3544 'name\n') % patch[1])
3544 3545 else:
3545 3546 tags[patch[1]] = patch[0]
3546 3547
3547 3548 return result
3548 3549
3549 3550 if repo.local():
3550 3551 repo.__class__ = mqrepo
3551 3552
3552 3553 repo._phasedefaults.append(mqphasedefaults)
3553 3554
3554 3555 def mqimport(orig, ui, repo, *args, **kwargs):
3555 3556 if (util.safehasattr(repo, 'abortifwdirpatched')
3556 3557 and not kwargs.get(r'no_commit', False)):
3557 3558 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3558 3559 kwargs.get(r'force'))
3559 3560 return orig(ui, repo, *args, **kwargs)
3560 3561
3561 3562 def mqinit(orig, ui, *args, **kwargs):
3562 3563 mq = kwargs.pop(r'mq', None)
3563 3564
3564 3565 if not mq:
3565 3566 return orig(ui, *args, **kwargs)
3566 3567
3567 3568 if args:
3568 3569 repopath = args[0]
3569 3570 if not hg.islocal(repopath):
3570 3571 raise error.Abort(_('only a local queue repository '
3571 3572 'may be initialized'))
3572 3573 else:
3573 3574 repopath = cmdutil.findrepo(pycompat.getcwd())
3574 3575 if not repopath:
3575 3576 raise error.Abort(_('there is no Mercurial repository here '
3576 3577 '(.hg not found)'))
3577 3578 repo = hg.repository(ui, repopath)
3578 3579 return qinit(ui, repo, True)
3579 3580
3580 3581 def mqcommand(orig, ui, repo, *args, **kwargs):
3581 3582 """Add --mq option to operate on patch repository instead of main"""
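    # Example (editorial addition, not part of the original source): with this
    # wrapper in place, commands such as `hg log --mq` or
    # `hg commit --mq -m "update series"` run against the versioned patch
    # queue repository (created by `hg init --mq`) instead of the main repo.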
3582 3583
3583 3584 # some commands do not like getting unknown options
3584 3585 mq = kwargs.pop(r'mq', None)
3585 3586
3586 3587 if not mq:
3587 3588 return orig(ui, repo, *args, **kwargs)
3588 3589
3589 3590 q = repo.mq
3590 3591 r = q.qrepo()
3591 3592 if not r:
3592 3593 raise error.Abort(_('no queue repository'))
3593 3594 return orig(r.ui, r, *args, **kwargs)
3594 3595
3595 3596 def summaryhook(ui, repo):
3596 3597 q = repo.mq
3597 3598 m = []
3598 3599 a, u = len(q.applied), len(q.unapplied(repo))
3599 3600 if a:
3600 3601 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3601 3602 if u:
3602 3603 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3603 3604 if m:
3604 3605 # i18n: column positioning for "hg summary"
3605 3606 ui.write(_("mq: %s\n") % ', '.join(m))
3606 3607 else:
3607 3608 # i18n: column positioning for "hg summary"
3608 3609 ui.note(_("mq: (empty queue)\n"))
3609 3610
3610 3611 revsetpredicate = registrar.revsetpredicate()
3611 3612
3612 3613 @revsetpredicate('mq()')
3613 3614 def revsetmq(repo, subset, x):
3614 3615 """Changesets managed by MQ.
3615 3616 """
3616 3617 revsetlang.getargs(x, 0, 0, _("mq takes no arguments"))
3617 3618 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3618 3619 return smartset.baseset([r for r in subset if r in applied])
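# Example (editorial addition): the predicate defined above can be used from
# the command line, e.g. `hg log -r "mq()"` lists the changesets currently
# managed by mq.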
3619 3620
3620 3621 # tell hggettext to extract docstrings from these functions:
3621 3622 i18nfunctions = [revsetmq]
3622 3623
3623 3624 def extsetup(ui):
3624 3625 # Ensure mq wrappers are called first, regardless of extension load order by
3625 3626 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3626 3627 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3627 3628
3628 3629 extensions.wrapcommand(commands.table, 'import', mqimport)
3629 3630 cmdutil.summaryhooks.add('mq', summaryhook)
3630 3631
3631 3632 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3632 3633 entry[1].extend(mqopt)
3633 3634
3634 3635 def dotable(cmdtable):
3635 3636 for cmd, entry in cmdtable.iteritems():
3636 3637 cmd = cmdutil.parsealiases(cmd)[0]
3637 3638 func = entry[0]
3638 3639 if func.norepo:
3639 3640 continue
3640 3641 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3641 3642 entry[1].extend(mqopt)
3642 3643
3643 3644 dotable(commands.table)
3644 3645
3645 3646 for extname, extmodule in extensions.extensions():
3646 3647 if extmodule.__file__ != __file__:
3647 3648 dotable(getattr(extmodule, 'cmdtable', {}))
3648 3649
3649 3650 colortable = {'qguard.negative': 'red',
3650 3651 'qguard.positive': 'yellow',
3651 3652 'qguard.unguarded': 'green',
3652 3653 'qseries.applied': 'blue bold underline',
3653 3654 'qseries.guarded': 'black bold',
3654 3655 'qseries.missing': 'red bold',
3655 3656 'qseries.unapplied': 'black bold'}
@@ -1,485 +1,485 b''
1 1 # notify.py - email notifications for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''hooks for sending email push notifications
9 9
10 10 This extension implements hooks to send email notifications when
11 11 changesets are sent from or received by the local repository.
12 12
13 13 First, enable the extension as explained in :hg:`help extensions`, and
14 14 register the hook you want to run. ``incoming`` and ``changegroup`` hooks
15 15 are run when changesets are received, while ``outgoing`` hooks are for
16 16 changesets sent to another repository::
17 17
18 18 [hooks]
19 19 # one email for each incoming changeset
20 20 incoming.notify = python:hgext.notify.hook
21 21 # one email for all incoming changesets
22 22 changegroup.notify = python:hgext.notify.hook
23 23
24 24 # one email for all outgoing changesets
25 25 outgoing.notify = python:hgext.notify.hook
26 26
27 27 This registers the hooks. To enable notification, subscribers must
28 28 be assigned to repositories. The ``[usersubs]`` section maps multiple
29 29 repositories to a given recipient. The ``[reposubs]`` section maps
30 30 multiple recipients to a single repository::
31 31
32 32 [usersubs]
33 33 # key is subscriber email, value is a comma-separated list of repo patterns
34 34 user@host = pattern
35 35
36 36 [reposubs]
37 37 # key is repo pattern, value is a comma-separated list of subscriber emails
38 38 pattern = user@host
39 39
40 40 A ``pattern`` is a ``glob`` matching the absolute path to a repository,
41 41 optionally combined with a revset expression. A revset expression, if
42 42 present, is separated from the glob by a hash. Example::
43 43
44 44 [reposubs]
45 45 */widgets#branch(release) = qa-team@example.com
46 46
47 47 This sends to ``qa-team@example.com`` whenever a changeset on the ``release``
48 48 branch triggers a notification in any repository ending in ``widgets``.
49 49
50 50 In order to place them under direct user management, ``[usersubs]`` and
51 51 ``[reposubs]`` sections may be placed in a separate ``hgrc`` file and
52 52 incorporated by reference::
53 53
54 54 [notify]
55 55 config = /path/to/subscriptionsfile
56 56
57 57 Notifications will not be sent until the ``notify.test`` value is set
58 58 to ``False``; see below.
59 59
60 60 Notifications content can be tweaked with the following configuration entries:
61 61
62 62 notify.test
63 63 If ``True``, print messages to stdout instead of sending them. Default: True.
64 64
65 65 notify.sources
66 66 Space-separated list of change sources. Notifications are activated only
67 67 when a changeset's source is in this list. Sources may be:
68 68
69 69 :``serve``: changesets received via http or ssh
70 70 :``pull``: changesets received via ``hg pull``
71 71 :``unbundle``: changesets received via ``hg unbundle``
72 72 :``push``: changesets sent or received via ``hg push``
73 73 :``bundle``: changesets sent via ``hg unbundle``
74 74
75 75 Default: serve.
76 76
77 77 notify.strip
78 78 Number of leading slashes to strip from url paths. By default, notifications
79 79 reference repositories with their absolute path. ``notify.strip`` lets you
80 80 turn them into relative paths. For example, ``notify.strip=3`` will change
81 81 ``/long/path/repository`` into ``repository``. Default: 0.
82 82
83 83 notify.domain
84 84 Default email domain for sender or recipients with no explicit domain.
85 85
86 86 notify.style
87 87 Style file to use when formatting emails.
88 88
89 89 notify.template
90 90 Template to use when formatting emails.
91 91
92 92 notify.incoming
93 93 Template to use when run as an incoming hook, overriding ``notify.template``.
94 94
95 95 notify.outgoing
96 96 Template to use when run as an outgoing hook, overriding ``notify.template``.
97 97
98 98 notify.changegroup
99 99 Template to use when running as a changegroup hook, overriding
100 100 ``notify.template``.
101 101
102 102 notify.maxdiff
103 103 Maximum number of diff lines to include in notification email. Set to 0
104 104 to disable the diff, or -1 to include all of it. Default: 300.
105 105
106 106 notify.maxsubject
107 107 Maximum number of characters in email's subject line. Default: 67.
108 108
109 109 notify.diffstat
110 110 Set to True to include a diffstat before diff content. Default: True.
111 111
112 112 notify.merge
113 113 If True, send notifications for merge changesets. Default: True.
114 114
115 115 notify.mbox
116 116 If set, append mails to this mbox file instead of sending. Default: None.
117 117
118 118 notify.fromauthor
119 119 If set, use the committer of the first changeset in a changegroup for
120 120 the "From" field of the notification mail. If not set, take the user
121 121 from the pushing repo. Default: False.
122 122
123 123 If set, the following entries will also be used to customize the
124 124 notifications:
125 125
126 126 email.from
127 127 Email ``From`` address to use if none can be found in the generated
128 128 email content.
129 129
130 130 web.baseurl
131 131 Root repository URL to combine with repository paths when making
132 132 references. See also ``notify.strip``.
133 133
134 134 '''
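# Illustrative configuration sketch (editorial addition, not part of the
# original documentation); the file path shown is hypothetical.
#
#   [extensions]
#   notify =
#
#   [hooks]
#   changegroup.notify = python:hgext.notify.hook
#
#   [notify]
#   test = False
#   sources = serve push
#   config = /path/to/subscriptionsfile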
135 135 from __future__ import absolute_import
136 136
137 137 import email
138 138 import email.parser as emailparser
139 139 import fnmatch
140 140 import socket
141 141 import time
142 142
143 143 from mercurial.i18n import _
144 144 from mercurial import (
145 cmdutil,
146 145 error,
146 logcmdutil,
147 147 mail,
148 148 patch,
149 149 registrar,
150 150 util,
151 151 )
152 152
153 153 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
154 154 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
155 155 # be specifying the version(s) of Mercurial they are tested with, or
156 156 # leave the attribute unspecified.
157 157 testedwith = 'ships-with-hg-core'
158 158
159 159 configtable = {}
160 160 configitem = registrar.configitem(configtable)
161 161
162 162 configitem('notify', 'changegroup',
163 163 default=None,
164 164 )
165 165 configitem('notify', 'config',
166 166 default=None,
167 167 )
168 168 configitem('notify', 'diffstat',
169 169 default=True,
170 170 )
171 171 configitem('notify', 'domain',
172 172 default=None,
173 173 )
174 174 configitem('notify', 'fromauthor',
175 175 default=None,
176 176 )
177 177 configitem('notify', 'incoming',
178 178 default=None,
179 179 )
180 180 configitem('notify', 'maxdiff',
181 181 default=300,
182 182 )
183 183 configitem('notify', 'maxsubject',
184 184 default=67,
185 185 )
186 186 configitem('notify', 'mbox',
187 187 default=None,
188 188 )
189 189 configitem('notify', 'merge',
190 190 default=True,
191 191 )
192 192 configitem('notify', 'outgoing',
193 193 default=None,
194 194 )
195 195 configitem('notify', 'sources',
196 196 default='serve',
197 197 )
198 198 configitem('notify', 'strip',
199 199 default=0,
200 200 )
201 201 configitem('notify', 'style',
202 202 default=None,
203 203 )
204 204 configitem('notify', 'template',
205 205 default=None,
206 206 )
207 207 configitem('notify', 'test',
208 208 default=True,
209 209 )
210 210
211 211 # The template for a single changeset can include email headers.
212 212 single_template = '''
213 213 Subject: changeset in {webroot}: {desc|firstline|strip}
214 214 From: {author}
215 215
216 216 changeset {node|short} in {root}
217 217 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
218 218 description:
219 219 \t{desc|tabindent|strip}
220 220 '''.lstrip()
221 221
222 222 # The template for multiple changesets should not contain email headers,
223 223 # because only the first set of headers would be used and the result would
224 224 # look strange.
225 225 multiple_template = '''
226 226 changeset {node|short} in {root}
227 227 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
228 228 summary: {desc|firstline}
229 229 '''
230 230
231 231 deftemplates = {
232 232 'changegroup': multiple_template,
233 233 }
234 234
235 235 class notifier(object):
236 236 '''email notification class.'''
237 237
238 238 def __init__(self, ui, repo, hooktype):
239 239 self.ui = ui
240 240 cfg = self.ui.config('notify', 'config')
241 241 if cfg:
242 242 self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
243 243 self.repo = repo
244 244 self.stripcount = int(self.ui.config('notify', 'strip'))
245 245 self.root = self.strip(self.repo.root)
246 246 self.domain = self.ui.config('notify', 'domain')
247 247 self.mbox = self.ui.config('notify', 'mbox')
248 248 self.test = self.ui.configbool('notify', 'test')
249 249 self.charsets = mail._charsets(self.ui)
250 250 self.subs = self.subscribers()
251 251 self.merge = self.ui.configbool('notify', 'merge')
252 252
253 253 mapfile = None
254 254 template = (self.ui.config('notify', hooktype) or
255 255 self.ui.config('notify', 'template'))
256 256 if not template:
257 257 mapfile = self.ui.config('notify', 'style')
258 258 if not mapfile and not template:
259 259 template = deftemplates.get(hooktype) or single_template
260 spec = cmdutil.logtemplatespec(template, mapfile)
261 self.t = cmdutil.changeset_templater(self.ui, self.repo, spec,
262 False, None, False)
260 spec = logcmdutil.templatespec(template, mapfile)
261 self.t = logcmdutil.changesettemplater(self.ui, self.repo, spec,
262 False, None, False)
263 263
264 264 def strip(self, path):
265 265 '''strip leading slashes from local path, turn into web-safe path.'''
266 266
267 267 path = util.pconvert(path)
268 268 count = self.stripcount
269 269 while count > 0:
270 270 c = path.find('/')
271 271 if c == -1:
272 272 break
273 273 path = path[c + 1:]
274 274 count -= 1
275 275 return path
276 276
277 277 def fixmail(self, addr):
278 278 '''try to clean up email addresses.'''
279 279
280 280 addr = util.email(addr.strip())
281 281 if self.domain:
282 282 a = addr.find('@localhost')
283 283 if a != -1:
284 284 addr = addr[:a]
285 285 if '@' not in addr:
286 286 return addr + '@' + self.domain
287 287 return addr
288 288
289 289 def subscribers(self):
290 290 '''return list of email addresses of subscribers to this repo.'''
291 291 subs = set()
292 292 for user, pats in self.ui.configitems('usersubs'):
293 293 for pat in pats.split(','):
294 294 if '#' in pat:
295 295 pat, revs = pat.split('#', 1)
296 296 else:
297 297 revs = None
298 298 if fnmatch.fnmatch(self.repo.root, pat.strip()):
299 299 subs.add((self.fixmail(user), revs))
300 300 for pat, users in self.ui.configitems('reposubs'):
301 301 if '#' in pat:
302 302 pat, revs = pat.split('#', 1)
303 303 else:
304 304 revs = None
305 305 if fnmatch.fnmatch(self.repo.root, pat):
306 306 for user in users.split(','):
307 307 subs.add((self.fixmail(user), revs))
308 308 return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
309 309 for s, r in sorted(subs)]
310 310
311 311 def node(self, ctx, **props):
312 312 '''format one changeset, unless it is a suppressed merge.'''
313 313 if not self.merge and len(ctx.parents()) > 1:
314 314 return False
315 315 self.t.show(ctx, changes=ctx.changeset(),
316 316 baseurl=self.ui.config('web', 'baseurl'),
317 317 root=self.repo.root, webroot=self.root, **props)
318 318 return True
319 319
320 320 def skipsource(self, source):
321 321 '''true if incoming changes from this source should be skipped.'''
322 322 ok_sources = self.ui.config('notify', 'sources').split()
323 323 return source not in ok_sources
324 324
325 325 def send(self, ctx, count, data):
326 326 '''send message.'''
327 327
328 328 # Select subscribers by revset
329 329 subs = set()
330 330 for sub, spec in self.subs:
331 331 if spec is None:
332 332 subs.add(sub)
333 333 continue
334 334 revs = self.repo.revs('%r and %d:', spec, ctx.rev())
335 335 if len(revs):
336 336 subs.add(sub)
337 337 continue
338 338 if len(subs) == 0:
339 339 self.ui.debug('notify: no subscribers to selected repo '
340 340 'and revset\n')
341 341 return
342 342
343 343 p = emailparser.Parser()
344 344 try:
345 345 msg = p.parsestr(data)
346 346 except email.Errors.MessageParseError as inst:
347 347 raise error.Abort(inst)
348 348
349 349 # store sender and subject
350 350 sender, subject = msg['From'], msg['Subject']
351 351 del msg['From'], msg['Subject']
352 352
353 353 if not msg.is_multipart():
354 354 # create fresh mime message from scratch
355 355 # (multipart templates must take care of this themselves)
356 356 headers = msg.items()
357 357 payload = msg.get_payload()
358 358 # for notification prefer readability over data precision
359 359 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
360 360 # reinstate custom headers
361 361 for k, v in headers:
362 362 msg[k] = v
363 363
364 364 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
365 365
366 366 # try to make subject line exist and be useful
367 367 if not subject:
368 368 if count > 1:
369 369 subject = _('%s: %d new changesets') % (self.root, count)
370 370 else:
371 371 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
372 372 subject = '%s: %s' % (self.root, s)
373 373 maxsubject = int(self.ui.config('notify', 'maxsubject'))
374 374 if maxsubject:
375 375 subject = util.ellipsis(subject, maxsubject)
376 376 msg['Subject'] = mail.headencode(self.ui, subject,
377 377 self.charsets, self.test)
378 378
379 379 # try to make message have proper sender
380 380 if not sender:
381 381 sender = self.ui.config('email', 'from') or self.ui.username()
382 382 if '@' not in sender or '@localhost' in sender:
383 383 sender = self.fixmail(sender)
384 384 msg['From'] = mail.addressencode(self.ui, sender,
385 385 self.charsets, self.test)
386 386
387 387 msg['X-Hg-Notification'] = 'changeset %s' % ctx
388 388 if not msg['Message-Id']:
389 389 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
390 390 (ctx, int(time.time()),
391 391 hash(self.repo.root), socket.getfqdn()))
392 392 msg['To'] = ', '.join(sorted(subs))
393 393
394 394 msgtext = msg.as_string()
395 395 if self.test:
396 396 self.ui.write(msgtext)
397 397 if not msgtext.endswith('\n'):
398 398 self.ui.write('\n')
399 399 else:
400 400 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
401 401 (len(subs), count))
402 402 mail.sendmail(self.ui, util.email(msg['From']),
403 403 subs, msgtext, mbox=self.mbox)
404 404
405 405 def diff(self, ctx, ref=None):
406 406
407 407 maxdiff = int(self.ui.config('notify', 'maxdiff'))
408 408 prev = ctx.p1().node()
409 409 if ref:
410 410 ref = ref.node()
411 411 else:
412 412 ref = ctx.node()
413 413 chunks = patch.diff(self.repo, prev, ref,
414 414 opts=patch.diffallopts(self.ui))
415 415 difflines = ''.join(chunks).splitlines()
416 416
417 417 if self.ui.configbool('notify', 'diffstat'):
418 418 s = patch.diffstat(difflines)
419 419 # s may be empty; don't include the header if it is
420 420 if s:
421 421 self.ui.write(_('\ndiffstat:\n\n%s') % s)
422 422
423 423 if maxdiff == 0:
424 424 return
425 425 elif maxdiff > 0 and len(difflines) > maxdiff:
426 426 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
427 427 self.ui.write(msg % (len(difflines), maxdiff))
428 428 difflines = difflines[:maxdiff]
429 429 elif difflines:
430 430 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
431 431
432 432 self.ui.write("\n".join(difflines))
433 433
434 434 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
435 435 '''send email notifications to interested subscribers.
436 436
437 437 if used as changegroup hook, send one email for all changesets in
438 438 changegroup. else send one email per changeset.'''
439 439
440 440 n = notifier(ui, repo, hooktype)
441 441 ctx = repo[node]
442 442
443 443 if not n.subs:
444 444 ui.debug('notify: no subscribers to repository %s\n' % n.root)
445 445 return
446 446 if n.skipsource(source):
447 447 ui.debug('notify: changes have source "%s" - skipping\n' % source)
448 448 return
449 449
450 450 ui.pushbuffer()
451 451 data = ''
452 452 count = 0
453 453 author = ''
454 454 if hooktype == 'changegroup' or hooktype == 'outgoing':
455 455 start, end = ctx.rev(), len(repo)
456 456 for rev in xrange(start, end):
457 457 if n.node(repo[rev]):
458 458 count += 1
459 459 if not author:
460 460 author = repo[rev].user()
461 461 else:
462 462 data += ui.popbuffer()
463 463 ui.note(_('notify: suppressing notification for merge %d:%s\n')
464 464 % (rev, repo[rev].hex()[:12]))
465 465 ui.pushbuffer()
466 466 if count:
467 467 n.diff(ctx, repo['tip'])
468 468 else:
469 469 if not n.node(ctx):
470 470 ui.popbuffer()
471 471 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
472 472 (ctx.rev(), ctx.hex()[:12]))
473 473 return
474 474 count += 1
475 475 n.diff(ctx)
476 476 if not author:
477 477 author = ctx.user()
478 478
479 479 data += ui.popbuffer()
480 480 fromauthor = ui.config('notify', 'fromauthor')
481 481 if author and fromauthor:
482 482 data = '\n'.join(['From: %s' % author, data])
483 483
484 484 if count:
485 485 n.send(ctx, count, data)
@@ -1,468 +1,469 b''
1 1 # show.py - Extension implementing `hg show`
2 2 #
3 3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """unified command to show various repository information (EXPERIMENTAL)
9 9
10 10 This extension provides the :hg:`show` command, which provides a central
11 11 command for displaying commonly-accessed repository data and views of that
12 12 data.
13 13
14 14 The following config options can influence operation.
15 15
16 16 ``commands``
17 17 ------------
18 18
19 19 ``show.aliasprefix``
20 20 List of strings that will register aliases for views. e.g. ``s`` will
21 21 effectively set config options ``alias.s<view> = show <view>`` for all
22 22 views. i.e. `hg swork` would execute `hg show work`.
23 23
24 24 Aliases that would conflict with existing registrations will not be
25 25 performed.
26 26 """
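# Illustrative configuration sketch (editorial addition, not part of the
# original documentation): registering the prefix ``s`` so that, for example,
# `hg swork` runs `hg show work`.
#
#   [extensions]
#   show =
#
#   [commands]
#   show.aliasprefix = s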
27 27
28 28 from __future__ import absolute_import
29 29
30 30 from mercurial.i18n import _
31 31 from mercurial.node import (
32 32 hex,
33 33 nullrev,
34 34 )
35 35 from mercurial import (
36 36 cmdutil,
37 37 commands,
38 38 destutil,
39 39 error,
40 40 formatter,
41 41 graphmod,
42 logcmdutil,
42 43 phases,
43 44 pycompat,
44 45 registrar,
45 46 revset,
46 47 revsetlang,
47 48 )
48 49
49 50 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
50 51 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
51 52 # be specifying the version(s) of Mercurial they are tested with, or
52 53 # leave the attribute unspecified.
53 54 testedwith = 'ships-with-hg-core'
54 55
55 56 cmdtable = {}
56 57 command = registrar.command(cmdtable)
57 58
58 59 revsetpredicate = registrar.revsetpredicate()
59 60
60 61 class showcmdfunc(registrar._funcregistrarbase):
61 62 """Register a function to be invoked for an `hg show <thing>`."""
62 63
63 64 # Used by _formatdoc().
64 65 _docformat = '%s -- %s'
65 66
66 67 def _extrasetup(self, name, func, fmtopic=None, csettopic=None):
67 68 """Called with decorator arguments to register a show view.
68 69
69 70 ``name`` is the sub-command name.
70 71
71 72 ``func`` is the function being decorated.
72 73
73 74 ``fmtopic`` is the topic in the style that will be rendered for
74 75 this view.
75 76
76 77 ``csettopic`` is the topic in the style to be used for a changeset
77 78 printer.
78 79
79 80 If ``fmtopic`` is specified, the view function will receive a
80 81 formatter instance. If ``csettopic`` is specified, the view
81 82 function will receive a changeset printer.
82 83 """
83 84 func._fmtopic = fmtopic
84 85 func._csettopic = csettopic
85 86
86 87 showview = showcmdfunc()
87 88
88 89 @command('show', [
89 90 # TODO: Switch this template flag to use cmdutil.formatteropts if
90 91 # 'hg show' becomes stable before --template/-T is stable. For now,
91 92 # we are putting it here without the '(EXPERIMENTAL)' flag because it
92 93 # is an important part of the 'hg show' user experience and the entire
93 94 # 'hg show' experience is experimental.
94 95 ('T', 'template', '', ('display with template'), _('TEMPLATE')),
95 96 ], _('VIEW'))
96 97 def show(ui, repo, view=None, template=None):
97 98 """show various repository information
98 99
99 100 A requested view of repository data is displayed.
100 101
101 102 If no view is requested, the list of available views is shown and the
102 103 command aborts.
103 104
104 105 .. note::
105 106
106 107 There are no backwards compatibility guarantees for the output of this
107 108 command. Output may change in any future Mercurial release.
108 109
109 110 Consumers wanting stable command output should specify a template via
110 111 ``-T/--template``.
111 112
112 113 List of available views:
113 114 """
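    # Illustrative usage (editorial addition, not part of the original source):
    #
    #   hg show              # list the available views and abort
    #   hg show work         # graph of changesets that aren't finished
    #   hg show stack -T '{shortest(node)} {desc|firstline}\n'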
114 115 if ui.plain() and not template:
115 116 hint = _('invoke with -T/--template to control output format')
116 117 raise error.Abort(_('must specify a template in plain mode'), hint=hint)
117 118
118 119 views = showview._table
119 120
120 121 if not view:
121 122 ui.pager('show')
122 123 # TODO consider using formatter here so available views can be
123 124 # rendered to custom format.
124 125 ui.write(_('available views:\n'))
125 126 ui.write('\n')
126 127
127 128 for name, func in sorted(views.items()):
128 129 ui.write(('%s\n') % func.__doc__)
129 130
130 131 ui.write('\n')
131 132 raise error.Abort(_('no view requested'),
132 133 hint=_('use "hg show VIEW" to choose a view'))
133 134
134 135 # TODO use same logic as dispatch to perform prefix matching.
135 136 if view not in views:
136 137 raise error.Abort(_('unknown view: %s') % view,
137 138 hint=_('run "hg show" to see available views'))
138 139
139 140 template = template or 'show'
140 141
141 142 fn = views[view]
142 143 ui.pager('show')
143 144
144 145 if fn._fmtopic:
145 146 fmtopic = 'show%s' % fn._fmtopic
146 147 with ui.formatter(fmtopic, {'template': template}) as fm:
147 148 return fn(ui, repo, fm)
148 149 elif fn._csettopic:
149 150 ref = 'show%s' % fn._csettopic
150 151 spec = formatter.lookuptemplate(ui, ref, template)
151 displayer = cmdutil.changeset_templater(ui, repo, spec, buffered=True)
152 displayer = logcmdutil.changesettemplater(ui, repo, spec, buffered=True)
152 153 return fn(ui, repo, displayer)
153 154 else:
154 155 return fn(ui, repo)
155 156
156 157 @showview('bookmarks', fmtopic='bookmarks')
157 158 def showbookmarks(ui, repo, fm):
158 159 """bookmarks and their associated changeset"""
159 160 marks = repo._bookmarks
160 161 if not len(marks):
161 162 # This is a bit hacky. Ideally, templates would have a way to
162 163 # specify an empty output, but we shouldn't corrupt JSON while
163 164 # waiting for this functionality.
164 165 if not isinstance(fm, formatter.jsonformatter):
165 166 ui.write(_('(no bookmarks set)\n'))
166 167 return
167 168
168 169 revs = [repo[node].rev() for node in marks.values()]
169 170 active = repo._activebookmark
170 171 longestname = max(len(b) for b in marks)
171 172 nodelen = longestshortest(repo, revs)
172 173
173 174 for bm, node in sorted(marks.items()):
174 175 fm.startitem()
175 176 fm.context(ctx=repo[node])
176 177 fm.write('bookmark', '%s', bm)
177 178 fm.write('node', fm.hexfunc(node), fm.hexfunc(node))
178 179 fm.data(active=bm == active,
179 180 longestbookmarklen=longestname,
180 181 nodelen=nodelen)
181 182
182 183 @showview('stack', csettopic='stack')
183 184 def showstack(ui, repo, displayer):
184 185 """current line of work"""
185 186 wdirctx = repo['.']
186 187 if wdirctx.rev() == nullrev:
187 188 raise error.Abort(_('stack view only available when there is a '
188 189 'working directory'))
189 190
190 191 if wdirctx.phase() == phases.public:
191 192 ui.write(_('(empty stack; working directory parent is a published '
192 193 'changeset)\n'))
193 194 return
194 195
195 196 # TODO extract "find stack" into a function to facilitate
196 197 # customization and reuse.
197 198
198 199 baserev = destutil.stackbase(ui, repo)
199 200 basectx = None
200 201
201 202 if baserev is None:
202 203 baserev = wdirctx.rev()
203 204 stackrevs = {wdirctx.rev()}
204 205 else:
205 206 stackrevs = set(repo.revs('%d::.', baserev))
206 207
207 208 ctx = repo[baserev]
208 209 if ctx.p1().rev() != nullrev:
209 210 basectx = ctx.p1()
210 211
211 212 # And relevant descendants.
212 213 branchpointattip = False
213 214 cl = repo.changelog
214 215
215 216 for rev in cl.descendants([wdirctx.rev()]):
216 217 ctx = repo[rev]
217 218
218 219 # Will only happen if . is public.
219 220 if ctx.phase() == phases.public:
220 221 break
221 222
222 223 stackrevs.add(ctx.rev())
223 224
224 225         # ctx.children() within a function iterating on descendants
225 226 # potentially has severe performance concerns because revlog.children()
226 227 # iterates over all revisions after ctx's node. However, the number of
227 228 # draft changesets should be a reasonably small number. So even if
228 229 # this is quadratic, the perf impact should be minimal.
229 230 if len(ctx.children()) > 1:
230 231 branchpointattip = True
231 232 break
232 233
233 234 stackrevs = list(sorted(stackrevs, reverse=True))
234 235
235 236 # Find likely target heads for the current stack. These are likely
236 237 # merge or rebase targets.
237 238 if basectx:
238 239 # TODO make this customizable?
239 240 newheads = set(repo.revs('heads(%d::) - %ld - not public()',
240 241 basectx.rev(), stackrevs))
241 242 else:
242 243 newheads = set()
243 244
244 245 allrevs = set(stackrevs) | newheads | set([baserev])
245 246 nodelen = longestshortest(repo, allrevs)
246 247
247 248 try:
248 249 cmdutil.findcmd('rebase', commands.table)
249 250 haverebase = True
250 251 except (error.AmbiguousCommand, error.UnknownCommand):
251 252 haverebase = False
252 253
253 254 # TODO use templating.
254 255 # TODO consider using graphmod. But it may not be necessary given
255 256 # our simplicity and the customizations required.
256 257 # TODO use proper graph symbols from graphmod
257 258
258 259 tres = formatter.templateresources(ui, repo)
259 260 shortesttmpl = formatter.maketemplater(ui, '{shortest(node, %d)}' % nodelen,
260 261 resources=tres)
261 262 def shortest(ctx):
262 263 return shortesttmpl.render({'ctx': ctx, 'node': ctx.hex()})
263 264
264 265 # We write out new heads to aid in DAG awareness and to help with decision
265 266 # making on how the stack should be reconciled with commits made since the
266 267 # branch point.
267 268 if newheads:
268 269 # Calculate distance from base so we can render the count and so we can
269 270 # sort display order by commit distance.
270 271 revdistance = {}
271 272 for head in newheads:
272 273 # There is some redundancy in DAG traversal here and therefore
273 274 # room to optimize.
274 275 ancestors = cl.ancestors([head], stoprev=basectx.rev())
275 276 revdistance[head] = len(list(ancestors))
276 277
277 278 sourcectx = repo[stackrevs[-1]]
278 279
279 280 sortedheads = sorted(newheads, key=lambda x: revdistance[x],
280 281 reverse=True)
281 282
282 283 for i, rev in enumerate(sortedheads):
283 284 ctx = repo[rev]
284 285
285 286 if i:
286 287 ui.write(': ')
287 288 else:
288 289 ui.write(' ')
289 290
290 291 ui.write(('o '))
291 292 displayer.show(ctx, nodelen=nodelen)
292 293 displayer.flush(ctx)
293 294 ui.write('\n')
294 295
295 296 if i:
296 297 ui.write(':/')
297 298 else:
298 299 ui.write(' /')
299 300
300 301 ui.write(' (')
301 302 ui.write(_('%d commits ahead') % revdistance[rev],
302 303 label='stack.commitdistance')
303 304
304 305 if haverebase:
305 306 # TODO may be able to omit --source in some scenarios
306 307 ui.write('; ')
307 308 ui.write(('hg rebase --source %s --dest %s' % (
308 309 shortest(sourcectx), shortest(ctx))),
309 310 label='stack.rebasehint')
310 311
311 312 ui.write(')\n')
312 313
313 314 ui.write(':\n: ')
314 315 ui.write(_('(stack head)\n'), label='stack.label')
315 316
316 317 if branchpointattip:
317 318 ui.write(' \\ / ')
318 319 ui.write(_('(multiple children)\n'), label='stack.label')
319 320 ui.write(' |\n')
320 321
321 322 for rev in stackrevs:
322 323 ctx = repo[rev]
323 324 symbol = '@' if rev == wdirctx.rev() else 'o'
324 325
325 326 if newheads:
326 327 ui.write(': ')
327 328 else:
328 329 ui.write(' ')
329 330
330 331 ui.write(symbol, ' ')
331 332 displayer.show(ctx, nodelen=nodelen)
332 333 displayer.flush(ctx)
333 334 ui.write('\n')
334 335
335 336 # TODO display histedit hint?
336 337
337 338 if basectx:
338 339 # Vertically and horizontally separate stack base from parent
339 340 # to reinforce stack boundary.
340 341 if newheads:
341 342 ui.write(':/ ')
342 343 else:
343 344 ui.write(' / ')
344 345
345 346 ui.write(_('(stack base)'), '\n', label='stack.label')
346 347 ui.write(('o '))
347 348
348 349 displayer.show(basectx, nodelen=nodelen)
349 350 displayer.flush(basectx)
350 351 ui.write('\n')
351 352
352 353 @revsetpredicate('_underway([commitage[, headage]])')
353 354 def underwayrevset(repo, subset, x):
354 355 args = revset.getargsdict(x, 'underway', 'commitage headage')
355 356 if 'commitage' not in args:
356 357 args['commitage'] = None
357 358 if 'headage' not in args:
358 359 args['headage'] = None
359 360
360 361     # We assume callers of this revset add a topological sort on the
361 362     # result. This means there is no benefit to making the revset lazy
362 363     # since the topological sort needs to consume all revs.
363 364 #
364 365 # With this in mind, we build up the set manually instead of constructing
365 366 # a complex revset. This enables faster execution.
366 367
367 368 # Mutable changesets (non-public) are the most important changesets
368 369 # to return. ``not public()`` will also pull in obsolete changesets if
369 370 # there is a non-obsolete changeset with obsolete ancestors. This is
370 371 # why we exclude obsolete changesets from this query.
371 372 rs = 'not public() and not obsolete()'
372 373 rsargs = []
373 374 if args['commitage']:
374 375 rs += ' and date(%s)'
375 376 rsargs.append(revsetlang.getstring(args['commitage'],
376 377 _('commitage requires a string')))
377 378
378 379 mutable = repo.revs(rs, *rsargs)
379 380 relevant = revset.baseset(mutable)
380 381
381 382 # Add parents of mutable changesets to provide context.
382 383 relevant += repo.revs('parents(%ld)', mutable)
383 384
384 385 # We also pull in (public) heads if they a) aren't closing a branch
385 386 # b) are recent.
386 387 rs = 'head() and not closed()'
387 388 rsargs = []
388 389 if args['headage']:
389 390 rs += ' and date(%s)'
390 391 rsargs.append(revsetlang.getstring(args['headage'],
391 392 _('headage requires a string')))
392 393
393 394 relevant += repo.revs(rs, *rsargs)
394 395
395 396 # Add working directory parent.
396 397 wdirrev = repo['.'].rev()
397 398 if wdirrev != nullrev:
398 399 relevant += revset.baseset({wdirrev})
399 400
400 401 return subset & relevant
401 402
402 403 @showview('work', csettopic='work')
403 404 def showwork(ui, repo, displayer):
404 405 """changesets that aren't finished"""
405 406 # TODO support date-based limiting when calling revset.
406 407 revs = repo.revs('sort(_underway(), topo)')
407 408 nodelen = longestshortest(repo, revs)
408 409
409 410 revdag = graphmod.dagwalker(repo, revs)
410 411
411 412 ui.setconfig('experimental', 'graphshorten', True)
412 cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges,
413 props={'nodelen': nodelen})
413 logcmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges,
414 props={'nodelen': nodelen})
414 415
415 416 def extsetup(ui):
416 417 # Alias `hg <prefix><view>` to `hg show <view>`.
417 418 for prefix in ui.configlist('commands', 'show.aliasprefix'):
418 419 for view in showview._table:
419 420 name = '%s%s' % (prefix, view)
420 421
421 422 choice, allcommands = cmdutil.findpossible(name, commands.table,
422 423 strict=True)
423 424
424 425 # This alias is already a command name. Don't set it.
425 426 if name in choice:
426 427 continue
427 428
428 429 # Same for aliases.
429 430 if ui.config('alias', name):
430 431 continue
431 432
432 433 ui.setconfig('alias', name, 'show %s' % view, source='show')
433 434
434 435 def longestshortest(repo, revs, minlen=4):
435 436 """Return the length of the longest shortest node to identify revisions.
436 437
437 438 The result of this function can be used with the ``shortest()`` template
438 439 function to ensure that a value is unique and unambiguous for a given
439 440 set of nodes.
440 441
441 442 The number of revisions in the repo is taken into account to prevent
442 443 a numeric node prefix from conflicting with an integer revision number.
443 444 If we fail to do this, a value of e.g. ``10023`` could mean either
444 445 revision 10023 or node ``10023abc...``.
445 446 """
446 447 if not revs:
447 448 return minlen
448 449 # don't use filtered repo because it's slow. see templater.shortest().
449 450 cl = repo.unfiltered().changelog
450 451 return max(len(cl.shortest(hex(cl.node(r)), minlen)) for r in revs)
451 452
452 453 # Adjust the docstring of the show command so it shows all registered views.
453 454 # This is a bit hacky because it runs at the end of module load. When moved
454 455 # into core or when another extension wants to provide a view, we'll need
455 456 # to do this more robustly.
456 457 # TODO make this more robust.
457 458 def _updatedocstring():
458 459 longest = max(map(len, showview._table.keys()))
459 460 entries = []
460 461 for key in sorted(showview._table.keys()):
461 462 entries.append(pycompat.sysstr(' %s %s' % (
462 463 key.ljust(longest), showview._table[key]._origdoc)))
463 464
464 465 cmdtable['show'][0].__doc__ = pycompat.sysstr('%s\n\n%s\n ') % (
465 466 cmdtable['show'][0].__doc__.rstrip(),
466 467 pycompat.sysstr('\n\n').join(entries))
467 468
468 469 _updatedocstring()
@@ -1,757 +1,758 b''
1 1 # Patch transplanting extension for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to transplant changesets from another branch
9 9
10 10 This extension allows you to transplant changes to another parent revision,
11 11 possibly in another repository. The transplant is done using 'diff' patches.
12 12
13 13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 14 map from a changeset hash to its hash in the source repository.
15 15 '''
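# Illustrative usage sketch (editorial addition, not part of the original
# documentation); the path ../other-repo and the revision numbers are
# hypothetical.
#
#   hg transplant --source ../other-repo 12 14   # graft two changesets
#   hg transplant --continue                     # resume after fixing a reject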
16 16 from __future__ import absolute_import
17 17
18 18 import os
19 19 import tempfile
20 20 from mercurial.i18n import _
21 21 from mercurial import (
22 22 bundlerepo,
23 23 cmdutil,
24 24 error,
25 25 exchange,
26 26 hg,
27 logcmdutil,
27 28 match,
28 29 merge,
29 30 node as nodemod,
30 31 patch,
31 32 pycompat,
32 33 registrar,
33 34 revlog,
34 35 revset,
35 36 scmutil,
36 37 smartset,
37 38 util,
38 39 vfs as vfsmod,
39 40 )
40 41
41 42 class TransplantError(error.Abort):
42 43 pass
43 44
44 45 cmdtable = {}
45 46 command = registrar.command(cmdtable)
46 47 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
47 48 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
48 49 # be specifying the version(s) of Mercurial they are tested with, or
49 50 # leave the attribute unspecified.
50 51 testedwith = 'ships-with-hg-core'
51 52
52 53 configtable = {}
53 54 configitem = registrar.configitem(configtable)
54 55
55 56 configitem('transplant', 'filter',
56 57 default=None,
57 58 )
58 59 configitem('transplant', 'log',
59 60 default=None,
60 61 )
61 62
62 63 class transplantentry(object):
63 64 def __init__(self, lnode, rnode):
64 65 self.lnode = lnode
65 66 self.rnode = rnode
66 67
67 68 class transplants(object):
68 69 def __init__(self, path=None, transplantfile=None, opener=None):
69 70 self.path = path
70 71 self.transplantfile = transplantfile
71 72 self.opener = opener
72 73
73 74 if not opener:
74 75 self.opener = vfsmod.vfs(self.path)
75 76 self.transplants = {}
76 77 self.dirty = False
77 78 self.read()
78 79
79 80 def read(self):
80 81 abspath = os.path.join(self.path, self.transplantfile)
81 82 if self.transplantfile and os.path.exists(abspath):
82 83 for line in self.opener.read(self.transplantfile).splitlines():
83 84 lnode, rnode = map(revlog.bin, line.split(':'))
84 85 list = self.transplants.setdefault(rnode, [])
85 86 list.append(transplantentry(lnode, rnode))
86 87
87 88 def write(self):
88 89 if self.dirty and self.transplantfile:
89 90 if not os.path.isdir(self.path):
90 91 os.mkdir(self.path)
91 92 fp = self.opener(self.transplantfile, 'w')
92 93 for list in self.transplants.itervalues():
93 94 for t in list:
94 95 l, r = map(nodemod.hex, (t.lnode, t.rnode))
95 96 fp.write(l + ':' + r + '\n')
96 97 fp.close()
97 98 self.dirty = False
98 99
99 100 def get(self, rnode):
100 101 return self.transplants.get(rnode) or []
101 102
102 103 def set(self, lnode, rnode):
103 104 list = self.transplants.setdefault(rnode, [])
104 105 list.append(transplantentry(lnode, rnode))
105 106 self.dirty = True
106 107
107 108 def remove(self, transplant):
108 109 list = self.transplants.get(transplant.rnode)
109 110 if list:
110 111 del list[list.index(transplant)]
111 112 self.dirty = True
112 113
113 114 class transplanter(object):
114 115 def __init__(self, ui, repo, opts):
115 116 self.ui = ui
116 117 self.path = repo.vfs.join('transplant')
117 118 self.opener = vfsmod.vfs(self.path)
118 119 self.transplants = transplants(self.path, 'transplants',
119 120 opener=self.opener)
120 121 def getcommiteditor():
121 122 editform = cmdutil.mergeeditform(repo[None], 'transplant')
122 123 return cmdutil.getcommiteditor(editform=editform, **opts)
123 124 self.getcommiteditor = getcommiteditor
124 125
125 126 def applied(self, repo, node, parent):
126 127 '''returns True if a node is already an ancestor of parent
127 128 or is parent or has already been transplanted'''
128 129 if hasnode(repo, parent):
129 130 parentrev = repo.changelog.rev(parent)
130 131 if hasnode(repo, node):
131 132 rev = repo.changelog.rev(node)
132 133 reachable = repo.changelog.ancestors([parentrev], rev,
133 134 inclusive=True)
134 135 if rev in reachable:
135 136 return True
136 137 for t in self.transplants.get(node):
137 138 # it might have been stripped
138 139 if not hasnode(repo, t.lnode):
139 140 self.transplants.remove(t)
140 141 return False
141 142 lnoderev = repo.changelog.rev(t.lnode)
142 143 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
143 144 inclusive=True):
144 145 return True
145 146 return False
146 147
147 148 def apply(self, repo, source, revmap, merges, opts=None):
148 149 '''apply the revisions in revmap one by one in revision order'''
149 150 if opts is None:
150 151 opts = {}
151 152 revs = sorted(revmap)
152 153 p1, p2 = repo.dirstate.parents()
153 154 pulls = []
154 155 diffopts = patch.difffeatureopts(self.ui, opts)
155 156 diffopts.git = True
156 157
157 158 lock = tr = None
158 159 try:
159 160 lock = repo.lock()
160 161 tr = repo.transaction('transplant')
161 162 for rev in revs:
162 163 node = revmap[rev]
163 164 revstr = '%s:%s' % (rev, nodemod.short(node))
164 165
165 166 if self.applied(repo, node, p1):
166 167 self.ui.warn(_('skipping already applied revision %s\n') %
167 168 revstr)
168 169 continue
169 170
170 171 parents = source.changelog.parents(node)
171 172 if not (opts.get('filter') or opts.get('log')):
172 173 # If the changeset parent is the same as the
173 174 # wdir's parent, just pull it.
174 175 if parents[0] == p1:
175 176 pulls.append(node)
176 177 p1 = node
177 178 continue
178 179 if pulls:
179 180 if source != repo:
180 181 exchange.pull(repo, source.peer(), heads=pulls)
181 182 merge.update(repo, pulls[-1], False, False)
182 183 p1, p2 = repo.dirstate.parents()
183 184 pulls = []
184 185
185 186 domerge = False
186 187 if node in merges:
187 188 # pulling all the merge revs at once would mean we
188 189 # couldn't transplant after the latest one even if
189 190 # transplants before it fail.
190 191 domerge = True
191 192 if not hasnode(repo, node):
192 193 exchange.pull(repo, source.peer(), heads=[node])
193 194
194 195 skipmerge = False
195 196 if parents[1] != revlog.nullid:
196 197 if not opts.get('parent'):
197 198 self.ui.note(_('skipping merge changeset %s:%s\n')
198 199 % (rev, nodemod.short(node)))
199 200 skipmerge = True
200 201 else:
201 202 parent = source.lookup(opts['parent'])
202 203 if parent not in parents:
203 204 raise error.Abort(_('%s is not a parent of %s') %
204 205 (nodemod.short(parent),
205 206 nodemod.short(node)))
206 207 else:
207 208 parent = parents[0]
208 209
209 210 if skipmerge:
210 211 patchfile = None
211 212 else:
212 213 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
213 214 fp = os.fdopen(fd, pycompat.sysstr('w'))
214 215 gen = patch.diff(source, parent, node, opts=diffopts)
215 216 for chunk in gen:
216 217 fp.write(chunk)
217 218 fp.close()
218 219
219 220 del revmap[rev]
220 221 if patchfile or domerge:
221 222 try:
222 223 try:
223 224 n = self.applyone(repo, node,
224 225 source.changelog.read(node),
225 226 patchfile, merge=domerge,
226 227 log=opts.get('log'),
227 228 filter=opts.get('filter'))
228 229 except TransplantError:
229 230 # Do not rollback, it is up to the user to
230 231 # fix the merge or cancel everything
231 232 tr.close()
232 233 raise
233 234 if n and domerge:
234 235 self.ui.status(_('%s merged at %s\n') % (revstr,
235 236 nodemod.short(n)))
236 237 elif n:
237 238 self.ui.status(_('%s transplanted to %s\n')
238 239 % (nodemod.short(node),
239 240 nodemod.short(n)))
240 241 finally:
241 242 if patchfile:
242 243 os.unlink(patchfile)
243 244 tr.close()
244 245 if pulls:
245 246 exchange.pull(repo, source.peer(), heads=pulls)
246 247 merge.update(repo, pulls[-1], False, False)
247 248 finally:
248 249 self.saveseries(revmap, merges)
249 250 self.transplants.write()
250 251 if tr:
251 252 tr.release()
252 253 if lock:
253 254 lock.release()
254 255
255 256 def filter(self, filter, node, changelog, patchfile):
256 257 '''arbitrarily rewrite changeset before applying it'''
257 258
258 259 self.ui.status(_('filtering %s\n') % patchfile)
259 260 user, date, msg = (changelog[1], changelog[2], changelog[4])
260 261 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
261 262 fp = os.fdopen(fd, pycompat.sysstr('w'))
262 263 fp.write("# HG changeset patch\n")
263 264 fp.write("# User %s\n" % user)
264 265 fp.write("# Date %d %d\n" % date)
265 266 fp.write(msg + '\n')
266 267 fp.close()
267 268
268 269 try:
269 270 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
270 271 util.shellquote(patchfile)),
271 272 environ={'HGUSER': changelog[1],
272 273 'HGREVISION': nodemod.hex(node),
273 274 },
274 275 onerr=error.Abort, errprefix=_('filter failed'),
275 276 blockedtag='transplant_filter')
276 277 user, date, msg = self.parselog(file(headerfile))[1:4]
277 278 finally:
278 279 os.unlink(headerfile)
279 280
280 281 return (user, date, msg)
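    # Editorial note (not part of the original source): the filter program is
    # configured via `[transplant] filter = <command>` and is invoked as
    # `<command> <headerfile> <patchfile>` with HGUSER and HGREVISION set in
    # the environment, so it may rewrite the commit message, user and date in
    # the header file before the patch is applied.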
281 282
282 283 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
283 284 filter=None):
284 285 '''apply the patch in patchfile to the repository as a transplant'''
285 286 (manifest, user, (time, timezone), files, message) = cl[:5]
286 287 date = "%d %d" % (time, timezone)
287 288 extra = {'transplant_source': node}
288 289 if filter:
289 290 (user, date, message) = self.filter(filter, node, cl, patchfile)
290 291
291 292 if log:
292 293 # we don't translate messages inserted into commits
293 294 message += '\n(transplanted from %s)' % nodemod.hex(node)
294 295
295 296 self.ui.status(_('applying %s\n') % nodemod.short(node))
296 297 self.ui.note('%s %s\n%s\n' % (user, date, message))
297 298
298 299 if not patchfile and not merge:
299 300 raise error.Abort(_('can only omit patchfile if merging'))
300 301 if patchfile:
301 302 try:
302 303 files = set()
303 304 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
304 305 files = list(files)
305 306 except Exception as inst:
306 307 seriespath = os.path.join(self.path, 'series')
307 308 if os.path.exists(seriespath):
308 309 os.unlink(seriespath)
309 310 p1 = repo.dirstate.p1()
310 311 p2 = node
311 312 self.log(user, date, message, p1, p2, merge=merge)
312 313 self.ui.write(str(inst) + '\n')
313 314 raise TransplantError(_('fix up the working directory and run '
314 315 'hg transplant --continue'))
315 316 else:
316 317 files = None
317 318 if merge:
318 319 p1, p2 = repo.dirstate.parents()
319 320 repo.setparents(p1, node)
320 321 m = match.always(repo.root, '')
321 322 else:
322 323 m = match.exact(repo.root, '', files)
323 324
324 325 n = repo.commit(message, user, date, extra=extra, match=m,
325 326 editor=self.getcommiteditor())
326 327 if not n:
327 328 self.ui.warn(_('skipping emptied changeset %s\n') %
328 329 nodemod.short(node))
329 330 return None
330 331 if not merge:
331 332 self.transplants.set(n, node)
332 333
333 334 return n
334 335
335 336 def canresume(self):
336 337 return os.path.exists(os.path.join(self.path, 'journal'))
337 338
338 339 def resume(self, repo, source, opts):
339 340 '''recover last transaction and apply remaining changesets'''
340 341 if os.path.exists(os.path.join(self.path, 'journal')):
341 342 n, node = self.recover(repo, source, opts)
342 343 if n:
343 344 self.ui.status(_('%s transplanted as %s\n') %
344 345 (nodemod.short(node),
345 346 nodemod.short(n)))
346 347 else:
347 348 self.ui.status(_('%s skipped due to empty diff\n')
348 349 % (nodemod.short(node),))
349 350 seriespath = os.path.join(self.path, 'series')
350 351 if not os.path.exists(seriespath):
351 352 self.transplants.write()
352 353 return
353 354 nodes, merges = self.readseries()
354 355 revmap = {}
355 356 for n in nodes:
356 357 revmap[source.changelog.rev(n)] = n
357 358 os.unlink(seriespath)
358 359
359 360 self.apply(repo, source, revmap, merges, opts)
360 361
361 362 def recover(self, repo, source, opts):
362 363 '''commit working directory using journal metadata'''
363 364 node, user, date, message, parents = self.readlog()
364 365 merge = False
365 366
366 367 if not user or not date or not message or not parents[0]:
367 368 raise error.Abort(_('transplant log file is corrupt'))
368 369
369 370 parent = parents[0]
370 371 if len(parents) > 1:
371 372 if opts.get('parent'):
372 373 parent = source.lookup(opts['parent'])
373 374 if parent not in parents:
374 375 raise error.Abort(_('%s is not a parent of %s') %
375 376 (nodemod.short(parent),
376 377 nodemod.short(node)))
377 378 else:
378 379 merge = True
379 380
380 381 extra = {'transplant_source': node}
381 382 try:
382 383 p1, p2 = repo.dirstate.parents()
383 384 if p1 != parent:
384 385 raise error.Abort(_('working directory not at transplant '
385 386 'parent %s') % nodemod.hex(parent))
386 387 if merge:
387 388 repo.setparents(p1, parents[1])
388 389 modified, added, removed, deleted = repo.status()[:4]
389 390 if merge or modified or added or removed or deleted:
390 391 n = repo.commit(message, user, date, extra=extra,
391 392 editor=self.getcommiteditor())
392 393 if not n:
393 394 raise error.Abort(_('commit failed'))
394 395 if not merge:
395 396 self.transplants.set(n, node)
396 397 else:
397 398 n = None
398 399 self.unlog()
399 400
400 401 return n, node
401 402 finally:
402 403 # TODO: get rid of this meaningless try/finally enclosing.
403 404 # this is kept only to reduce changes in a patch.
404 405 pass
405 406
406 407 def readseries(self):
407 408 nodes = []
408 409 merges = []
409 410 cur = nodes
410 411 for line in self.opener.read('series').splitlines():
411 412 if line.startswith('# Merges'):
412 413 cur = merges
413 414 continue
414 415 cur.append(revlog.bin(line))
415 416
416 417 return (nodes, merges)
417 418
418 419 def saveseries(self, revmap, merges):
419 420 if not revmap:
420 421 return
421 422
422 423 if not os.path.isdir(self.path):
423 424 os.mkdir(self.path)
424 425 series = self.opener('series', 'w')
425 426 for rev in sorted(revmap):
426 427 series.write(nodemod.hex(revmap[rev]) + '\n')
427 428 if merges:
428 429 series.write('# Merges\n')
429 430 for m in merges:
430 431 series.write(nodemod.hex(m) + '\n')
431 432 series.close()
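# Editorial note (not part of the original source): the resulting 'series'
# file is a plain list of hex node ids, with merge nodes listed after a
# '# Merges' marker, e.g. (hashes hypothetical):
#
#   3a5cf2c8e6f1...
#   9f1d0bb47a02...
#   # Merges
#   77aa01c2d9b3...
#
# readseries() above parses this back into (nodes, merges).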
432 433
433 434 def parselog(self, fp):
434 435 parents = []
435 436 message = []
436 437 node = revlog.nullid
437 438 inmsg = False
438 439 user = None
439 440 date = None
440 441 for line in fp.read().splitlines():
441 442 if inmsg:
442 443 message.append(line)
443 444 elif line.startswith('# User '):
444 445 user = line[7:]
445 446 elif line.startswith('# Date '):
446 447 date = line[7:]
447 448 elif line.startswith('# Node ID '):
448 449 node = revlog.bin(line[10:])
449 450 elif line.startswith('# Parent '):
450 451 parents.append(revlog.bin(line[9:]))
451 452 elif not line.startswith('# '):
452 453 inmsg = True
453 454 message.append(line)
454 455 if None in (user, date):
455 456 raise error.Abort(_("filter corrupted changeset (no user or date)"))
456 457 return (node, user, date, '\n'.join(message), parents)
457 458
458 459 def log(self, user, date, message, p1, p2, merge=False):
459 460 '''journal changelog metadata for later recover'''
460 461
461 462 if not os.path.isdir(self.path):
462 463 os.mkdir(self.path)
463 464 fp = self.opener('journal', 'w')
464 465 fp.write('# User %s\n' % user)
465 466 fp.write('# Date %s\n' % date)
466 467 fp.write('# Node ID %s\n' % nodemod.hex(p2))
467 468 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
468 469 if merge:
469 470 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
470 471 fp.write(message.rstrip() + '\n')
471 472 fp.close()
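# Editorial note (not part of the original source): the journal written here
# is what canresume()/recover() consult after a failed application, e.g.
# (values hypothetical):
#
#   # User alice@example.com
#   # Date 1520000000 0
#   # Node ID <hex of the transplanted source node>
#   # Parent <hex of the working directory parent>
#   original commit message
#
# readlog() feeds it back through parselog() to recover
# (node, user, date, message, parents).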
472 473
473 474 def readlog(self):
474 475 return self.parselog(self.opener('journal'))
475 476
476 477 def unlog(self):
477 478 '''remove changelog journal'''
478 479 absdst = os.path.join(self.path, 'journal')
479 480 if os.path.exists(absdst):
480 481 os.unlink(absdst)
481 482
482 483 def transplantfilter(self, repo, source, root):
483 484 def matchfn(node):
484 485 if self.applied(repo, node, root):
485 486 return False
486 487 if source.changelog.parents(node)[1] != revlog.nullid:
487 488 return False
488 489 extra = source.changelog.read(node)[5]
489 490 cnode = extra.get('transplant_source')
490 491 if cnode and self.applied(repo, cnode, root):
491 492 return False
492 493 return True
493 494
494 495 return matchfn
495 496
496 497 def hasnode(repo, node):
497 498 try:
498 499 return repo.changelog.rev(node) is not None
499 500 except error.RevlogError:
500 501 return False
501 502
502 503 def browserevs(ui, repo, nodes, opts):
503 504 '''interactively transplant changesets'''
504 displayer = cmdutil.show_changeset(ui, repo, opts)
505 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
505 506 transplants = []
506 507 merges = []
507 508 prompt = _('apply changeset? [ynmpcq?]:'
508 509 '$$ &yes, transplant this changeset'
509 510 '$$ &no, skip this changeset'
510 511 '$$ &merge at this changeset'
511 512 '$$ show &patch'
512 513 '$$ &commit selected changesets'
513 514 '$$ &quit and cancel transplant'
514 515 '$$ &? (show this help)')
515 516 for node in nodes:
516 517 displayer.show(repo[node])
517 518 action = None
518 519 while not action:
519 520 action = 'ynmpcq?'[ui.promptchoice(prompt)]
520 521 if action == '?':
521 522 for c, t in ui.extractchoices(prompt)[1]:
522 523 ui.write('%s: %s\n' % (c, t))
523 524 action = None
524 525 elif action == 'p':
525 526 parent = repo.changelog.parents(node)[0]
526 527 for chunk in patch.diff(repo, parent, node):
527 528 ui.write(chunk)
528 529 action = None
529 530 if action == 'y':
530 531 transplants.append(node)
531 532 elif action == 'm':
532 533 merges.append(node)
533 534 elif action == 'c':
534 535 break
535 536 elif action == 'q':
536 537 transplants = ()
537 538 merges = ()
538 539 break
539 540 displayer.close()
540 541 return (transplants, merges)
541 542
542 543 @command('transplant',
543 544 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
544 545 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
545 546 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
546 547 ('p', 'prune', [], _('skip over REV'), _('REV')),
547 548 ('m', 'merge', [], _('merge at REV'), _('REV')),
548 549 ('', 'parent', '',
549 550 _('parent to choose when transplanting merge'), _('REV')),
550 551 ('e', 'edit', False, _('invoke editor on commit messages')),
551 552 ('', 'log', None, _('append transplant info to log message')),
552 553 ('c', 'continue', None, _('continue last transplant session '
553 554 'after fixing conflicts')),
554 555 ('', 'filter', '',
555 556 _('filter changesets through command'), _('CMD'))],
556 557 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
557 558 '[-m REV] [REV]...'))
558 559 def transplant(ui, repo, *revs, **opts):
559 560 '''transplant changesets from another branch
560 561
561 562 Selected changesets will be applied on top of the current working
562 563 directory with the log of the original changeset. The changesets
563 564 are copied and will thus appear twice in the history with different
564 565 identities.
565 566
566 567 Consider using the graft command if everything is inside the same
567 568 repository - it will use merges and will usually give a better result.
568 569 Use the rebase extension if the changesets are unpublished and you want
569 570 to move them instead of copying them.
570 571
571 572 If --log is specified, log messages will have a comment appended
572 573 of the form::
573 574
574 575 (transplanted from CHANGESETHASH)
575 576
576 577 You can rewrite the changelog message with the --filter option.
577 578 Its argument will be invoked with the current changelog message as
578 579 $1 and the patch as $2.
579 580
580 581 --source/-s specifies another repository to use for selecting changesets,
581 582 just as if it temporarily had been pulled.
582 583 If --branch/-b is specified, these revisions will be used as
583 584 heads when deciding which changesets to transplant, just as if only
584 585 these revisions had been pulled.
585 586 If --all/-a is specified, all the revisions up to the heads specified
586 587 with --branch will be transplanted.
587 588
588 589 Example:
589 590
590 591 - transplant all changes up to REV on top of your current revision::
591 592
592 593 hg transplant --branch REV --all
593 594
594 595 You can optionally mark selected transplanted changesets as merge
595 596 changesets. You will not be prompted to transplant any ancestors
596 597 of a merged transplant, and you can merge descendants of them
597 598 normally instead of transplanting them.
598 599
599 600 Merge changesets may be transplanted directly by specifying the
600 601 proper parent changeset by calling :hg:`transplant --parent`.
601 602
602 603 If no merges or revisions are provided, :hg:`transplant` will
603 604 start an interactive changeset browser.
604 605
605 606 If a changeset application fails, you can fix the merge by hand
606 607 and then resume where you left off by calling :hg:`transplant
607 608 --continue/-c`.
608 609 '''
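# Editorial sketch (not part of the original source): the --filter command is
# run as "CMD <headerfile> <patchfile>", with HGUSER and HGREVISION set in its
# environment, so a hypothetical filter script could append a line to the
# commit message by appending to its first argument:
#
#   #!/bin/sh
#   echo "Transplanted-by: $HGUSER" >> "$1"
#
# and be invoked as: hg transplant --filter ./myfilter.sh -s ../src REV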
609 610 with repo.wlock():
610 611 return _dotransplant(ui, repo, *revs, **opts)
611 612
612 613 def _dotransplant(ui, repo, *revs, **opts):
613 614 def incwalk(repo, csets, match=util.always):
614 615 for node in csets:
615 616 if match(node):
616 617 yield node
617 618
618 619 def transplantwalk(repo, dest, heads, match=util.always):
619 620 '''Yield all nodes that are ancestors of a head but not ancestors
620 621 of dest.
621 622 If no heads are specified, the heads of repo will be used.'''
622 623 if not heads:
623 624 heads = repo.heads()
624 625 ancestors = []
625 626 ctx = repo[dest]
626 627 for head in heads:
627 628 ancestors.append(ctx.ancestor(repo[head]).node())
628 629 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
629 630 if match(node):
630 631 yield node
631 632
632 633 def checkopts(opts, revs):
633 634 if opts.get('continue'):
634 635 if opts.get('branch') or opts.get('all') or opts.get('merge'):
635 636 raise error.Abort(_('--continue is incompatible with '
636 637 '--branch, --all and --merge'))
637 638 return
638 639 if not (opts.get('source') or revs or
639 640 opts.get('merge') or opts.get('branch')):
640 641 raise error.Abort(_('no source URL, branch revision, or revision '
641 642 'list provided'))
642 643 if opts.get('all'):
643 644 if not opts.get('branch'):
644 645 raise error.Abort(_('--all requires a branch revision'))
645 646 if revs:
646 647 raise error.Abort(_('--all is incompatible with a '
647 648 'revision list'))
648 649
649 650 checkopts(opts, revs)
650 651
651 652 if not opts.get('log'):
652 653 # deprecated config: transplant.log
653 654 opts['log'] = ui.config('transplant', 'log')
654 655 if not opts.get('filter'):
655 656 # deprecated config: transplant.filter
656 657 opts['filter'] = ui.config('transplant', 'filter')
657 658
658 659 tp = transplanter(ui, repo, opts)
659 660
660 661 p1, p2 = repo.dirstate.parents()
661 662 if len(repo) > 0 and p1 == revlog.nullid:
662 663 raise error.Abort(_('no revision checked out'))
663 664 if opts.get('continue'):
664 665 if not tp.canresume():
665 666 raise error.Abort(_('no transplant to continue'))
666 667 else:
667 668 cmdutil.checkunfinished(repo)
668 669 if p2 != revlog.nullid:
669 670 raise error.Abort(_('outstanding uncommitted merges'))
670 671 m, a, r, d = repo.status()[:4]
671 672 if m or a or r or d:
672 673 raise error.Abort(_('outstanding local changes'))
673 674
674 675 sourcerepo = opts.get('source')
675 676 if sourcerepo:
676 677 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
677 678 heads = map(peer.lookup, opts.get('branch', ()))
678 679 target = set(heads)
679 680 for r in revs:
680 681 try:
681 682 target.add(peer.lookup(r))
682 683 except error.RepoError:
683 684 pass
684 685 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
685 686 onlyheads=sorted(target), force=True)
686 687 else:
687 688 source = repo
688 689 heads = map(source.lookup, opts.get('branch', ()))
689 690 cleanupfn = None
690 691
691 692 try:
692 693 if opts.get('continue'):
693 694 tp.resume(repo, source, opts)
694 695 return
695 696
696 697 tf = tp.transplantfilter(repo, source, p1)
697 698 if opts.get('prune'):
698 699 prune = set(source.lookup(r)
699 700 for r in scmutil.revrange(source, opts.get('prune')))
700 701 matchfn = lambda x: tf(x) and x not in prune
701 702 else:
702 703 matchfn = tf
703 704 merges = map(source.lookup, opts.get('merge', ()))
704 705 revmap = {}
705 706 if revs:
706 707 for r in scmutil.revrange(source, revs):
707 708 revmap[int(r)] = source.lookup(r)
708 709 elif opts.get('all') or not merges:
709 710 if source != repo:
710 711 alltransplants = incwalk(source, csets, match=matchfn)
711 712 else:
712 713 alltransplants = transplantwalk(source, p1, heads,
713 714 match=matchfn)
714 715 if opts.get('all'):
715 716 revs = alltransplants
716 717 else:
717 718 revs, newmerges = browserevs(ui, source, alltransplants, opts)
718 719 merges.extend(newmerges)
719 720 for r in revs:
720 721 revmap[source.changelog.rev(r)] = r
721 722 for r in merges:
722 723 revmap[source.changelog.rev(r)] = r
723 724
724 725 tp.apply(repo, source, revmap, merges, opts)
725 726 finally:
726 727 if cleanupfn:
727 728 cleanupfn()
728 729
729 730 revsetpredicate = registrar.revsetpredicate()
730 731
731 732 @revsetpredicate('transplanted([set])')
732 733 def revsettransplanted(repo, subset, x):
733 734 """Transplanted changesets in set, or all transplanted changesets.
734 735 """
735 736 if x:
736 737 s = revset.getset(repo, subset, x)
737 738 else:
738 739 s = subset
739 740 return smartset.baseset([r for r in s if
740 741 repo[r].extra().get('transplant_source')])
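# Illustrative usage (editorial note), matching the predicate documented above:
#
#   hg log -r "transplanted()"        # every transplanted changeset
#   hg log -r "transplanted(::tip)"   # transplanted ancestors of tip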
741 742
742 743 templatekeyword = registrar.templatekeyword()
743 744
744 745 @templatekeyword('transplanted')
745 746 def kwtransplanted(repo, ctx, **args):
746 747 """String. The node identifier of the transplanted
747 748 changeset if any."""
748 749 n = ctx.extra().get('transplant_source')
749 750 return n and nodemod.hex(n) or ''
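# Illustrative usage (editorial note):
#
#   hg log --template "{rev} {transplanted}\n"
#
# prints the source node hash for transplanted changesets and an empty
# string for everything else.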
750 751
751 752 def extsetup(ui):
752 753 cmdutil.unfinishedstates.append(
753 754 ['transplant/journal', True, False, _('transplant in progress'),
754 755 _("use 'hg transplant --continue' or 'hg update' to abort")])
755 756
756 757 # tell hggettext to extract docstrings from these functions:
757 758 i18nfunctions = [revsettransplanted, kwtransplanted]
@@ -1,3155 +1,3139 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import os
12 12 import re
13 13 import tempfile
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 18 nullid,
19 19 nullrev,
20 20 short,
21 21 )
22 22
23 23 from . import (
24 24 bookmarks,
25 25 changelog,
26 26 copies,
27 27 crecord as crecordmod,
28 28 dirstateguard,
29 29 encoding,
30 30 error,
31 31 formatter,
32 32 logcmdutil,
33 33 match as matchmod,
34 34 obsolete,
35 35 patch,
36 36 pathutil,
37 37 pycompat,
38 38 registrar,
39 39 revlog,
40 40 rewriteutil,
41 41 scmutil,
42 42 smartset,
43 43 templater,
44 44 util,
45 45 vfs as vfsmod,
46 46 )
47 47 stringio = util.stringio
48 48
49 loglimit = logcmdutil.getlimit
50 diffordiffstat = logcmdutil.diffordiffstat
51 _changesetlabels = logcmdutil.changesetlabels
52 changeset_printer = logcmdutil.changesetprinter
53 jsonchangeset = logcmdutil.jsonchangeset
54 changeset_templater = logcmdutil.changesettemplater
55 logtemplatespec = logcmdutil.templatespec
56 makelogtemplater = logcmdutil.maketemplater
57 show_changeset = logcmdutil.changesetdisplayer
58 getlogrevs = logcmdutil.getrevs
59 getloglinerangerevs = logcmdutil.getlinerangerevs
60 displaygraph = logcmdutil.displaygraph
61 graphlog = logcmdutil.graphlog
62 checkunsupportedgraphflags = logcmdutil.checkunsupportedgraphflags
63 graphrevs = logcmdutil.graphrevs
64
65 49 # templates of common command options
66 50
67 51 dryrunopts = [
68 52 ('n', 'dry-run', None,
69 53 _('do not perform actions, just print output')),
70 54 ]
71 55
72 56 remoteopts = [
73 57 ('e', 'ssh', '',
74 58 _('specify ssh command to use'), _('CMD')),
75 59 ('', 'remotecmd', '',
76 60 _('specify hg command to run on the remote side'), _('CMD')),
77 61 ('', 'insecure', None,
78 62 _('do not verify server certificate (ignoring web.cacerts config)')),
79 63 ]
80 64
81 65 walkopts = [
82 66 ('I', 'include', [],
83 67 _('include names matching the given patterns'), _('PATTERN')),
84 68 ('X', 'exclude', [],
85 69 _('exclude names matching the given patterns'), _('PATTERN')),
86 70 ]
87 71
88 72 commitopts = [
89 73 ('m', 'message', '',
90 74 _('use text as commit message'), _('TEXT')),
91 75 ('l', 'logfile', '',
92 76 _('read commit message from file'), _('FILE')),
93 77 ]
94 78
95 79 commitopts2 = [
96 80 ('d', 'date', '',
97 81 _('record the specified date as commit date'), _('DATE')),
98 82 ('u', 'user', '',
99 83 _('record the specified user as committer'), _('USER')),
100 84 ]
101 85
102 86 # hidden for now
103 87 formatteropts = [
104 88 ('T', 'template', '',
105 89 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
106 90 ]
107 91
108 92 templateopts = [
109 93 ('', 'style', '',
110 94 _('display using template map file (DEPRECATED)'), _('STYLE')),
111 95 ('T', 'template', '',
112 96 _('display with template'), _('TEMPLATE')),
113 97 ]
114 98
115 99 logopts = [
116 100 ('p', 'patch', None, _('show patch')),
117 101 ('g', 'git', None, _('use git extended diff format')),
118 102 ('l', 'limit', '',
119 103 _('limit number of changes displayed'), _('NUM')),
120 104 ('M', 'no-merges', None, _('do not show merges')),
121 105 ('', 'stat', None, _('output diffstat-style summary of changes')),
122 106 ('G', 'graph', None, _("show the revision DAG")),
123 107 ] + templateopts
124 108
125 109 diffopts = [
126 110 ('a', 'text', None, _('treat all files as text')),
127 111 ('g', 'git', None, _('use git extended diff format')),
128 112 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
129 113 ('', 'nodates', None, _('omit dates from diff headers'))
130 114 ]
131 115
132 116 diffwsopts = [
133 117 ('w', 'ignore-all-space', None,
134 118 _('ignore white space when comparing lines')),
135 119 ('b', 'ignore-space-change', None,
136 120 _('ignore changes in the amount of white space')),
137 121 ('B', 'ignore-blank-lines', None,
138 122 _('ignore changes whose lines are all blank')),
139 123 ('Z', 'ignore-space-at-eol', None,
140 124 _('ignore changes in whitespace at EOL')),
141 125 ]
142 126
143 127 diffopts2 = [
144 128 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
145 129 ('p', 'show-function', None, _('show which function each change is in')),
146 130 ('', 'reverse', None, _('produce a diff that undoes the changes')),
147 131 ] + diffwsopts + [
148 132 ('U', 'unified', '',
149 133 _('number of lines of context to show'), _('NUM')),
150 134 ('', 'stat', None, _('output diffstat-style summary of changes')),
151 135 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
152 136 ]
153 137
154 138 mergetoolopts = [
155 139 ('t', 'tool', '', _('specify merge tool')),
156 140 ]
157 141
158 142 similarityopts = [
159 143 ('s', 'similarity', '',
160 144 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
161 145 ]
162 146
163 147 subrepoopts = [
164 148 ('S', 'subrepos', None,
165 149 _('recurse into subrepositories'))
166 150 ]
167 151
168 152 debugrevlogopts = [
169 153 ('c', 'changelog', False, _('open changelog')),
170 154 ('m', 'manifest', False, _('open manifest')),
171 155 ('', 'dir', '', _('open directory manifest')),
172 156 ]
173 157
174 158 # special string such that everything below this line will be ignored in the
175 159 # editor text
176 160 _linebelow = "^HG: ------------------------ >8 ------------------------$"
177 161
178 162 def ishunk(x):
179 163 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
180 164 return isinstance(x, hunkclasses)
181 165
182 166 def newandmodified(chunks, originalchunks):
183 167 newlyaddedandmodifiedfiles = set()
184 168 for chunk in chunks:
185 169 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
186 170 originalchunks:
187 171 newlyaddedandmodifiedfiles.add(chunk.header.filename())
188 172 return newlyaddedandmodifiedfiles
189 173
190 174 def parsealiases(cmd):
191 175 return cmd.lstrip("^").split("|")
192 176
193 177 def setupwrapcolorwrite(ui):
194 178 # wrap ui.write so diff output can be labeled/colorized
195 179 def wrapwrite(orig, *args, **kw):
196 180 label = kw.pop(r'label', '')
197 181 for chunk, l in patch.difflabel(lambda: args):
198 182 orig(chunk, label=label + l)
199 183
200 184 oldwrite = ui.write
201 185 def wrap(*args, **kwargs):
202 186 return wrapwrite(oldwrite, *args, **kwargs)
203 187 setattr(ui, 'write', wrap)
204 188 return oldwrite
205 189
206 190 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
207 191 if usecurses:
208 192 if testfile:
209 193 recordfn = crecordmod.testdecorator(testfile,
210 194 crecordmod.testchunkselector)
211 195 else:
212 196 recordfn = crecordmod.chunkselector
213 197
214 198 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
215 199
216 200 else:
217 201 return patch.filterpatch(ui, originalhunks, operation)
218 202
219 203 def recordfilter(ui, originalhunks, operation=None):
220 204 """ Prompts the user to filter the originalhunks and return a list of
221 205 selected hunks.
222 206 *operation* is used to build ui messages to indicate to the user what
223 207 kind of filtering they are doing: reverting, committing, shelving, etc.
224 208 (see patch.filterpatch).
225 209 """
226 210 usecurses = crecordmod.checkcurses(ui)
227 211 testfile = ui.config('experimental', 'crecordtest')
228 212 oldwrite = setupwrapcolorwrite(ui)
229 213 try:
230 214 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
231 215 testfile, operation)
232 216 finally:
233 217 ui.write = oldwrite
234 218 return newchunks, newopts
235 219
236 220 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
237 221 filterfn, *pats, **opts):
238 222 from . import merge as mergemod
239 223 opts = pycompat.byteskwargs(opts)
240 224 if not ui.interactive():
241 225 if cmdsuggest:
242 226 msg = _('running non-interactively, use %s instead') % cmdsuggest
243 227 else:
244 228 msg = _('running non-interactively')
245 229 raise error.Abort(msg)
246 230
247 231 # make sure username is set before going interactive
248 232 if not opts.get('user'):
249 233 ui.username() # raise exception, username not provided
250 234
251 235 def recordfunc(ui, repo, message, match, opts):
252 236 """This is generic record driver.
253 237
254 238 Its job is to interactively filter local changes, and
255 239 accordingly prepare working directory into a state in which the
256 240 job can be delegated to a non-interactive commit command such as
257 241 'commit' or 'qrefresh'.
258 242
259 243 After the actual job is done by non-interactive command, the
260 244 working directory is restored to its original state.
261 245
262 246 In the end we'll record interesting changes, and everything else
263 247 will be left in place, so the user can continue working.
264 248 """
265 249
266 250 checkunfinished(repo, commit=True)
267 251 wctx = repo[None]
268 252 merge = len(wctx.parents()) > 1
269 253 if merge:
270 254 raise error.Abort(_('cannot partially commit a merge '
271 255 '(use "hg commit" instead)'))
272 256
273 257 def fail(f, msg):
274 258 raise error.Abort('%s: %s' % (f, msg))
275 259
276 260 force = opts.get('force')
277 261 if not force:
278 262 vdirs = []
279 263 match.explicitdir = vdirs.append
280 264 match.bad = fail
281 265
282 266 status = repo.status(match=match)
283 267 if not force:
284 268 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
285 269 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
286 270 diffopts.nodates = True
287 271 diffopts.git = True
288 272 diffopts.showfunc = True
289 273 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
290 274 originalchunks = patch.parsepatch(originaldiff)
291 275
292 276 # 1. filter patch, since we are intending to apply subset of it
293 277 try:
294 278 chunks, newopts = filterfn(ui, originalchunks)
295 279 except error.PatchError as err:
296 280 raise error.Abort(_('error parsing patch: %s') % err)
297 281 opts.update(newopts)
298 282
299 283 # We need to keep a backup of files that have been newly added and
300 284 # modified during the recording process because there is a previous
301 285 # version without the edit in the workdir
302 286 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
303 287 contenders = set()
304 288 for h in chunks:
305 289 try:
306 290 contenders.update(set(h.files()))
307 291 except AttributeError:
308 292 pass
309 293
310 294 changed = status.modified + status.added + status.removed
311 295 newfiles = [f for f in changed if f in contenders]
312 296 if not newfiles:
313 297 ui.status(_('no changes to record\n'))
314 298 return 0
315 299
316 300 modified = set(status.modified)
317 301
318 302 # 2. backup changed files, so we can restore them in the end
319 303
320 304 if backupall:
321 305 tobackup = changed
322 306 else:
323 307 tobackup = [f for f in newfiles if f in modified or f in \
324 308 newlyaddedandmodifiedfiles]
325 309 backups = {}
326 310 if tobackup:
327 311 backupdir = repo.vfs.join('record-backups')
328 312 try:
329 313 os.mkdir(backupdir)
330 314 except OSError as err:
331 315 if err.errno != errno.EEXIST:
332 316 raise
333 317 try:
334 318 # backup continues
335 319 for f in tobackup:
336 320 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
337 321 dir=backupdir)
338 322 os.close(fd)
339 323 ui.debug('backup %r as %r\n' % (f, tmpname))
340 324 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
341 325 backups[f] = tmpname
342 326
343 327 fp = stringio()
344 328 for c in chunks:
345 329 fname = c.filename()
346 330 if fname in backups:
347 331 c.write(fp)
348 332 dopatch = fp.tell()
349 333 fp.seek(0)
350 334
351 335 # 2.5 optionally review / modify patch in text editor
352 336 if opts.get('review', False):
353 337 patchtext = (crecordmod.diffhelptext
354 338 + crecordmod.patchhelptext
355 339 + fp.read())
356 340 reviewedpatch = ui.edit(patchtext, "",
357 341 action="diff",
358 342 repopath=repo.path)
359 343 fp.truncate(0)
360 344 fp.write(reviewedpatch)
361 345 fp.seek(0)
362 346
363 347 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
364 348 # 3a. apply filtered patch to clean repo (clean)
365 349 if backups:
366 350 # Equivalent to hg.revert
367 351 m = scmutil.matchfiles(repo, backups.keys())
368 352 mergemod.update(repo, repo.dirstate.p1(),
369 353 False, True, matcher=m)
370 354
371 355 # 3b. (apply)
372 356 if dopatch:
373 357 try:
374 358 ui.debug('applying patch\n')
375 359 ui.debug(fp.getvalue())
376 360 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
377 361 except error.PatchError as err:
378 362 raise error.Abort(str(err))
379 363 del fp
380 364
381 365 # 4. We prepared working directory according to filtered
382 366 # patch. Now is the time to delegate the job to
383 367 # commit/qrefresh or the like!
384 368
385 369 # Make all of the pathnames absolute.
386 370 newfiles = [repo.wjoin(nf) for nf in newfiles]
387 371 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
388 372 finally:
389 373 # 5. finally restore backed-up files
390 374 try:
391 375 dirstate = repo.dirstate
392 376 for realname, tmpname in backups.iteritems():
393 377 ui.debug('restoring %r to %r\n' % (tmpname, realname))
394 378
395 379 if dirstate[realname] == 'n':
396 380 # without normallookup, restoring timestamp
397 381 # may cause partially committed files
398 382 # to be treated as unmodified
399 383 dirstate.normallookup(realname)
400 384
401 385 # copystat=True here and above are a hack to trick any
402 386 # editors that have f open into thinking we haven't modified them.
403 387 #
404 388 # Also note that this is racy, as an editor could notice the
405 389 # file's mtime before we've finished writing it.
406 390 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
407 391 os.unlink(tmpname)
408 392 if tobackup:
409 393 os.rmdir(backupdir)
410 394 except OSError:
411 395 pass
412 396
413 397 def recordinwlock(ui, repo, message, match, opts):
414 398 with repo.wlock():
415 399 return recordfunc(ui, repo, message, match, opts)
416 400
417 401 return commit(ui, repo, recordinwlock, pats, opts)
418 402
419 403 class dirnode(object):
420 404 """
421 405 Represent a directory in user working copy with information required for
422 406 the purpose of tersing its status.
423 407
424 408 path is the path to the directory
425 409
426 410 statuses is a set of statuses of all files in this directory (this includes
427 411 all the files in all the subdirectories too)
428 412
429 413 files is a list of files which are direct child of this directory
430 414
431 415 subdirs is a dictionary with the sub-directory name as the key and its own
432 416 dirnode object as the value
433 417 """
434 418
435 419 def __init__(self, dirpath):
436 420 self.path = dirpath
437 421 self.statuses = set([])
438 422 self.files = []
439 423 self.subdirs = {}
440 424
441 425 def _addfileindir(self, filename, status):
442 426 """Add a file in this directory as a direct child."""
443 427 self.files.append((filename, status))
444 428
445 429 def addfile(self, filename, status):
446 430 """
447 431 Add a file to this directory or to its direct parent directory.
448 432
449 433 If the file is not direct child of this directory, we traverse to the
450 434 directory of which this file is a direct child of and add the file
451 435 there.
452 436 """
453 437
454 438 # the filename contains a path separator, so it is not a direct
455 439 # child of this directory
456 440 if '/' in filename:
457 441 subdir, filep = filename.split('/', 1)
458 442
459 443 # does the dirnode object for subdir exist?
460 444 if subdir not in self.subdirs:
461 445 subdirpath = os.path.join(self.path, subdir)
462 446 self.subdirs[subdir] = dirnode(subdirpath)
463 447
464 448 # try adding the file in subdir
465 449 self.subdirs[subdir].addfile(filep, status)
466 450
467 451 else:
468 452 self._addfileindir(filename, status)
469 453
470 454 if status not in self.statuses:
471 455 self.statuses.add(status)
472 456
473 457 def iterfilepaths(self):
474 458 """Yield (status, path) for files directly under this directory."""
475 459 for f, st in self.files:
476 460 yield st, os.path.join(self.path, f)
477 461
478 462 def tersewalk(self, terseargs):
479 463 """
480 464 Yield (status, path) obtained by processing the status of this
481 465 dirnode.
482 466
483 467 terseargs is the string of arguments passed by the user with `--terse`
484 468 flag.
485 469
486 470 Following are the cases which can happen:
487 471
488 472 1) All the files in the directory (including all the files in its
489 473 subdirectories) share the same status and the user has asked us to terse
490 474 that status. -> yield (status, dirpath)
491 475
492 476 2) Otherwise, we do the following:
493 477
494 478 a) Yield (status, filepath) for all the files which are in this
495 479 directory (only the ones in this directory, not the subdirs)
496 480
497 481 b) Recurse the function on all the subdirectories of this
498 482 directory
499 483 """
500 484
501 485 if len(self.statuses) == 1:
502 486 onlyst = self.statuses.pop()
503 487
504 488 # Making sure we terse only when the status abbreviation is
505 489 # passed as terse argument
506 490 if onlyst in terseargs:
507 491 yield onlyst, self.path + pycompat.ossep
508 492 return
509 493
510 494 # add the files to status list
511 495 for st, fpath in self.iterfilepaths():
512 496 yield st, fpath
513 497
514 498 #recurse on the subdirs
515 499 for dirobj in self.subdirs.values():
516 500 for st, fpath in dirobj.tersewalk(terseargs):
517 501 yield st, fpath
518 502
519 503 def tersedir(statuslist, terseargs):
520 504 """
521 505 Terse the status if all the files in a directory share the same status.
522 506
523 507 statuslist is scmutil.status() object which contains a list of files for
524 508 each status.
525 509 terseargs is string which is passed by the user as the argument to `--terse`
526 510 flag.
527 511
528 512 The function makes a tree of objects of dirnode class, and at each node it
529 513 stores the information required to know whether we can terse a certain
530 514 directory or not.
531 515 """
532 516 # the order matters here as that is used to produce final list
533 517 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
534 518
535 519 # checking the argument validity
536 520 for s in pycompat.bytestr(terseargs):
537 521 if s not in allst:
538 522 raise error.Abort(_("'%s' not recognized") % s)
539 523
540 524 # creating a dirnode object for the root of the repo
541 525 rootobj = dirnode('')
542 526 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
543 527 'ignored', 'removed')
544 528
545 529 tersedict = {}
546 530 for attrname in pstatus:
547 531 statuschar = attrname[0:1]
548 532 for f in getattr(statuslist, attrname):
549 533 rootobj.addfile(f, statuschar)
550 534 tersedict[statuschar] = []
551 535
552 536 # we won't be tersing the root dir, so add files in it
553 537 for st, fpath in rootobj.iterfilepaths():
554 538 tersedict[st].append(fpath)
555 539
556 540 # process each sub-directory and build tersedict
557 541 for subdir in rootobj.subdirs.values():
558 542 for st, f in subdir.tersewalk(terseargs):
559 543 tersedict[st].append(f)
560 544
561 545 tersedlist = []
562 546 for st in allst:
563 547 tersedict[st].sort()
564 548 tersedlist.append(tersedict[st])
565 549
566 550 return tersedlist
567 551
568 552 def _commentlines(raw):
569 553 '''Surround lines with a comment char and a new line'''
570 554 lines = raw.splitlines()
571 555 commentedlines = ['# %s' % line for line in lines]
572 556 return '\n'.join(commentedlines) + '\n'
573 557
574 558 def _conflictsmsg(repo):
575 559 # avoid merge cycle
576 560 from . import merge as mergemod
577 561 mergestate = mergemod.mergestate.read(repo)
578 562 if not mergestate.active():
579 563 return
580 564
581 565 m = scmutil.match(repo[None])
582 566 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
583 567 if unresolvedlist:
584 568 mergeliststr = '\n'.join(
585 569 [' %s' % util.pathto(repo.root, pycompat.getcwd(), path)
586 570 for path in unresolvedlist])
587 571 msg = _('''Unresolved merge conflicts:
588 572
589 573 %s
590 574
591 575 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
592 576 else:
593 577 msg = _('No unresolved merge conflicts.')
594 578
595 579 return _commentlines(msg)
596 580
597 581 def _helpmessage(continuecmd, abortcmd):
598 582 msg = _('To continue: %s\n'
599 583 'To abort: %s') % (continuecmd, abortcmd)
600 584 return _commentlines(msg)
601 585
602 586 def _rebasemsg():
603 587 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
604 588
605 589 def _histeditmsg():
606 590 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
607 591
608 592 def _unshelvemsg():
609 593 return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')
610 594
611 595 def _updatecleanmsg(dest=None):
612 596 warning = _('warning: this will discard uncommitted changes')
613 597 return 'hg update --clean %s (%s)' % (dest or '.', warning)
614 598
615 599 def _graftmsg():
616 600 # tweakdefaults requires `update` to have a rev hence the `.`
617 601 return _helpmessage('hg graft --continue', _updatecleanmsg())
618 602
619 603 def _mergemsg():
620 604 # tweakdefaults requires `update` to have a rev hence the `.`
621 605 return _helpmessage('hg commit', _updatecleanmsg())
622 606
623 607 def _bisectmsg():
624 608 msg = _('To mark the changeset good: hg bisect --good\n'
625 609 'To mark the changeset bad: hg bisect --bad\n'
626 610 'To abort: hg bisect --reset\n')
627 611 return _commentlines(msg)
628 612
629 613 def fileexistspredicate(filename):
630 614 return lambda repo: repo.vfs.exists(filename)
631 615
632 616 def _mergepredicate(repo):
633 617 return len(repo[None].parents()) > 1
634 618
635 619 STATES = (
636 620 # (state, predicate to detect states, helpful message function)
637 621 ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
638 622 ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
639 623 ('graft', fileexistspredicate('graftstate'), _graftmsg),
640 624 ('unshelve', fileexistspredicate('unshelverebasestate'), _unshelvemsg),
641 625 ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
642 626 # The merge state is part of a list that will be iterated over.
643 627 # They need to be last because some of the other unfinished states may also
644 628 # be in a merge or update state (eg. rebase, histedit, graft, etc).
645 629 # We want those to have priority.
646 630 ('merge', _mergepredicate, _mergemsg),
647 631 )
648 632
649 633 def _getrepostate(repo):
650 634 # experimental config: commands.status.skipstates
651 635 skip = set(repo.ui.configlist('commands', 'status.skipstates'))
652 636 for state, statedetectionpredicate, msgfn in STATES:
653 637 if state in skip:
654 638 continue
655 639 if statedetectionpredicate(repo):
656 640 return (state, statedetectionpredicate, msgfn)
657 641
658 642 def morestatus(repo, fm):
659 643 statetuple = _getrepostate(repo)
660 644 label = 'status.morestatus'
661 645 if statetuple:
662 646 fm.startitem()
663 647 state, statedetectionpredicate, helpfulmsg = statetuple
664 648 statemsg = _('The repository is in an unfinished *%s* state.') % state
665 649 fm.write('statemsg', '%s\n', _commentlines(statemsg), label=label)
666 650 conmsg = _conflictsmsg(repo)
667 651 if conmsg:
668 652 fm.write('conflictsmsg', '%s\n', conmsg, label=label)
669 653 if helpfulmsg:
670 654 helpmsg = helpfulmsg()
671 655 fm.write('helpmsg', '%s\n', helpmsg, label=label)
672 656
673 657 def findpossible(cmd, table, strict=False):
674 658 """
675 659 Return cmd -> (aliases, command table entry)
676 660 for each matching command.
677 661 Return debug commands (or their aliases) only if no normal command matches.
678 662 """
679 663 choice = {}
680 664 debugchoice = {}
681 665
682 666 if cmd in table:
683 667 # short-circuit exact matches, "log" alias beats "^log|history"
684 668 keys = [cmd]
685 669 else:
686 670 keys = table.keys()
687 671
688 672 allcmds = []
689 673 for e in keys:
690 674 aliases = parsealiases(e)
691 675 allcmds.extend(aliases)
692 676 found = None
693 677 if cmd in aliases:
694 678 found = cmd
695 679 elif not strict:
696 680 for a in aliases:
697 681 if a.startswith(cmd):
698 682 found = a
699 683 break
700 684 if found is not None:
701 685 if aliases[0].startswith("debug") or found.startswith("debug"):
702 686 debugchoice[found] = (aliases, table[e])
703 687 else:
704 688 choice[found] = (aliases, table[e])
705 689
706 690 if not choice and debugchoice:
707 691 choice = debugchoice
708 692
709 693 return choice, allcmds
710 694
711 695 def findcmd(cmd, table, strict=True):
712 696 """Return (aliases, command table entry) for command string."""
713 697 choice, allcmds = findpossible(cmd, table, strict)
714 698
715 699 if cmd in choice:
716 700 return choice[cmd]
717 701
718 702 if len(choice) > 1:
719 703 clist = sorted(choice)
720 704 raise error.AmbiguousCommand(cmd, clist)
721 705
722 706 if choice:
723 707 return list(choice.values())[0]
724 708
725 709 raise error.UnknownCommand(cmd, allcmds)
726 710
727 711 def changebranch(ui, repo, revs, label):
728 712 """ Change the branch name of given revs to label """
729 713
730 714 with repo.wlock(), repo.lock(), repo.transaction('branches'):
731 715 # abort in case of uncommitted merge or dirty wdir
732 716 bailifchanged(repo)
733 717 revs = scmutil.revrange(repo, revs)
734 718 if not revs:
735 719 raise error.Abort("empty revision set")
736 720 roots = repo.revs('roots(%ld)', revs)
737 721 if len(roots) > 1:
738 722 raise error.Abort(_("cannot change branch of non-linear revisions"))
739 723 rewriteutil.precheck(repo, revs, 'change branch of')
740 724
741 725 root = repo[roots.first()]
742 726 if not root.p1().branch() == label and label in repo.branchmap():
743 727 raise error.Abort(_("a branch of the same name already exists"))
744 728
745 729 if repo.revs('merge() and %ld', revs):
746 730 raise error.Abort(_("cannot change branch of a merge commit"))
747 731 if repo.revs('obsolete() and %ld', revs):
748 732 raise error.Abort(_("cannot change branch of an obsolete changeset"))
749 733
750 734 # make sure only topological heads
751 735 if repo.revs('heads(%ld) - head()', revs):
752 736 raise error.Abort(_("cannot change branch in middle of a stack"))
753 737
754 738 replacements = {}
755 739 # avoid import cycle mercurial.cmdutil -> mercurial.context ->
756 740 # mercurial.subrepo -> mercurial.cmdutil
757 741 from . import context
758 742 for rev in revs:
759 743 ctx = repo[rev]
760 744 oldbranch = ctx.branch()
761 745 # check if ctx has same branch
762 746 if oldbranch == label:
763 747 continue
764 748
765 749 def filectxfn(repo, newctx, path):
766 750 try:
767 751 return ctx[path]
768 752 except error.ManifestLookupError:
769 753 return None
770 754
771 755 ui.debug("changing branch of '%s' from '%s' to '%s'\n"
772 756 % (hex(ctx.node()), oldbranch, label))
773 757 extra = ctx.extra()
774 758 extra['branch_change'] = hex(ctx.node())
775 759 # While changing branch of set of linear commits, make sure that
776 760 # we base our commits on new parent rather than old parent which
777 761 # was obsoleted while changing the branch
778 762 p1 = ctx.p1().node()
779 763 p2 = ctx.p2().node()
780 764 if p1 in replacements:
781 765 p1 = replacements[p1][0]
782 766 if p2 in replacements:
783 767 p2 = replacements[p2][0]
784 768
785 769 mc = context.memctx(repo, (p1, p2),
786 770 ctx.description(),
787 771 ctx.files(),
788 772 filectxfn,
789 773 user=ctx.user(),
790 774 date=ctx.date(),
791 775 extra=extra,
792 776 branch=label)
793 777
794 778 commitphase = ctx.phase()
795 779 overrides = {('phases', 'new-commit'): commitphase}
796 780 with repo.ui.configoverride(overrides, 'branch-change'):
797 781 newnode = repo.commitctx(mc)
798 782
799 783 replacements[ctx.node()] = (newnode,)
800 784 ui.debug('new node id is %s\n' % hex(newnode))
801 785
802 786 # create obsmarkers and move bookmarks
803 787 scmutil.cleanupnodes(repo, replacements, 'branch-change')
804 788
805 789 # move the working copy too
806 790 wctx = repo[None]
807 791 # in-progress merge is a bit too complex for now.
808 792 if len(wctx.parents()) == 1:
809 793 newid = replacements.get(wctx.p1().node())
810 794 if newid is not None:
811 795 # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
812 796 # mercurial.cmdutil
813 797 from . import hg
814 798 hg.update(repo, newid[0], quietempty=True)
815 799
816 800 ui.status(_("changed branch on %d changesets\n") % len(replacements))
817 801
818 802 def findrepo(p):
819 803 while not os.path.isdir(os.path.join(p, ".hg")):
820 804 oldp, p = p, os.path.dirname(p)
821 805 if p == oldp:
822 806 return None
823 807
824 808 return p
825 809
826 810 def bailifchanged(repo, merge=True, hint=None):
827 811 """ enforce the precondition that working directory must be clean.
828 812
829 813 'merge' can be set to false if a pending uncommitted merge should be
830 814 ignored (such as when 'update --check' runs).
831 815
832 816 'hint' is the usual hint given to Abort exception.
833 817 """
834 818
835 819 if merge and repo.dirstate.p2() != nullid:
836 820 raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
837 821 modified, added, removed, deleted = repo.status()[:4]
838 822 if modified or added or removed or deleted:
839 823 raise error.Abort(_('uncommitted changes'), hint=hint)
840 824 ctx = repo[None]
841 825 for s in sorted(ctx.substate):
842 826 ctx.sub(s).bailifchanged(hint=hint)
843 827
844 828 def logmessage(ui, opts):
845 829 """ get the log message according to -m and -l option """
846 830 message = opts.get('message')
847 831 logfile = opts.get('logfile')
848 832
849 833 if message and logfile:
850 834 raise error.Abort(_('options --message and --logfile are mutually '
851 835 'exclusive'))
852 836 if not message and logfile:
853 837 try:
854 838 if isstdiofilename(logfile):
855 839 message = ui.fin.read()
856 840 else:
857 841 message = '\n'.join(util.readfile(logfile).splitlines())
858 842 except IOError as inst:
859 843 raise error.Abort(_("can't read commit message '%s': %s") %
860 844 (logfile, encoding.strtolocal(inst.strerror)))
861 845 return message
862 846
863 847 def mergeeditform(ctxorbool, baseformname):
864 848 """return appropriate editform name (referencing a committemplate)
865 849
866 850 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
867 851 merging is committed.
868 852
869 853 This returns baseformname with '.merge' appended if it is a merge,
870 854 otherwise '.normal' is appended.
871 855 """
872 856 if isinstance(ctxorbool, bool):
873 857 if ctxorbool:
874 858 return baseformname + ".merge"
875 859 elif 1 < len(ctxorbool.parents()):
876 860 return baseformname + ".merge"
877 861
878 862 return baseformname + ".normal"
879 863
880 864 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
881 865 editform='', **opts):
882 866 """get appropriate commit message editor according to '--edit' option
883 867
884 868 'finishdesc' is a function to be called with edited commit message
885 869 (= 'description' of the new changeset) just after editing, but
886 870 before checking emptiness. It should return the actual text to be
887 871 stored into history. This allows the description to be changed before
888 872 storing.
889 873
890 874 'extramsg' is an extra message to be shown in the editor instead of
891 875 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
892 876 is automatically added.
893 877
894 878 'editform' is a dot-separated list of names, to distinguish
895 879 the purpose of commit text editing.
896 880
897 881 'getcommiteditor' returns 'commitforceeditor' regardless of
898 882 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
899 883 they are specific for usage in MQ.
900 884 """
901 885 if edit or finishdesc or extramsg:
902 886 return lambda r, c, s: commitforceeditor(r, c, s,
903 887 finishdesc=finishdesc,
904 888 extramsg=extramsg,
905 889 editform=editform)
906 890 elif editform:
907 891 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
908 892 else:
909 893 return commiteditor
910 894
911 895 def makefilename(repo, pat, node, desc=None,
912 896 total=None, seqno=None, revwidth=None, pathname=None):
913 897 node_expander = {
914 898 'H': lambda: hex(node),
915 899 'R': lambda: '%d' % repo.changelog.rev(node),
916 900 'h': lambda: short(node),
917 901 'm': lambda: re.sub('[^\w]', '_', desc or '')
918 902 }
919 903 expander = {
920 904 '%': lambda: '%',
921 905 'b': lambda: os.path.basename(repo.root),
922 906 }
923 907
924 908 try:
925 909 if node:
926 910 expander.update(node_expander)
927 911 if node:
928 912 expander['r'] = (lambda:
929 913 ('%d' % repo.changelog.rev(node)).zfill(revwidth or 0))
930 914 if total is not None:
931 915 expander['N'] = lambda: '%d' % total
932 916 if seqno is not None:
933 917 expander['n'] = lambda: '%d' % seqno
934 918 if total is not None and seqno is not None:
935 919 expander['n'] = (lambda: ('%d' % seqno).zfill(len('%d' % total)))
936 920 if pathname is not None:
937 921 expander['s'] = lambda: os.path.basename(pathname)
938 922 expander['d'] = lambda: os.path.dirname(pathname) or '.'
939 923 expander['p'] = lambda: pathname
940 924
941 925 newname = []
942 926 patlen = len(pat)
943 927 i = 0
944 928 while i < patlen:
945 929 c = pat[i:i + 1]
946 930 if c == '%':
947 931 i += 1
948 932 c = pat[i:i + 1]
949 933 c = expander[c]()
950 934 newname.append(c)
951 935 i += 1
952 936 return ''.join(newname)
953 937 except KeyError as inst:
954 938 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
955 939 inst.args[0])
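# Illustrative expansion (editorial note, values hypothetical): with
# pat = 'export-%b-%R.%h.patch' in a repository rooted at /path/to/myrepo,
# '%b' becomes 'myrepo', '%R' the changeset's local revision number and '%h'
# its short hex node, e.g. 'export-myrepo-42.1a2b3c4d5e6f.patch'.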
956 940
957 941 def isstdiofilename(pat):
958 942 """True if the given pat looks like a filename denoting stdin/stdout"""
959 943 return not pat or pat == '-'
960 944
961 945 class _unclosablefile(object):
962 946 def __init__(self, fp):
963 947 self._fp = fp
964 948
965 949 def close(self):
966 950 pass
967 951
968 952 def __iter__(self):
969 953 return iter(self._fp)
970 954
971 955 def __getattr__(self, attr):
972 956 return getattr(self._fp, attr)
973 957
974 958 def __enter__(self):
975 959 return self
976 960
977 961 def __exit__(self, exc_type, exc_value, exc_tb):
978 962 pass
979 963
980 964 def makefileobj(repo, pat, node=None, desc=None, total=None,
981 965 seqno=None, revwidth=None, mode='wb', modemap=None,
982 966 pathname=None):
983 967
984 968 writable = mode not in ('r', 'rb')
985 969
986 970 if isstdiofilename(pat):
987 971 if writable:
988 972 fp = repo.ui.fout
989 973 else:
990 974 fp = repo.ui.fin
991 975 return _unclosablefile(fp)
992 976 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
993 977 if modemap is not None:
994 978 mode = modemap.get(fn, mode)
995 979 if mode == 'wb':
996 980 modemap[fn] = 'ab'
997 981 return open(fn, mode)
998 982
999 983 def openrevlog(repo, cmd, file_, opts):
1000 984 """opens the changelog, manifest, a filelog or a given revlog"""
1001 985 cl = opts['changelog']
1002 986 mf = opts['manifest']
1003 987 dir = opts['dir']
1004 988 msg = None
1005 989 if cl and mf:
1006 990 msg = _('cannot specify --changelog and --manifest at the same time')
1007 991 elif cl and dir:
1008 992 msg = _('cannot specify --changelog and --dir at the same time')
1009 993 elif cl or mf or dir:
1010 994 if file_:
1011 995 msg = _('cannot specify filename with --changelog or --manifest')
1012 996 elif not repo:
1013 997 msg = _('cannot specify --changelog or --manifest or --dir '
1014 998 'without a repository')
1015 999 if msg:
1016 1000 raise error.Abort(msg)
1017 1001
1018 1002 r = None
1019 1003 if repo:
1020 1004 if cl:
1021 1005 r = repo.unfiltered().changelog
1022 1006 elif dir:
1023 1007 if 'treemanifest' not in repo.requirements:
1024 1008 raise error.Abort(_("--dir can only be used on repos with "
1025 1009 "treemanifest enabled"))
1026 1010 dirlog = repo.manifestlog._revlog.dirlog(dir)
1027 1011 if len(dirlog):
1028 1012 r = dirlog
1029 1013 elif mf:
1030 1014 r = repo.manifestlog._revlog
1031 1015 elif file_:
1032 1016 filelog = repo.file(file_)
1033 1017 if len(filelog):
1034 1018 r = filelog
1035 1019 if not r:
1036 1020 if not file_:
1037 1021 raise error.CommandError(cmd, _('invalid arguments'))
1038 1022 if not os.path.isfile(file_):
1039 1023 raise error.Abort(_("revlog '%s' not found") % file_)
1040 1024 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
1041 1025 file_[:-2] + ".i")
1042 1026 return r
1043 1027
1044 1028 def copy(ui, repo, pats, opts, rename=False):
1045 1029 # called with the repo lock held
1046 1030 #
1047 1031 # hgsep => pathname that uses "/" to separate directories
1048 1032 # ossep => pathname that uses os.sep to separate directories
1049 1033 cwd = repo.getcwd()
1050 1034 targets = {}
1051 1035 after = opts.get("after")
1052 1036 dryrun = opts.get("dry_run")
1053 1037 wctx = repo[None]
1054 1038
1055 1039 def walkpat(pat):
1056 1040 srcs = []
1057 1041 if after:
1058 1042 badstates = '?'
1059 1043 else:
1060 1044 badstates = '?r'
1061 1045 m = scmutil.match(wctx, [pat], opts, globbed=True)
1062 1046 for abs in wctx.walk(m):
1063 1047 state = repo.dirstate[abs]
1064 1048 rel = m.rel(abs)
1065 1049 exact = m.exact(abs)
1066 1050 if state in badstates:
1067 1051 if exact and state == '?':
1068 1052 ui.warn(_('%s: not copying - file is not managed\n') % rel)
1069 1053 if exact and state == 'r':
1070 1054 ui.warn(_('%s: not copying - file has been marked for'
1071 1055 ' remove\n') % rel)
1072 1056 continue
1073 1057 # abs: hgsep
1074 1058 # rel: ossep
1075 1059 srcs.append((abs, rel, exact))
1076 1060 return srcs
1077 1061
1078 1062 # abssrc: hgsep
1079 1063 # relsrc: ossep
1080 1064 # otarget: ossep
1081 1065 def copyfile(abssrc, relsrc, otarget, exact):
1082 1066 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
1083 1067 if '/' in abstarget:
1084 1068 # We cannot normalize abstarget itself, this would prevent
1085 1069 # case only renames, like a => A.
1086 1070 abspath, absname = abstarget.rsplit('/', 1)
1087 1071 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
1088 1072 reltarget = repo.pathto(abstarget, cwd)
1089 1073 target = repo.wjoin(abstarget)
1090 1074 src = repo.wjoin(abssrc)
1091 1075 state = repo.dirstate[abstarget]
1092 1076
1093 1077 scmutil.checkportable(ui, abstarget)
1094 1078
1095 1079 # check for collisions
1096 1080 prevsrc = targets.get(abstarget)
1097 1081 if prevsrc is not None:
1098 1082 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1099 1083 (reltarget, repo.pathto(abssrc, cwd),
1100 1084 repo.pathto(prevsrc, cwd)))
1101 1085 return
1102 1086
1103 1087 # check for overwrites
1104 1088 exists = os.path.lexists(target)
1105 1089 samefile = False
1106 1090 if exists and abssrc != abstarget:
1107 1091 if (repo.dirstate.normalize(abssrc) ==
1108 1092 repo.dirstate.normalize(abstarget)):
1109 1093 if not rename:
1110 1094 ui.warn(_("%s: can't copy - same file\n") % reltarget)
1111 1095 return
1112 1096 exists = False
1113 1097 samefile = True
1114 1098
1115 1099 if not after and exists or after and state in 'mn':
1116 1100 if not opts['force']:
1117 1101 if state in 'mn':
1118 1102 msg = _('%s: not overwriting - file already committed\n')
1119 1103 if after:
1120 1104 flags = '--after --force'
1121 1105 else:
1122 1106 flags = '--force'
1123 1107 if rename:
1124 1108 hint = _('(hg rename %s to replace the file by '
1125 1109 'recording a rename)\n') % flags
1126 1110 else:
1127 1111 hint = _('(hg copy %s to replace the file by '
1128 1112 'recording a copy)\n') % flags
1129 1113 else:
1130 1114 msg = _('%s: not overwriting - file exists\n')
1131 1115 if rename:
1132 1116 hint = _('(hg rename --after to record the rename)\n')
1133 1117 else:
1134 1118 hint = _('(hg copy --after to record the copy)\n')
1135 1119 ui.warn(msg % reltarget)
1136 1120 ui.warn(hint)
1137 1121 return
1138 1122
1139 1123 if after:
1140 1124 if not exists:
1141 1125 if rename:
1142 1126 ui.warn(_('%s: not recording move - %s does not exist\n') %
1143 1127 (relsrc, reltarget))
1144 1128 else:
1145 1129 ui.warn(_('%s: not recording copy - %s does not exist\n') %
1146 1130 (relsrc, reltarget))
1147 1131 return
1148 1132 elif not dryrun:
1149 1133 try:
1150 1134 if exists:
1151 1135 os.unlink(target)
1152 1136 targetdir = os.path.dirname(target) or '.'
1153 1137 if not os.path.isdir(targetdir):
1154 1138 os.makedirs(targetdir)
1155 1139 if samefile:
1156 1140 tmp = target + "~hgrename"
1157 1141 os.rename(src, tmp)
1158 1142 os.rename(tmp, target)
1159 1143 else:
1160 1144 util.copyfile(src, target)
1161 1145 srcexists = True
1162 1146 except IOError as inst:
1163 1147 if inst.errno == errno.ENOENT:
1164 1148 ui.warn(_('%s: deleted in working directory\n') % relsrc)
1165 1149 srcexists = False
1166 1150 else:
1167 1151 ui.warn(_('%s: cannot copy - %s\n') %
1168 1152 (relsrc, encoding.strtolocal(inst.strerror)))
1169 1153 return True # report a failure
1170 1154
1171 1155 if ui.verbose or not exact:
1172 1156 if rename:
1173 1157 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
1174 1158 else:
1175 1159 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1176 1160
1177 1161 targets[abstarget] = abssrc
1178 1162
1179 1163 # fix up dirstate
1180 1164 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
1181 1165 dryrun=dryrun, cwd=cwd)
1182 1166 if rename and not dryrun:
1183 1167 if not after and srcexists and not samefile:
1184 1168 repo.wvfs.unlinkpath(abssrc)
1185 1169 wctx.forget([abssrc])
1186 1170
1187 1171 # pat: ossep
1188 1172 # dest: ossep
1189 1173 # srcs: list of (hgsep, hgsep, ossep, bool)
1190 1174 # return: function that takes hgsep and returns ossep
1191 1175 def targetpathfn(pat, dest, srcs):
1192 1176 if os.path.isdir(pat):
1193 1177 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1194 1178 abspfx = util.localpath(abspfx)
1195 1179 if destdirexists:
1196 1180 striplen = len(os.path.split(abspfx)[0])
1197 1181 else:
1198 1182 striplen = len(abspfx)
1199 1183 if striplen:
1200 1184 striplen += len(pycompat.ossep)
1201 1185 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
1202 1186 elif destdirexists:
1203 1187 res = lambda p: os.path.join(dest,
1204 1188 os.path.basename(util.localpath(p)))
1205 1189 else:
1206 1190 res = lambda p: dest
1207 1191 return res
1208 1192
1209 1193 # pat: ossep
1210 1194 # dest: ossep
1211 1195 # srcs: list of (hgsep, hgsep, ossep, bool)
1212 1196 # return: function that takes hgsep and returns ossep
1213 1197 def targetpathafterfn(pat, dest, srcs):
1214 1198 if matchmod.patkind(pat):
1215 1199 # a mercurial pattern
1216 1200 res = lambda p: os.path.join(dest,
1217 1201 os.path.basename(util.localpath(p)))
1218 1202 else:
1219 1203 abspfx = pathutil.canonpath(repo.root, cwd, pat)
1220 1204 if len(abspfx) < len(srcs[0][0]):
1221 1205 # A directory. Either the target path contains the last
1222 1206 # component of the source path or it does not.
1223 1207 def evalpath(striplen):
1224 1208 score = 0
1225 1209 for s in srcs:
1226 1210 t = os.path.join(dest, util.localpath(s[0])[striplen:])
1227 1211 if os.path.lexists(t):
1228 1212 score += 1
1229 1213 return score
1230 1214
1231 1215 abspfx = util.localpath(abspfx)
1232 1216 striplen = len(abspfx)
1233 1217 if striplen:
1234 1218 striplen += len(pycompat.ossep)
1235 1219 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1236 1220 score = evalpath(striplen)
1237 1221 striplen1 = len(os.path.split(abspfx)[0])
1238 1222 if striplen1:
1239 1223 striplen1 += len(pycompat.ossep)
1240 1224 if evalpath(striplen1) > score:
1241 1225 striplen = striplen1
1242 1226 res = lambda p: os.path.join(dest,
1243 1227 util.localpath(p)[striplen:])
1244 1228 else:
1245 1229 # a file
1246 1230 if destdirexists:
1247 1231 res = lambda p: os.path.join(dest,
1248 1232 os.path.basename(util.localpath(p)))
1249 1233 else:
1250 1234 res = lambda p: dest
1251 1235 return res
1252 1236
1253 1237 pats = scmutil.expandpats(pats)
1254 1238 if not pats:
1255 1239 raise error.Abort(_('no source or destination specified'))
1256 1240 if len(pats) == 1:
1257 1241 raise error.Abort(_('no destination specified'))
1258 1242 dest = pats.pop()
1259 1243 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
1260 1244 if not destdirexists:
1261 1245 if len(pats) > 1 or matchmod.patkind(pats[0]):
1262 1246 raise error.Abort(_('with multiple sources, destination must be an '
1263 1247 'existing directory'))
1264 1248 if util.endswithsep(dest):
1265 1249 raise error.Abort(_('destination %s is not a directory') % dest)
1266 1250
1267 1251 tfn = targetpathfn
1268 1252 if after:
1269 1253 tfn = targetpathafterfn
1270 1254 copylist = []
1271 1255 for pat in pats:
1272 1256 srcs = walkpat(pat)
1273 1257 if not srcs:
1274 1258 continue
1275 1259 copylist.append((tfn(pat, dest, srcs), srcs))
1276 1260 if not copylist:
1277 1261 raise error.Abort(_('no files to copy'))
1278 1262
1279 1263 errors = 0
1280 1264 for targetpath, srcs in copylist:
1281 1265 for abssrc, relsrc, exact in srcs:
1282 1266 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
1283 1267 errors += 1
1284 1268
1285 1269 if errors:
1286 1270 ui.warn(_('(consider using --after)\n'))
1287 1271
1288 1272 return errors != 0
1289 1273
1290 1274 ## facility to let extensions process additional data into an import patch
1291 1275 # list of identifiers to be executed in order
1292 1276 extrapreimport = [] # run before commit
1293 1277 extrapostimport = [] # run after commit
1294 1278 # mapping from identifier to actual import function
1295 1279 #
1296 1280 # 'preimport' are run before the commit is made and are provided the following
1297 1281 # arguments:
1298 1282 # - repo: the localrepository instance,
1299 1283 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
1300 1284 # - extra: the future extra dictionary of the changeset, please mutate it,
1301 1285 # - opts: the import options.
1302 1286 # XXX ideally, we would just pass a ctx ready to be computed; that would allow
1303 1287 # mutation of the in-memory commit and more. Feel free to rework the code to get
1304 1288 # there.
1305 1289 extrapreimportmap = {}
1306 1290 # 'postimport' are run after the commit is made and are provided the following
1307 1291 # argument:
1308 1292 # - ctx: the changectx created by import.
1309 1293 extrapostimportmap = {}
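A minimal sketch of how an extension might use this pre-import facility, assuming a hypothetical extension 'myext' that copies a hypothetical 'origin' patch header field into the changeset extra:

    from mercurial import cmdutil

    def recordorigin(repo, patchdata, extra, opts):
        # copy a field parsed from the patch header into the changeset extra
        origin = patchdata.get('origin')
        if origin:
            extra['myext_origin'] = origin

    cmdutil.extrapreimport.append('myext')
    cmdutil.extrapreimportmap['myext'] = recordorigin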
1310 1294
1311 1295 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
1312 1296 """Utility function used by commands.import to import a single patch
1313 1297
1314 1298 This function is explicitly defined here to help the evolve extension to
1315 1299 wrap this part of the import logic.
1316 1300
1317 1301 The API is currently a bit ugly because it is a simple code translation from
1318 1302 the import command. Feel free to make it better.
1319 1303
1320 1304 :hunk: a patch (as a binary string)
1321 1305 :parents: nodes that will be the parents of the created commit
1322 1306 :opts: the full dict of options passed to the import command
1323 1307 :msgs: list to save commit message to.
1324 1308 (used in case we need to save it when failing)
1325 1309 :updatefunc: a function that updates a repo to a given node
1326 1310 updatefunc(<repo>, <node>)
1327 1311 """
1328 1312 # avoid cycle context -> subrepo -> cmdutil
1329 1313 from . import context
1330 1314 extractdata = patch.extract(ui, hunk)
1331 1315 tmpname = extractdata.get('filename')
1332 1316 message = extractdata.get('message')
1333 1317 user = opts.get('user') or extractdata.get('user')
1334 1318 date = opts.get('date') or extractdata.get('date')
1335 1319 branch = extractdata.get('branch')
1336 1320 nodeid = extractdata.get('nodeid')
1337 1321 p1 = extractdata.get('p1')
1338 1322 p2 = extractdata.get('p2')
1339 1323
1340 1324 nocommit = opts.get('no_commit')
1341 1325 importbranch = opts.get('import_branch')
1342 1326 update = not opts.get('bypass')
1343 1327 strip = opts["strip"]
1344 1328 prefix = opts["prefix"]
1345 1329 sim = float(opts.get('similarity') or 0)
1346 1330 if not tmpname:
1347 1331 return (None, None, False)
1348 1332
1349 1333 rejects = False
1350 1334
1351 1335 try:
1352 1336 cmdline_message = logmessage(ui, opts)
1353 1337 if cmdline_message:
1354 1338 # pickup the cmdline msg
1355 1339 message = cmdline_message
1356 1340 elif message:
1357 1341 # pickup the patch msg
1358 1342 message = message.strip()
1359 1343 else:
1360 1344 # launch the editor
1361 1345 message = None
1362 1346 ui.debug('message:\n%s\n' % message)
1363 1347
1364 1348 if len(parents) == 1:
1365 1349 parents.append(repo[nullid])
1366 1350 if opts.get('exact'):
1367 1351 if not nodeid or not p1:
1368 1352 raise error.Abort(_('not a Mercurial patch'))
1369 1353 p1 = repo[p1]
1370 1354 p2 = repo[p2 or nullid]
1371 1355 elif p2:
1372 1356 try:
1373 1357 p1 = repo[p1]
1374 1358 p2 = repo[p2]
1375 1359 # Without any options, consider p2 only if the
1376 1360 # patch is being applied on top of the recorded
1377 1361 # first parent.
1378 1362 if p1 != parents[0]:
1379 1363 p1 = parents[0]
1380 1364 p2 = repo[nullid]
1381 1365 except error.RepoError:
1382 1366 p1, p2 = parents
1383 1367 if p2.node() == nullid:
1384 1368 ui.warn(_("warning: import the patch as a normal revision\n"
1385 1369 "(use --exact to import the patch as a merge)\n"))
1386 1370 else:
1387 1371 p1, p2 = parents
1388 1372
1389 1373 n = None
1390 1374 if update:
1391 1375 if p1 != parents[0]:
1392 1376 updatefunc(repo, p1.node())
1393 1377 if p2 != parents[1]:
1394 1378 repo.setparents(p1.node(), p2.node())
1395 1379
1396 1380 if opts.get('exact') or importbranch:
1397 1381 repo.dirstate.setbranch(branch or 'default')
1398 1382
1399 1383 partial = opts.get('partial', False)
1400 1384 files = set()
1401 1385 try:
1402 1386 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
1403 1387 files=files, eolmode=None, similarity=sim / 100.0)
1404 1388 except error.PatchError as e:
1405 1389 if not partial:
1406 1390 raise error.Abort(str(e))
1407 1391 if partial:
1408 1392 rejects = True
1409 1393
1410 1394 files = list(files)
1411 1395 if nocommit:
1412 1396 if message:
1413 1397 msgs.append(message)
1414 1398 else:
1415 1399 if opts.get('exact') or p2:
1416 1400 # If you got here, you either use --force and know what
1417 1401 # you are doing or used --exact or a merge patch while
1418 1402 # being updated to its first parent.
1419 1403 m = None
1420 1404 else:
1421 1405 m = scmutil.matchfiles(repo, files or [])
1422 1406 editform = mergeeditform(repo[None], 'import.normal')
1423 1407 if opts.get('exact'):
1424 1408 editor = None
1425 1409 else:
1426 1410 editor = getcommiteditor(editform=editform,
1427 1411 **pycompat.strkwargs(opts))
1428 1412 extra = {}
1429 1413 for idfunc in extrapreimport:
1430 1414 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
1431 1415 overrides = {}
1432 1416 if partial:
1433 1417 overrides[('ui', 'allowemptycommit')] = True
1434 1418 with repo.ui.configoverride(overrides, 'import'):
1435 1419 n = repo.commit(message, user,
1436 1420 date, match=m,
1437 1421 editor=editor, extra=extra)
1438 1422 for idfunc in extrapostimport:
1439 1423 extrapostimportmap[idfunc](repo[n])
1440 1424 else:
1441 1425 if opts.get('exact') or importbranch:
1442 1426 branch = branch or 'default'
1443 1427 else:
1444 1428 branch = p1.branch()
1445 1429 store = patch.filestore()
1446 1430 try:
1447 1431 files = set()
1448 1432 try:
1449 1433 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
1450 1434 files, eolmode=None)
1451 1435 except error.PatchError as e:
1452 1436 raise error.Abort(str(e))
1453 1437 if opts.get('exact'):
1454 1438 editor = None
1455 1439 else:
1456 1440 editor = getcommiteditor(editform='import.bypass')
1457 1441 memctx = context.memctx(repo, (p1.node(), p2.node()),
1458 1442 message,
1459 1443 files=files,
1460 1444 filectxfn=store,
1461 1445 user=user,
1462 1446 date=date,
1463 1447 branch=branch,
1464 1448 editor=editor)
1465 1449 n = memctx.commit()
1466 1450 finally:
1467 1451 store.close()
1468 1452 if opts.get('exact') and nocommit:
1469 1453 # --exact with --no-commit is still useful in that it does merge
1470 1454 # and branch bits
1471 1455 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1472 1456 elif opts.get('exact') and hex(n) != nodeid:
1473 1457 raise error.Abort(_('patch is damaged or loses information'))
1474 1458 msg = _('applied to working directory')
1475 1459 if n:
1476 1460 # i18n: refers to a short changeset id
1477 1461 msg = _('created %s') % short(n)
1478 1462 return (msg, n, rejects)
1479 1463 finally:
1480 1464 os.unlink(tmpname)
1481 1465
1482 1466 # facility to let extensions include additional data in an exported patch
1483 1467 # list of identifiers to be executed in order
1484 1468 extraexport = []
1485 1469 # mapping from identifier to actual export function
1486 1470 # the function has to return a string to be added to the header, or None
1487 1471 # it is given two arguments: (sequencenumber, changectx)
1488 1472 extraexportmap = {}
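A minimal sketch of how an extension might add a header through this facility; the identifier 'myext' and the header text are hypothetical, and the returned string is emitted by _exportsingle below as '# <text>':

    from mercurial import cmdutil

    def _seqnoheader(seqno, ctx):
        # returning None instead would omit the header for this changeset
        return "Export-Sequence %d of %s" % (seqno, ctx)

    cmdutil.extraexport.append('myext')
    cmdutil.extraexportmap['myext'] = _seqnoheader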
1489 1473
1490 1474 def _exportsingle(repo, ctx, match, switch_parent, rev, seqno, write, diffopts):
1491 1475 node = scmutil.binnode(ctx)
1492 1476 parents = [p.node() for p in ctx.parents() if p]
1493 1477 branch = ctx.branch()
1494 1478 if switch_parent:
1495 1479 parents.reverse()
1496 1480
1497 1481 if parents:
1498 1482 prev = parents[0]
1499 1483 else:
1500 1484 prev = nullid
1501 1485
1502 1486 write("# HG changeset patch\n")
1503 1487 write("# User %s\n" % ctx.user())
1504 1488 write("# Date %d %d\n" % ctx.date())
1505 1489 write("# %s\n" % util.datestr(ctx.date()))
1506 1490 if branch and branch != 'default':
1507 1491 write("# Branch %s\n" % branch)
1508 1492 write("# Node ID %s\n" % hex(node))
1509 1493 write("# Parent %s\n" % hex(prev))
1510 1494 if len(parents) > 1:
1511 1495 write("# Parent %s\n" % hex(parents[1]))
1512 1496
1513 1497 for headerid in extraexport:
1514 1498 header = extraexportmap[headerid](seqno, ctx)
1515 1499 if header is not None:
1516 1500 write('# %s\n' % header)
1517 1501 write(ctx.description().rstrip())
1518 1502 write("\n\n")
1519 1503
1520 1504 for chunk, label in patch.diffui(repo, prev, node, match, opts=diffopts):
1521 1505 write(chunk, label=label)
1522 1506
1523 1507 def export(repo, revs, fntemplate='hg-%h.patch', fp=None, switch_parent=False,
1524 1508 opts=None, match=None):
1525 1509 '''export changesets as hg patches
1526 1510
1527 1511 Args:
1528 1512 repo: The repository from which we're exporting revisions.
1529 1513 revs: A list of revisions to export as revision numbers.
1530 1514 fntemplate: An optional string to use for generating patch file names.
1531 1515 fp: An optional file-like object to which patches should be written.
1532 1516 switch_parent: If True, show diffs against second parent when not nullid.
1533 1517 Default is false, which always shows diff against p1.
1534 1518 opts: diff options to use for generating the patch.
1535 1519 match: If specified, only export changes to files matching this matcher.
1536 1520
1537 1521 Returns:
1538 1522 Nothing.
1539 1523
1540 1524 Side Effect:
1541 1525 "HG Changeset Patch" data is emitted to one of the following
1542 1526 destinations:
1543 1527 fp is specified: All revs are written to the specified
1544 1528 file-like object.
1545 1529 fntemplate specified: Each rev is written to a unique file named using
1546 1530 the given template.
1547 1531 Neither fp nor template specified: All revs written to repo.ui.write()
1548 1532 '''
1549 1533
1550 1534 total = len(revs)
1551 1535 revwidth = max(len(str(rev)) for rev in revs)
1552 1536 filemode = {}
1553 1537
1554 1538 write = None
1555 1539 dest = '<unnamed>'
1556 1540 if fp:
1557 1541 dest = getattr(fp, 'name', dest)
1558 1542 def write(s, **kw):
1559 1543 fp.write(s)
1560 1544 elif not fntemplate:
1561 1545 write = repo.ui.write
1562 1546
1563 1547 for seqno, rev in enumerate(revs, 1):
1564 1548 ctx = repo[rev]
1565 1549 fo = None
1566 1550 if not fp and fntemplate:
1567 1551 desc_lines = ctx.description().rstrip().split('\n')
1568 1552 desc = desc_lines[0] # Commit always has a first line.
1569 1553 fo = makefileobj(repo, fntemplate, ctx.node(), desc=desc,
1570 1554 total=total, seqno=seqno, revwidth=revwidth,
1571 1555 mode='wb', modemap=filemode)
1572 1556 dest = fo.name
1573 1557 def write(s, **kw):
1574 1558 fo.write(s)
1575 1559 if not dest.startswith('<'):
1576 1560 repo.ui.note("%s\n" % dest)
1577 1561 _exportsingle(
1578 1562 repo, ctx, match, switch_parent, rev, seqno, write, opts)
1579 1563 if fo is not None:
1580 1564 fo.close()
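A minimal usage sketch for export(), assuming an existing repo object: write the tip revision to a file named after its short hash, using default diff options.

    from mercurial import cmdutil, patch

    cmdutil.export(repo, [repo['tip'].rev()],
                   fntemplate='hg-%h.patch',
                   opts=patch.diffallopts(repo.ui))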
1581 1565
1582 1566 class _regrettablereprbytes(bytes):
1583 1567 """Bytes subclass that makes the repr the same on Python 3 as Python 2.
1584 1568
1585 1569 This is a huge hack.
1586 1570 """
1587 1571 def __repr__(self):
1588 1572 return repr(pycompat.sysstr(self))
1589 1573
1590 1574 def _maybebytestr(v):
1591 1575 if pycompat.ispy3 and isinstance(v, bytes):
1592 1576 return _regrettablereprbytes(v)
1593 1577 return v
1594 1578
1595 1579 def showmarker(fm, marker, index=None):
1596 1580 """utility function to display obsolescence marker in a readable way
1597 1581
1598 1582 To be used by debug function."""
1599 1583 if index is not None:
1600 1584 fm.write('index', '%i ', index)
1601 1585 fm.write('prednode', '%s ', hex(marker.prednode()))
1602 1586 succs = marker.succnodes()
1603 1587 fm.condwrite(succs, 'succnodes', '%s ',
1604 1588 fm.formatlist(map(hex, succs), name='node'))
1605 1589 fm.write('flag', '%X ', marker.flags())
1606 1590 parents = marker.parentnodes()
1607 1591 if parents is not None:
1608 1592 fm.write('parentnodes', '{%s} ',
1609 1593 fm.formatlist(map(hex, parents), name='node', sep=', '))
1610 1594 fm.write('date', '(%s) ', fm.formatdate(marker.date()))
1611 1595 meta = marker.metadata().copy()
1612 1596 meta.pop('date', None)
1613 1597 smeta = {_maybebytestr(k): _maybebytestr(v) for k, v in meta.iteritems()}
1614 1598 fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
1615 1599 fm.plain('\n')
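A minimal usage sketch, assuming existing ui and repo objects: render every obsolescence marker through a formatter, the way debugobsolete does.

    from mercurial import cmdutil, obsutil

    fm = ui.formatter('debugobsolete', {})
    for i, marker in enumerate(obsutil.getmarkers(repo)):
        cmdutil.showmarker(fm, marker, index=i)
    fm.end()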
1616 1600
1617 1601 def finddate(ui, repo, date):
1618 1602 """Find the tipmost changeset that matches the given date spec"""
1619 1603
1620 1604 df = util.matchdate(date)
1621 1605 m = scmutil.matchall(repo)
1622 1606 results = {}
1623 1607
1624 1608 def prep(ctx, fns):
1625 1609 d = ctx.date()
1626 1610 if df(d[0]):
1627 1611 results[ctx.rev()] = d
1628 1612
1629 1613 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1630 1614 rev = ctx.rev()
1631 1615 if rev in results:
1632 1616 ui.status(_("found revision %s from %s\n") %
1633 1617 (rev, util.datestr(results[rev])))
1634 1618 return '%d' % rev
1635 1619
1636 1620 raise error.Abort(_("revision matching date not found"))
1637 1621
1638 1622 def increasingwindows(windowsize=8, sizelimit=512):
1639 1623 while True:
1640 1624 yield windowsize
1641 1625 if windowsize < sizelimit:
1642 1626 windowsize *= 2
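For illustration, this generator yields window sizes that double from 8 up to the 512 cap and then repeat indefinitely:

    from mercurial import cmdutil

    sizes = []
    for windowsize in cmdutil.increasingwindows():
        sizes.append(windowsize)
        if len(sizes) == 9:
            break
    # sizes == [8, 16, 32, 64, 128, 256, 512, 512, 512]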
1643 1627
1644 1628 def _walkrevs(repo, opts):
1645 1629 # Default --rev value depends on --follow but --follow behavior
1646 1630 # depends on revisions resolved from --rev...
1647 1631 follow = opts.get('follow') or opts.get('follow_first')
1648 1632 if opts.get('rev'):
1649 1633 revs = scmutil.revrange(repo, opts['rev'])
1650 1634 elif follow and repo.dirstate.p1() == nullid:
1651 1635 revs = smartset.baseset()
1652 1636 elif follow:
1653 1637 revs = repo.revs('reverse(:.)')
1654 1638 else:
1655 1639 revs = smartset.spanset(repo)
1656 1640 revs.reverse()
1657 1641 return revs
1658 1642
1659 1643 class FileWalkError(Exception):
1660 1644 pass
1661 1645
1662 1646 def walkfilerevs(repo, match, follow, revs, fncache):
1663 1647 '''Walks the file history for the matched files.
1664 1648
1665 1649 Returns the changeset revs that are involved in the file history.
1666 1650
1667 1651 Throws FileWalkError if the file history can't be walked using
1668 1652 filelogs alone.
1669 1653 '''
1670 1654 wanted = set()
1671 1655 copies = []
1672 1656 minrev, maxrev = min(revs), max(revs)
1673 1657 def filerevgen(filelog, last):
1674 1658 """
1675 1659 Only files, no patterns. Check the history of each file.
1676 1660
1677 1661 Examines filelog entries within the [minrev, maxrev] linkrev range
1678 1662 Returns an iterator yielding (linkrev, parentlinkrevs, copied)
1679 1663 tuples in backwards order
1680 1664 """
1681 1665 cl_count = len(repo)
1682 1666 revs = []
1683 1667 for j in xrange(0, last + 1):
1684 1668 linkrev = filelog.linkrev(j)
1685 1669 if linkrev < minrev:
1686 1670 continue
1687 1671 # only yield revs for which we have the changelog; it can
1688 1672 # happen while doing "hg log" during a pull or commit
1689 1673 if linkrev >= cl_count:
1690 1674 break
1691 1675
1692 1676 parentlinkrevs = []
1693 1677 for p in filelog.parentrevs(j):
1694 1678 if p != nullrev:
1695 1679 parentlinkrevs.append(filelog.linkrev(p))
1696 1680 n = filelog.node(j)
1697 1681 revs.append((linkrev, parentlinkrevs,
1698 1682 follow and filelog.renamed(n)))
1699 1683
1700 1684 return reversed(revs)
1701 1685 def iterfiles():
1702 1686 pctx = repo['.']
1703 1687 for filename in match.files():
1704 1688 if follow:
1705 1689 if filename not in pctx:
1706 1690 raise error.Abort(_('cannot follow file not in parent '
1707 1691 'revision: "%s"') % filename)
1708 1692 yield filename, pctx[filename].filenode()
1709 1693 else:
1710 1694 yield filename, None
1711 1695 for filename_node in copies:
1712 1696 yield filename_node
1713 1697
1714 1698 for file_, node in iterfiles():
1715 1699 filelog = repo.file(file_)
1716 1700 if not len(filelog):
1717 1701 if node is None:
1718 1702 # A zero count may be a directory or deleted file, so
1719 1703 # try to find matching entries on the slow path.
1720 1704 if follow:
1721 1705 raise error.Abort(
1722 1706 _('cannot follow nonexistent file: "%s"') % file_)
1723 1707 raise FileWalkError("Cannot walk via filelog")
1724 1708 else:
1725 1709 continue
1726 1710
1727 1711 if node is None:
1728 1712 last = len(filelog) - 1
1729 1713 else:
1730 1714 last = filelog.rev(node)
1731 1715
1732 1716 # keep track of all ancestors of the file
1733 1717 ancestors = {filelog.linkrev(last)}
1734 1718
1735 1719 # iterate from latest to oldest revision
1736 1720 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1737 1721 if not follow:
1738 1722 if rev > maxrev:
1739 1723 continue
1740 1724 else:
1741 1725 # Note that last might not be the first interesting
1742 1726 # rev to us:
1743 1727 # if the file has been changed after maxrev, we'll
1744 1728 # have linkrev(last) > maxrev, and we still need
1745 1729 # to explore the file graph
1746 1730 if rev not in ancestors:
1747 1731 continue
1748 1732 # XXX insert 1327 fix here
1749 1733 if flparentlinkrevs:
1750 1734 ancestors.update(flparentlinkrevs)
1751 1735
1752 1736 fncache.setdefault(rev, []).append(file_)
1753 1737 wanted.add(rev)
1754 1738 if copied:
1755 1739 copies.append(copied)
1756 1740
1757 1741 return wanted
1758 1742
1759 1743 class _followfilter(object):
1760 1744 def __init__(self, repo, onlyfirst=False):
1761 1745 self.repo = repo
1762 1746 self.startrev = nullrev
1763 1747 self.roots = set()
1764 1748 self.onlyfirst = onlyfirst
1765 1749
1766 1750 def match(self, rev):
1767 1751 def realparents(rev):
1768 1752 if self.onlyfirst:
1769 1753 return self.repo.changelog.parentrevs(rev)[0:1]
1770 1754 else:
1771 1755 return filter(lambda x: x != nullrev,
1772 1756 self.repo.changelog.parentrevs(rev))
1773 1757
1774 1758 if self.startrev == nullrev:
1775 1759 self.startrev = rev
1776 1760 return True
1777 1761
1778 1762 if rev > self.startrev:
1779 1763 # forward: all descendants
1780 1764 if not self.roots:
1781 1765 self.roots.add(self.startrev)
1782 1766 for parent in realparents(rev):
1783 1767 if parent in self.roots:
1784 1768 self.roots.add(rev)
1785 1769 return True
1786 1770 else:
1787 1771 # backwards: all parents
1788 1772 if not self.roots:
1789 1773 self.roots.update(realparents(self.startrev))
1790 1774 if rev in self.roots:
1791 1775 self.roots.remove(rev)
1792 1776 self.roots.update(realparents(rev))
1793 1777 return True
1794 1778
1795 1779 return False
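A minimal usage sketch, assuming an existing repo object: seed the filter with a starting revision and feed it revisions in descending order to collect that revision's ancestors (this mirrors the --prune handling in walkchangerevs below).

    ff = _followfilter(repo)
    start = repo['.'].rev()
    # match() returns True for `start` itself and for its ancestors
    ancestors = [r for r in xrange(start, -1, -1) if ff.match(r)]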
1796 1780
1797 1781 def walkchangerevs(repo, match, opts, prepare):
1798 1782 '''Iterate over files and the revs in which they changed.
1799 1783
1800 1784 Callers most commonly need to iterate backwards over the history
1801 1785 in which they are interested. Doing so has awful (quadratic-looking)
1802 1786 performance, so we use iterators in a "windowed" way.
1803 1787
1804 1788 We walk a window of revisions in the desired order. Within the
1805 1789 window, we first walk forwards to gather data, then in the desired
1806 1790 order (usually backwards) to display it.
1807 1791
1808 1792 This function returns an iterator yielding contexts. Before
1809 1793 yielding each context, the iterator will first call the prepare
1810 1794 function on each context in the window in forward order.'''
1811 1795
1812 1796 follow = opts.get('follow') or opts.get('follow_first')
1813 1797 revs = _walkrevs(repo, opts)
1814 1798 if not revs:
1815 1799 return []
1816 1800 wanted = set()
1817 1801 slowpath = match.anypats() or (not match.always() and opts.get('removed'))
1818 1802 fncache = {}
1819 1803 change = repo.changectx
1820 1804
1821 1805 # First step is to fill wanted, the set of revisions that we want to yield.
1822 1806 # When it does not induce extra cost, we also fill fncache for revisions in
1823 1807 # wanted: a cache of filenames that were changed (ctx.files()) and that
1824 1808 # match the file filtering conditions.
1825 1809
1826 1810 if match.always():
1827 1811 # No files, no patterns. Display all revs.
1828 1812 wanted = revs
1829 1813 elif not slowpath:
1830 1814 # We only have to read through the filelog to find wanted revisions
1831 1815
1832 1816 try:
1833 1817 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1834 1818 except FileWalkError:
1835 1819 slowpath = True
1836 1820
1837 1821 # We decided to fall back to the slowpath because at least one
1838 1822 # of the paths was not a file. Check to see if at least one of them
1839 1823 # existed in history; otherwise simply return
1840 1824 for path in match.files():
1841 1825 if path == '.' or path in repo.store:
1842 1826 break
1843 1827 else:
1844 1828 return []
1845 1829
1846 1830 if slowpath:
1847 1831 # We have to read the changelog to match filenames against
1848 1832 # changed files
1849 1833
1850 1834 if follow:
1851 1835 raise error.Abort(_('can only follow copies/renames for explicit '
1852 1836 'filenames'))
1853 1837
1854 1838 # The slow path checks files modified in every changeset.
1855 1839 # This is really slow on large repos, so compute the set lazily.
1856 1840 class lazywantedset(object):
1857 1841 def __init__(self):
1858 1842 self.set = set()
1859 1843 self.revs = set(revs)
1860 1844
1861 1845 # No need to worry about locality here because it will be accessed
1862 1846 # in the same order as the increasing window below.
1863 1847 def __contains__(self, value):
1864 1848 if value in self.set:
1865 1849 return True
1866 1850 elif not value in self.revs:
1867 1851 return False
1868 1852 else:
1869 1853 self.revs.discard(value)
1870 1854 ctx = change(value)
1871 1855 matches = filter(match, ctx.files())
1872 1856 if matches:
1873 1857 fncache[value] = matches
1874 1858 self.set.add(value)
1875 1859 return True
1876 1860 return False
1877 1861
1878 1862 def discard(self, value):
1879 1863 self.revs.discard(value)
1880 1864 self.set.discard(value)
1881 1865
1882 1866 wanted = lazywantedset()
1883 1867
1884 1868 # it might be worthwhile to do this in the iterator if the rev range
1885 1869 # is descending and the prune args are all within that range
1886 1870 for rev in opts.get('prune', ()):
1887 1871 rev = repo[rev].rev()
1888 1872 ff = _followfilter(repo)
1889 1873 stop = min(revs[0], revs[-1])
1890 1874 for x in xrange(rev, stop - 1, -1):
1891 1875 if ff.match(x):
1892 1876 wanted = wanted - [x]
1893 1877
1894 1878 # Now that wanted is correctly initialized, we can iterate over the
1895 1879 # revision range, yielding only revisions in wanted.
1896 1880 def iterate():
1897 1881 if follow and match.always():
1898 1882 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1899 1883 def want(rev):
1900 1884 return ff.match(rev) and rev in wanted
1901 1885 else:
1902 1886 def want(rev):
1903 1887 return rev in wanted
1904 1888
1905 1889 it = iter(revs)
1906 1890 stopiteration = False
1907 1891 for windowsize in increasingwindows():
1908 1892 nrevs = []
1909 1893 for i in xrange(windowsize):
1910 1894 rev = next(it, None)
1911 1895 if rev is None:
1912 1896 stopiteration = True
1913 1897 break
1914 1898 elif want(rev):
1915 1899 nrevs.append(rev)
1916 1900 for rev in sorted(nrevs):
1917 1901 fns = fncache.get(rev)
1918 1902 ctx = change(rev)
1919 1903 if not fns:
1920 1904 def fns_generator():
1921 1905 for f in ctx.files():
1922 1906 if match(f):
1923 1907 yield f
1924 1908 fns = fns_generator()
1925 1909 prepare(ctx, fns)
1926 1910 for rev in nrevs:
1927 1911 yield change(rev)
1928 1912
1929 1913 if stopiteration:
1930 1914 break
1931 1915
1932 1916 return iterate()
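A minimal usage sketch, assuming an existing repo object and a 'README' file: iterate over the changesets touching that file, with a no-op prepare callback (mirroring the call in finddate() above).

    from mercurial import cmdutil, scmutil

    m = scmutil.matchfiles(repo, ['README'])
    prep = lambda ctx, fns: None              # no-op prepare callback
    for ctx in cmdutil.walkchangerevs(repo, m, {'rev': None}, prep):
        repo.ui.write("%d:%s\n" % (ctx.rev(), ctx))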
1933 1917
1934 1918 def add(ui, repo, match, prefix, explicitonly, **opts):
1935 1919 join = lambda f: os.path.join(prefix, f)
1936 1920 bad = []
1937 1921
1938 1922 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
1939 1923 names = []
1940 1924 wctx = repo[None]
1941 1925 cca = None
1942 1926 abort, warn = scmutil.checkportabilityalert(ui)
1943 1927 if abort or warn:
1944 1928 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
1945 1929
1946 1930 badmatch = matchmod.badmatch(match, badfn)
1947 1931 dirstate = repo.dirstate
1948 1932 # We don't want to just call wctx.walk here, since it would return a lot of
1949 1933 # clean files, which we aren't interested in and which would take time.
1950 1934 for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
1951 1935 unknown=True, ignored=False, full=False)):
1952 1936 exact = match.exact(f)
1953 1937 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
1954 1938 if cca:
1955 1939 cca(f)
1956 1940 names.append(f)
1957 1941 if ui.verbose or not exact:
1958 1942 ui.status(_('adding %s\n') % match.rel(f))
1959 1943
1960 1944 for subpath in sorted(wctx.substate):
1961 1945 sub = wctx.sub(subpath)
1962 1946 try:
1963 1947 submatch = matchmod.subdirmatcher(subpath, match)
1964 1948 if opts.get(r'subrepos'):
1965 1949 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
1966 1950 else:
1967 1951 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
1968 1952 except error.LookupError:
1969 1953 ui.status(_("skipping missing subrepository: %s\n")
1970 1954 % join(subpath))
1971 1955
1972 1956 if not opts.get(r'dry_run'):
1973 1957 rejected = wctx.add(names, prefix)
1974 1958 bad.extend(f for f in rejected if f in match.files())
1975 1959 return bad
1976 1960
1977 1961 def addwebdirpath(repo, serverpath, webconf):
1978 1962 webconf[serverpath] = repo.root
1979 1963 repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))
1980 1964
1981 1965 for r in repo.revs('filelog("path:.hgsub")'):
1982 1966 ctx = repo[r]
1983 1967 for subpath in ctx.substate:
1984 1968 ctx.sub(subpath).addwebdirpath(serverpath, webconf)
1985 1969
1986 1970 def forget(ui, repo, match, prefix, explicitonly):
1987 1971 join = lambda f: os.path.join(prefix, f)
1988 1972 bad = []
1989 1973 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
1990 1974 wctx = repo[None]
1991 1975 forgot = []
1992 1976
1993 1977 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
1994 1978 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1995 1979 if explicitonly:
1996 1980 forget = [f for f in forget if match.exact(f)]
1997 1981
1998 1982 for subpath in sorted(wctx.substate):
1999 1983 sub = wctx.sub(subpath)
2000 1984 try:
2001 1985 submatch = matchmod.subdirmatcher(subpath, match)
2002 1986 subbad, subforgot = sub.forget(submatch, prefix)
2003 1987 bad.extend([subpath + '/' + f for f in subbad])
2004 1988 forgot.extend([subpath + '/' + f for f in subforgot])
2005 1989 except error.LookupError:
2006 1990 ui.status(_("skipping missing subrepository: %s\n")
2007 1991 % join(subpath))
2008 1992
2009 1993 if not explicitonly:
2010 1994 for f in match.files():
2011 1995 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2012 1996 if f not in forgot:
2013 1997 if repo.wvfs.exists(f):
2014 1998 # Don't complain if the exact case match wasn't given.
2015 1999 # But don't do this until after checking 'forgot', so
2016 2000 # that subrepo files aren't normalized, and this op is
2017 2001 # purely from data cached by the status walk above.
2018 2002 if repo.dirstate.normalize(f) in repo.dirstate:
2019 2003 continue
2020 2004 ui.warn(_('not removing %s: '
2021 2005 'file is already untracked\n')
2022 2006 % match.rel(f))
2023 2007 bad.append(f)
2024 2008
2025 2009 for f in forget:
2026 2010 if ui.verbose or not match.exact(f):
2027 2011 ui.status(_('removing %s\n') % match.rel(f))
2028 2012
2029 2013 rejected = wctx.forget(forget, prefix)
2030 2014 bad.extend(f for f in rejected if f in match.files())
2031 2015 forgot.extend(f for f in forget if f not in rejected)
2032 2016 return bad, forgot
2033 2017
2034 2018 def files(ui, ctx, m, fm, fmt, subrepos):
2035 2019 rev = ctx.rev()
2036 2020 ret = 1
2037 2021 ds = ctx.repo().dirstate
2038 2022
2039 2023 for f in ctx.matches(m):
2040 2024 if rev is None and ds[f] == 'r':
2041 2025 continue
2042 2026 fm.startitem()
2043 2027 if ui.verbose:
2044 2028 fc = ctx[f]
2045 2029 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2046 2030 fm.data(abspath=f)
2047 2031 fm.write('path', fmt, m.rel(f))
2048 2032 ret = 0
2049 2033
2050 2034 for subpath in sorted(ctx.substate):
2051 2035 submatch = matchmod.subdirmatcher(subpath, m)
2052 2036 if (subrepos or m.exact(subpath) or any(submatch.files())):
2053 2037 sub = ctx.sub(subpath)
2054 2038 try:
2055 2039 recurse = m.exact(subpath) or subrepos
2056 2040 if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
2057 2041 ret = 0
2058 2042 except error.LookupError:
2059 2043 ui.status(_("skipping missing subrepository: %s\n")
2060 2044 % m.abs(subpath))
2061 2045
2062 2046 return ret
2063 2047
2064 2048 def remove(ui, repo, m, prefix, after, force, subrepos, warnings=None):
2065 2049 join = lambda f: os.path.join(prefix, f)
2066 2050 ret = 0
2067 2051 s = repo.status(match=m, clean=True)
2068 2052 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2069 2053
2070 2054 wctx = repo[None]
2071 2055
2072 2056 if warnings is None:
2073 2057 warnings = []
2074 2058 warn = True
2075 2059 else:
2076 2060 warn = False
2077 2061
2078 2062 subs = sorted(wctx.substate)
2079 2063 total = len(subs)
2080 2064 count = 0
2081 2065 for subpath in subs:
2082 2066 count += 1
2083 2067 submatch = matchmod.subdirmatcher(subpath, m)
2084 2068 if subrepos or m.exact(subpath) or any(submatch.files()):
2085 2069 ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
2086 2070 sub = wctx.sub(subpath)
2087 2071 try:
2088 2072 if sub.removefiles(submatch, prefix, after, force, subrepos,
2089 2073 warnings):
2090 2074 ret = 1
2091 2075 except error.LookupError:
2092 2076 warnings.append(_("skipping missing subrepository: %s\n")
2093 2077 % join(subpath))
2094 2078 ui.progress(_('searching'), None)
2095 2079
2096 2080 # warn about failure to delete explicit files/dirs
2097 2081 deleteddirs = util.dirs(deleted)
2098 2082 files = m.files()
2099 2083 total = len(files)
2100 2084 count = 0
2101 2085 for f in files:
2102 2086 def insubrepo():
2103 2087 for subpath in wctx.substate:
2104 2088 if f.startswith(subpath + '/'):
2105 2089 return True
2106 2090 return False
2107 2091
2108 2092 count += 1
2109 2093 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2110 2094 isdir = f in deleteddirs or wctx.hasdir(f)
2111 2095 if (f in repo.dirstate or isdir or f == '.'
2112 2096 or insubrepo() or f in subs):
2113 2097 continue
2114 2098
2115 2099 if repo.wvfs.exists(f):
2116 2100 if repo.wvfs.isdir(f):
2117 2101 warnings.append(_('not removing %s: no tracked files\n')
2118 2102 % m.rel(f))
2119 2103 else:
2120 2104 warnings.append(_('not removing %s: file is untracked\n')
2121 2105 % m.rel(f))
2122 2106 # missing files will generate a warning elsewhere
2123 2107 ret = 1
2124 2108 ui.progress(_('deleting'), None)
2125 2109
2126 2110 if force:
2127 2111 list = modified + deleted + clean + added
2128 2112 elif after:
2129 2113 list = deleted
2130 2114 remaining = modified + added + clean
2131 2115 total = len(remaining)
2132 2116 count = 0
2133 2117 for f in remaining:
2134 2118 count += 1
2135 2119 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2136 2120 if ui.verbose or (f in files):
2137 2121 warnings.append(_('not removing %s: file still exists\n')
2138 2122 % m.rel(f))
2139 2123 ret = 1
2140 2124 ui.progress(_('skipping'), None)
2141 2125 else:
2142 2126 list = deleted + clean
2143 2127 total = len(modified) + len(added)
2144 2128 count = 0
2145 2129 for f in modified:
2146 2130 count += 1
2147 2131 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2148 2132 warnings.append(_('not removing %s: file is modified (use -f'
2149 2133 ' to force removal)\n') % m.rel(f))
2150 2134 ret = 1
2151 2135 for f in added:
2152 2136 count += 1
2153 2137 ui.progress(_('skipping'), count, total=total, unit=_('files'))
2154 2138 warnings.append(_("not removing %s: file has been marked for add"
2155 2139 " (use 'hg forget' to undo add)\n") % m.rel(f))
2156 2140 ret = 1
2157 2141 ui.progress(_('skipping'), None)
2158 2142
2159 2143 list = sorted(list)
2160 2144 total = len(list)
2161 2145 count = 0
2162 2146 for f in list:
2163 2147 count += 1
2164 2148 if ui.verbose or not m.exact(f):
2165 2149 ui.progress(_('deleting'), count, total=total, unit=_('files'))
2166 2150 ui.status(_('removing %s\n') % m.rel(f))
2167 2151 ui.progress(_('deleting'), None)
2168 2152
2169 2153 with repo.wlock():
2170 2154 if not after:
2171 2155 for f in list:
2172 2156 if f in added:
2173 2157 continue # we never unlink added files on remove
2174 2158 repo.wvfs.unlinkpath(f, ignoremissing=True)
2175 2159 repo[None].forget(list)
2176 2160
2177 2161 if warn:
2178 2162 for warning in warnings:
2179 2163 ui.warn(warning)
2180 2164
2181 2165 return ret
2182 2166
2183 2167 def _updatecatformatter(fm, ctx, matcher, path, decode):
2184 2168 """Hook for adding data to the formatter used by ``hg cat``.
2185 2169
2186 2170 Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
2187 2171 this method first."""
2188 2172 data = ctx[path].data()
2189 2173 if decode:
2190 2174 data = ctx.repo().wwritedata(path, data)
2191 2175 fm.startitem()
2192 2176 fm.write('data', '%s', data)
2193 2177 fm.data(abspath=path, path=matcher.rel(path))
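A minimal sketch of how an extension (lfs-style) might wrap this hook to add its own keyword; the 'rawsize' field is hypothetical, and per the docstring the original function must run first.

    from mercurial import cmdutil, extensions

    def _catformatter(orig, fm, ctx, matcher, path, decode):
        orig(fm, ctx, matcher, path, decode)   # run the stock hook first
        fm.data(rawsize=ctx[path].size())      # then add extension data

    def extsetup(ui):
        extensions.wrapfunction(cmdutil, '_updatecatformatter', _catformatter)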
2194 2178
2195 2179 def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
2196 2180 err = 1
2197 2181 opts = pycompat.byteskwargs(opts)
2198 2182
2199 2183 def write(path):
2200 2184 filename = None
2201 2185 if fntemplate:
2202 2186 filename = makefilename(repo, fntemplate, ctx.node(),
2203 2187 pathname=os.path.join(prefix, path))
2204 2188 # attempt to create the directory if it does not already exist
2205 2189 try:
2206 2190 os.makedirs(os.path.dirname(filename))
2207 2191 except OSError:
2208 2192 pass
2209 2193 with formatter.maybereopen(basefm, filename, opts) as fm:
2210 2194 _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))
2211 2195
2212 2196 # Automation often uses hg cat on single files, so special case it
2213 2197 # for performance to avoid the cost of parsing the manifest.
2214 2198 if len(matcher.files()) == 1 and not matcher.anypats():
2215 2199 file = matcher.files()[0]
2216 2200 mfl = repo.manifestlog
2217 2201 mfnode = ctx.manifestnode()
2218 2202 try:
2219 2203 if mfnode and mfl[mfnode].find(file)[0]:
2220 2204 write(file)
2221 2205 return 0
2222 2206 except KeyError:
2223 2207 pass
2224 2208
2225 2209 for abs in ctx.walk(matcher):
2226 2210 write(abs)
2227 2211 err = 0
2228 2212
2229 2213 for subpath in sorted(ctx.substate):
2230 2214 sub = ctx.sub(subpath)
2231 2215 try:
2232 2216 submatch = matchmod.subdirmatcher(subpath, matcher)
2233 2217
2234 2218 if not sub.cat(submatch, basefm, fntemplate,
2235 2219 os.path.join(prefix, sub._path),
2236 2220 **pycompat.strkwargs(opts)):
2237 2221 err = 0
2238 2222 except error.RepoLookupError:
2239 2223 ui.status(_("skipping missing subrepository: %s\n")
2240 2224 % os.path.join(prefix, subpath))
2241 2225
2242 2226 return err
2243 2227
2244 2228 def commit(ui, repo, commitfunc, pats, opts):
2245 2229 '''commit the specified files or all outstanding changes'''
2246 2230 date = opts.get('date')
2247 2231 if date:
2248 2232 opts['date'] = util.parsedate(date)
2249 2233 message = logmessage(ui, opts)
2250 2234 matcher = scmutil.match(repo[None], pats, opts)
2251 2235
2252 2236 dsguard = None
2253 2237 # extract addremove carefully -- this function can be called from a command
2254 2238 # that doesn't support addremove
2255 2239 if opts.get('addremove'):
2256 2240 dsguard = dirstateguard.dirstateguard(repo, 'commit')
2257 2241 with dsguard or util.nullcontextmanager():
2258 2242 if dsguard:
2259 2243 if scmutil.addremove(repo, matcher, "", opts) != 0:
2260 2244 raise error.Abort(
2261 2245 _("failed to mark all new/missing files as added/removed"))
2262 2246
2263 2247 return commitfunc(ui, repo, message, matcher, opts)
2264 2248
2265 2249 def samefile(f, ctx1, ctx2):
2266 2250 if f in ctx1.manifest():
2267 2251 a = ctx1.filectx(f)
2268 2252 if f in ctx2.manifest():
2269 2253 b = ctx2.filectx(f)
2270 2254 return (not a.cmp(b)
2271 2255 and a.flags() == b.flags())
2272 2256 else:
2273 2257 return False
2274 2258 else:
2275 2259 return f not in ctx2.manifest()
2276 2260
2277 2261 def amend(ui, repo, old, extra, pats, opts):
2278 2262 # avoid cycle context -> subrepo -> cmdutil
2279 2263 from . import context
2280 2264
2281 2265 # amend will reuse the existing user if not specified, but the obsolete
2282 2266 # marker creation requires that the current user's name is specified.
2283 2267 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2284 2268 ui.username() # raise exception if username not set
2285 2269
2286 2270 ui.note(_('amending changeset %s\n') % old)
2287 2271 base = old.p1()
2288 2272
2289 2273 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2290 2274 # Participating changesets:
2291 2275 #
2292 2276 # wctx o - workingctx that contains changes from working copy
2293 2277 # | to go into amending commit
2294 2278 # |
2295 2279 # old o - changeset to amend
2296 2280 # |
2297 2281 # base o - first parent of the changeset to amend
2298 2282 wctx = repo[None]
2299 2283
2300 2284 # Copy to avoid mutating input
2301 2285 extra = extra.copy()
2302 2286 # Update extra dict from amended commit (e.g. to preserve graft
2303 2287 # source)
2304 2288 extra.update(old.extra())
2305 2289
2306 2290 # Also update it from the wctx
2307 2291 extra.update(wctx.extra())
2308 2292
2309 2293 user = opts.get('user') or old.user()
2310 2294 date = opts.get('date') or old.date()
2311 2295
2312 2296 # Parse the date to allow comparison between date and old.date()
2313 2297 date = util.parsedate(date)
2314 2298
2315 2299 if len(old.parents()) > 1:
2316 2300 # ctx.files() isn't reliable for merges, so fall back to the
2317 2301 # slower repo.status() method
2318 2302 files = set([fn for st in repo.status(base, old)[:3]
2319 2303 for fn in st])
2320 2304 else:
2321 2305 files = set(old.files())
2322 2306
2323 2307 # add/remove the files to the working copy if the "addremove" option
2324 2308 # was specified.
2325 2309 matcher = scmutil.match(wctx, pats, opts)
2326 2310 if (opts.get('addremove')
2327 2311 and scmutil.addremove(repo, matcher, "", opts)):
2328 2312 raise error.Abort(
2329 2313 _("failed to mark all new/missing files as added/removed"))
2330 2314
2331 2315 # Check subrepos. This depends on in-place wctx._status update in
2332 2316 # subrepo.precommit(). To minimize the risk of this hack, we do
2333 2317 # nothing if .hgsub does not exist.
2334 2318 if '.hgsub' in wctx or '.hgsub' in old:
2335 2319 from . import subrepo # avoid cycle: cmdutil -> subrepo -> cmdutil
2336 2320 subs, commitsubs, newsubstate = subrepo.precommit(
2337 2321 ui, wctx, wctx._status, matcher)
2338 2322 # amend should abort if commitsubrepos is enabled
2339 2323 assert not commitsubs
2340 2324 if subs:
2341 2325 subrepo.writestate(repo, newsubstate)
2342 2326
2343 2327 filestoamend = set(f for f in wctx.files() if matcher(f))
2344 2328
2345 2329 changes = (len(filestoamend) > 0)
2346 2330 if changes:
2347 2331 # Recompute copies (avoid recording a -> b -> a)
2348 2332 copied = copies.pathcopies(base, wctx, matcher)
2349 2333 if old.p2().node() != nullid:
2350 2334 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2351 2335
2352 2336 # Prune files which were reverted by the updates: if old
2353 2337 # introduced file X and the file was renamed in the working
2354 2338 # copy, then those two files are the same and
2355 2339 # we can discard X from our list of files. Likewise if X
2356 2340 # was removed, it's no longer relevant. If X is missing (aka
2357 2341 # deleted), old X must be preserved.
2358 2342 files.update(filestoamend)
2359 2343 files = [f for f in files if (not samefile(f, wctx, base)
2360 2344 or f in wctx.deleted())]
2361 2345
2362 2346 def filectxfn(repo, ctx_, path):
2363 2347 try:
2364 2348 # If the file being considered is not amongst the files
2365 2349 # to be amended, we should return the file context from the
2366 2350 # old changeset. This avoids issues when only some files in
2367 2351 # the working copy are being amended but there are also
2368 2352 # changes to other files from the old changeset.
2369 2353 if path not in filestoamend:
2370 2354 return old.filectx(path)
2371 2355
2372 2356 # Return None for removed files.
2373 2357 if path in wctx.removed():
2374 2358 return None
2375 2359
2376 2360 fctx = wctx[path]
2377 2361 flags = fctx.flags()
2378 2362 mctx = context.memfilectx(repo, ctx_,
2379 2363 fctx.path(), fctx.data(),
2380 2364 islink='l' in flags,
2381 2365 isexec='x' in flags,
2382 2366 copied=copied.get(path))
2383 2367 return mctx
2384 2368 except KeyError:
2385 2369 return None
2386 2370 else:
2387 2371 ui.note(_('copying changeset %s to %s\n') % (old, base))
2388 2372
2389 2373 # Use version of files as in the old cset
2390 2374 def filectxfn(repo, ctx_, path):
2391 2375 try:
2392 2376 return old.filectx(path)
2393 2377 except KeyError:
2394 2378 return None
2395 2379
2396 2380 # See if we got a message from -m or -l; if not, open the editor with
2397 2381 # the message of the changeset to amend.
2398 2382 message = logmessage(ui, opts)
2399 2383
2400 2384 editform = mergeeditform(old, 'commit.amend')
2401 2385 editor = getcommiteditor(editform=editform,
2402 2386 **pycompat.strkwargs(opts))
2403 2387
2404 2388 if not message:
2405 2389 editor = getcommiteditor(edit=True, editform=editform)
2406 2390 message = old.description()
2407 2391
2408 2392 pureextra = extra.copy()
2409 2393 extra['amend_source'] = old.hex()
2410 2394
2411 2395 new = context.memctx(repo,
2412 2396 parents=[base.node(), old.p2().node()],
2413 2397 text=message,
2414 2398 files=files,
2415 2399 filectxfn=filectxfn,
2416 2400 user=user,
2417 2401 date=date,
2418 2402 extra=extra,
2419 2403 editor=editor)
2420 2404
2421 2405 newdesc = changelog.stripdesc(new.description())
2422 2406 if ((not changes)
2423 2407 and newdesc == old.description()
2424 2408 and user == old.user()
2425 2409 and date == old.date()
2426 2410 and pureextra == old.extra()):
2427 2411 # nothing changed. continuing here would create a new node
2428 2412 # anyway because of the amend_source noise.
2429 2413 #
2430 2414 # This is not what we expect from amend.
2431 2415 return old.node()
2432 2416
2433 2417 if opts.get('secret'):
2434 2418 commitphase = 'secret'
2435 2419 else:
2436 2420 commitphase = old.phase()
2437 2421 overrides = {('phases', 'new-commit'): commitphase}
2438 2422 with ui.configoverride(overrides, 'amend'):
2439 2423 newid = repo.commitctx(new)
2440 2424
2441 2425 # Reroute the working copy parent to the new changeset
2442 2426 repo.setparents(newid, nullid)
2443 2427 mapping = {old.node(): (newid,)}
2444 2428 obsmetadata = None
2445 2429 if opts.get('note'):
2446 2430 obsmetadata = {'note': opts['note']}
2447 2431 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata)
2448 2432
2449 2433 # Fixing the dirstate because localrepo.commitctx does not update
2450 2434 # it. This is rather convenient because we did not need to update
2451 2435 # the dirstate for all the files in the new commit, which commitctx
2452 2436 # could have done if it updated the dirstate. Now, we can
2453 2437 # selectively update the dirstate only for the amended files.
2454 2438 dirstate = repo.dirstate
2455 2439
2456 2440 # Update the state of the files which were added and
2457 2441 # modified in the amend to "normal" in the dirstate.
2458 2442 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2459 2443 for f in normalfiles:
2460 2444 dirstate.normal(f)
2461 2445
2462 2446 # Update the state of files which were removed in the amend
2463 2447 # to "removed" in the dirstate.
2464 2448 removedfiles = set(wctx.removed()) & filestoamend
2465 2449 for f in removedfiles:
2466 2450 dirstate.drop(f)
2467 2451
2468 2452 return newid
2469 2453
2470 2454 def commiteditor(repo, ctx, subs, editform=''):
2471 2455 if ctx.description():
2472 2456 return ctx.description()
2473 2457 return commitforceeditor(repo, ctx, subs, editform=editform,
2474 2458 unchangedmessagedetection=True)
2475 2459
2476 2460 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2477 2461 editform='', unchangedmessagedetection=False):
2478 2462 if not extramsg:
2479 2463 extramsg = _("Leave message empty to abort commit.")
2480 2464
2481 2465 forms = [e for e in editform.split('.') if e]
2482 2466 forms.insert(0, 'changeset')
2483 2467 templatetext = None
2484 2468 while forms:
2485 2469 ref = '.'.join(forms)
2486 2470 if repo.ui.config('committemplate', ref):
2487 2471 templatetext = committext = buildcommittemplate(
2488 2472 repo, ctx, subs, extramsg, ref)
2489 2473 break
2490 2474 forms.pop()
2491 2475 else:
2492 2476 committext = buildcommittext(repo, ctx, subs, extramsg)
2493 2477
2494 2478 # run editor in the repository root
2495 2479 olddir = pycompat.getcwd()
2496 2480 os.chdir(repo.root)
2497 2481
2498 2482 # make in-memory changes visible to external process
2499 2483 tr = repo.currenttransaction()
2500 2484 repo.dirstate.write(tr)
2501 2485 pending = tr and tr.writepending() and repo.root
2502 2486
2503 2487 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2504 2488 editform=editform, pending=pending,
2505 2489 repopath=repo.path, action='commit')
2506 2490 text = editortext
2507 2491
2508 2492 # strip away anything below this special string (used for editors that want
2509 2493 # to display the diff)
2510 2494 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2511 2495 if stripbelow:
2512 2496 text = text[:stripbelow.start()]
2513 2497
2514 2498 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2515 2499 os.chdir(olddir)
2516 2500
2517 2501 if finishdesc:
2518 2502 text = finishdesc(text)
2519 2503 if not text.strip():
2520 2504 raise error.Abort(_("empty commit message"))
2521 2505 if unchangedmessagedetection and editortext == templatetext:
2522 2506 raise error.Abort(_("commit message unchanged"))
2523 2507
2524 2508 return text
2525 2509
2526 2510 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2527 2511 ui = repo.ui
2528 2512 spec = formatter.templatespec(ref, None, None)
2529 t = changeset_templater(ui, repo, spec, None, {}, False)
2513 t = logcmdutil.changesettemplater(ui, repo, spec, None, {}, False)
2530 2514 t.t.cache.update((k, templater.unquotestring(v))
2531 2515 for k, v in repo.ui.configitems('committemplate'))
2532 2516
2533 2517 if not extramsg:
2534 2518 extramsg = '' # ensure that extramsg is string
2535 2519
2536 2520 ui.pushbuffer()
2537 2521 t.show(ctx, extramsg=extramsg)
2538 2522 return ui.popbuffer()
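For illustration, commitforceeditor() above resolves the editform 'changeset.commit.amend' against progressively shorter keys, so a configuration along these hypothetical lines would use the amend-specific template and fall back to 'changeset' otherwise:

    # hypothetical templates; normally these would live in an hgrc
    repo.ui.setconfig('committemplate', 'changeset',
                      '{desc}\n\nHG: user: {author}\n')
    repo.ui.setconfig('committemplate', 'changeset.commit.amend',
                      '{desc}\n\nHG: this amends {node|short}\n')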
2539 2523
2540 2524 def hgprefix(msg):
2541 2525 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2542 2526
2543 2527 def buildcommittext(repo, ctx, subs, extramsg):
2544 2528 edittext = []
2545 2529 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2546 2530 if ctx.description():
2547 2531 edittext.append(ctx.description())
2548 2532 edittext.append("")
2549 2533 edittext.append("") # Empty line between message and comments.
2550 2534 edittext.append(hgprefix(_("Enter commit message."
2551 2535 " Lines beginning with 'HG:' are removed.")))
2552 2536 edittext.append(hgprefix(extramsg))
2553 2537 edittext.append("HG: --")
2554 2538 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2555 2539 if ctx.p2():
2556 2540 edittext.append(hgprefix(_("branch merge")))
2557 2541 if ctx.branch():
2558 2542 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2559 2543 if bookmarks.isactivewdirparent(repo):
2560 2544 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2561 2545 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2562 2546 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2563 2547 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2564 2548 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2565 2549 if not added and not modified and not removed:
2566 2550 edittext.append(hgprefix(_("no files changed")))
2567 2551 edittext.append("")
2568 2552
2569 2553 return "\n".join(edittext)
2570 2554
2571 2555 def commitstatus(repo, node, branch, bheads=None, opts=None):
2572 2556 if opts is None:
2573 2557 opts = {}
2574 2558 ctx = repo[node]
2575 2559 parents = ctx.parents()
2576 2560
2577 2561 if (not opts.get('amend') and bheads and node not in bheads and not
2578 2562 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2579 2563 repo.ui.status(_('created new head\n'))
2580 2564 # The message is not printed for initial roots. For the other
2581 2565 # changesets, it is printed in the following situations:
2582 2566 #
2583 2567 # Par column: for the 2 parents with ...
2584 2568 # N: null or no parent
2585 2569 # B: parent is on another named branch
2586 2570 # C: parent is a regular non head changeset
2587 2571 # H: parent was a branch head of the current branch
2588 2572 # Msg column: whether we print "created new head" message
2589 2573 # In the following, it is assumed that there already exists some
2590 2574 # initial branch heads of the current branch, otherwise nothing is
2591 2575 # printed anyway.
2592 2576 #
2593 2577 # Par Msg Comment
2594 2578 # N N y additional topo root
2595 2579 #
2596 2580 # B N y additional branch root
2597 2581 # C N y additional topo head
2598 2582 # H N n usual case
2599 2583 #
2600 2584 # B B y weird additional branch root
2601 2585 # C B y branch merge
2602 2586 # H B n merge with named branch
2603 2587 #
2604 2588 # C C y additional head from merge
2605 2589 # C H n merge with a head
2606 2590 #
2607 2591 # H H n head merge: head count decreases
2608 2592
2609 2593 if not opts.get('close_branch'):
2610 2594 for r in parents:
2611 2595 if r.closesbranch() and r.branch() == branch:
2612 2596 repo.ui.status(_('reopening closed branch head %d\n') % r)
2613 2597
2614 2598 if repo.ui.debugflag:
2615 2599 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2616 2600 elif repo.ui.verbose:
2617 2601 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2618 2602
2619 2603 def postcommitstatus(repo, pats, opts):
2620 2604 return repo.status(match=scmutil.match(repo[None], pats, opts))
2621 2605
2622 2606 def revert(ui, repo, ctx, parents, *pats, **opts):
2623 2607 opts = pycompat.byteskwargs(opts)
2624 2608 parent, p2 = parents
2625 2609 node = ctx.node()
2626 2610
2627 2611 mf = ctx.manifest()
2628 2612 if node == p2:
2629 2613 parent = p2
2630 2614
2631 2615 # need all matching names in dirstate and manifest of target rev,
2632 2616 # so have to walk both. do not print errors if files exist in one
2633 2617 # but not the other. in both cases, filesets should be evaluated against
2634 2618 # workingctx to get consistent result (issue4497). this means 'set:**'
2635 2619 # cannot be used to select missing files from target rev.
2636 2620
2637 2621 # `names` is a mapping for all elements in working copy and target revision
2638 2622 # The mapping is in the form:
2639 2623 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2640 2624 names = {}
2641 2625
2642 2626 with repo.wlock():
2643 2627 ## filling of the `names` mapping
2644 2628 # walk dirstate to fill `names`
2645 2629
2646 2630 interactive = opts.get('interactive', False)
2647 2631 wctx = repo[None]
2648 2632 m = scmutil.match(wctx, pats, opts)
2649 2633
2650 2634 # we'll need this later
2651 2635 targetsubs = sorted(s for s in wctx.substate if m(s))
2652 2636
2653 2637 if not m.always():
2654 2638 matcher = matchmod.badmatch(m, lambda x, y: False)
2655 2639 for abs in wctx.walk(matcher):
2656 2640 names[abs] = m.rel(abs), m.exact(abs)
2657 2641
2658 2642 # walk target manifest to fill `names`
2659 2643
2660 2644 def badfn(path, msg):
2661 2645 if path in names:
2662 2646 return
2663 2647 if path in ctx.substate:
2664 2648 return
2665 2649 path_ = path + '/'
2666 2650 for f in names:
2667 2651 if f.startswith(path_):
2668 2652 return
2669 2653 ui.warn("%s: %s\n" % (m.rel(path), msg))
2670 2654
2671 2655 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2672 2656 if abs not in names:
2673 2657 names[abs] = m.rel(abs), m.exact(abs)
2674 2658
2675 2659 # Find the status of all files in `names`.
2676 2660 m = scmutil.matchfiles(repo, names)
2677 2661
2678 2662 changes = repo.status(node1=node, match=m,
2679 2663 unknown=True, ignored=True, clean=True)
2680 2664 else:
2681 2665 changes = repo.status(node1=node, match=m)
2682 2666 for kind in changes:
2683 2667 for abs in kind:
2684 2668 names[abs] = m.rel(abs), m.exact(abs)
2685 2669
2686 2670 m = scmutil.matchfiles(repo, names)
2687 2671
2688 2672 modified = set(changes.modified)
2689 2673 added = set(changes.added)
2690 2674 removed = set(changes.removed)
2691 2675 _deleted = set(changes.deleted)
2692 2676 unknown = set(changes.unknown)
2693 2677 unknown.update(changes.ignored)
2694 2678 clean = set(changes.clean)
2695 2679 modadded = set()
2696 2680
2697 2681 # We need to account for the state of the file in the dirstate,
2698 2682 # even when we revert against something other than the parent. This will
2699 2683 # slightly alter the behavior of revert (making a backup or not, deleting
2700 2684 # or just forgetting, etc.).
2701 2685 if parent == node:
2702 2686 dsmodified = modified
2703 2687 dsadded = added
2704 2688 dsremoved = removed
2705 2689 # store all local modifications, useful later for rename detection
2706 2690 localchanges = dsmodified | dsadded
2707 2691 modified, added, removed = set(), set(), set()
2708 2692 else:
2709 2693 changes = repo.status(node1=parent, match=m)
2710 2694 dsmodified = set(changes.modified)
2711 2695 dsadded = set(changes.added)
2712 2696 dsremoved = set(changes.removed)
2713 2697 # store all local modifications, useful later for rename detection
2714 2698 localchanges = dsmodified | dsadded
2715 2699
2716 2700 # only take into account removes between wc and target
2717 2701 clean |= dsremoved - removed
2718 2702 dsremoved &= removed
2719 2703 # distinguish between dirstate removes and others
2720 2704 removed -= dsremoved
2721 2705
2722 2706 modadded = added & dsmodified
2723 2707 added -= modadded
2724 2708
2725 2709 # tell newly modified files apart.
2726 2710 dsmodified &= modified
2727 2711 dsmodified |= modified & dsadded # dirstate added may need backup
2728 2712 modified -= dsmodified
2729 2713
2730 2714 # We need to wait for some post-processing to update this set
2731 2715 # before making the distinction. The dirstate will be used for
2732 2716 # that purpose.
2733 2717 dsadded = added
2734 2718
2735 2719 # in case of merge, files that are actually added can be reported as
2736 2720 # modified; we need to post-process the result
2737 2721 if p2 != nullid:
2738 2722 mergeadd = set(dsmodified)
2739 2723 for path in dsmodified:
2740 2724 if path in mf:
2741 2725 mergeadd.remove(path)
2742 2726 dsadded |= mergeadd
2743 2727 dsmodified -= mergeadd
2744 2728
2745 2729 # if f is a rename, update `names` to also revert the source
2746 2730 cwd = repo.getcwd()
2747 2731 for f in localchanges:
2748 2732 src = repo.dirstate.copied(f)
2749 2733 # XXX should we check for rename down to target node?
2750 2734 if src and src not in names and repo.dirstate[src] == 'r':
2751 2735 dsremoved.add(src)
2752 2736 names[src] = (repo.pathto(src, cwd), True)
2753 2737
2754 2738 # determine the exact nature of the deleted files
2755 2739 deladded = set(_deleted)
2756 2740 for path in _deleted:
2757 2741 if path in mf:
2758 2742 deladded.remove(path)
2759 2743 deleted = _deleted - deladded
2760 2744
2761 2745 # distinguish between files to forget and the others
2762 2746 added = set()
2763 2747 for abs in dsadded:
2764 2748 if repo.dirstate[abs] != 'a':
2765 2749 added.add(abs)
2766 2750 dsadded -= added
2767 2751
2768 2752 for abs in deladded:
2769 2753 if repo.dirstate[abs] == 'a':
2770 2754 dsadded.add(abs)
2771 2755 deladded -= dsadded
2772 2756
2773 2757 # For files marked as removed, we check if an unknown file is present at
2774 2758 # the same path. If such a file exists, it may need to be backed up.
2775 2759 # Making the distinction at this stage helps keep the backup
2776 2760 # logic simpler.
2777 2761 removunk = set()
2778 2762 for abs in removed:
2779 2763 target = repo.wjoin(abs)
2780 2764 if os.path.lexists(target):
2781 2765 removunk.add(abs)
2782 2766 removed -= removunk
2783 2767
2784 2768 dsremovunk = set()
2785 2769 for abs in dsremoved:
2786 2770 target = repo.wjoin(abs)
2787 2771 if os.path.lexists(target):
2788 2772 dsremovunk.add(abs)
2789 2773 dsremoved -= dsremovunk
2790 2774
2791 2775 # actions to be actually performed by revert
2792 2776 # (<list of files>, <message>) tuple
2793 2777 actions = {'revert': ([], _('reverting %s\n')),
2794 2778 'add': ([], _('adding %s\n')),
2795 2779 'remove': ([], _('removing %s\n')),
2796 2780 'drop': ([], _('removing %s\n')),
2797 2781 'forget': ([], _('forgetting %s\n')),
2798 2782 'undelete': ([], _('undeleting %s\n')),
2799 2783 'noop': (None, _('no changes needed to %s\n')),
2800 2784 'unknown': (None, _('file not managed: %s\n')),
2801 2785 }
2802 2786
2803 2787 # "constant" that convey the backup strategy.
2804 2788 # All set to `discard` if `no-backup` is set do avoid checking
2805 2789 # no_backup lower in the code.
2806 2790 # These values are ordered for comparison purposes
2807 2791 backupinteractive = 3 # do backup if interactively modified
2808 2792 backup = 2 # unconditionally do backup
2809 2793 check = 1 # check if the existing file differs from target
2810 2794 discard = 0 # never do backup
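# (added note) Because these values are ordered, the dispatch loop below can
# compare them: `backup <= dobackup` covers both `backup` and
# `backupinteractive`, while `check` backs up only when the file content
# actually differs from the target.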
2811 2795 if opts.get('no_backup'):
2812 2796 backupinteractive = backup = check = discard
2813 2797 if interactive:
2814 2798 dsmodifiedbackup = backupinteractive
2815 2799 else:
2816 2800 dsmodifiedbackup = backup
2817 2801 tobackup = set()
2818 2802
2819 2803 backupanddel = actions['remove']
2820 2804 if not opts.get('no_backup'):
2821 2805 backupanddel = actions['drop']
2822 2806
2823 2807 disptable = (
2824 2808 # dispatch table:
2825 2809 # file state
2826 2810 # action
2827 2811 # make backup
2828 2812
2829 2813 ## Sets whose entries will result in changes to files on disk
2830 2814 # Modified compared to target, no local change
2831 2815 (modified, actions['revert'], discard),
2832 2816 # Modified compared to target, but local file is deleted
2833 2817 (deleted, actions['revert'], discard),
2834 2818 # Modified compared to target, local change
2835 2819 (dsmodified, actions['revert'], dsmodifiedbackup),
2836 2820 # Added since target
2837 2821 (added, actions['remove'], discard),
2838 2822 # Added in working directory
2839 2823 (dsadded, actions['forget'], discard),
2840 2824 # Added since target, have local modification
2841 2825 (modadded, backupanddel, backup),
2842 2826 # Added since target but file is missing in working directory
2843 2827 (deladded, actions['drop'], discard),
2844 2828 # Removed since target, before working copy parent
2845 2829 (removed, actions['add'], discard),
2846 2830 # Same as `removed` but an unknown file exists at the same path
2847 2831 (removunk, actions['add'], check),
2848 2832 # Removed since target, marked as such in working copy parent
2849 2833 (dsremoved, actions['undelete'], discard),
2850 2834 # Same as `dsremoved` but an unknown file exists at the same path
2851 2835 (dsremovunk, actions['undelete'], check),
2852 2836 ## the following sets do not result in any file changes
2853 2837 # File with no modification
2854 2838 (clean, actions['noop'], discard),
2855 2839 # Existing file, not tracked anywhere
2856 2840 (unknown, actions['unknown'], discard),
2857 2841 )
2858 2842
2859 2843 for abs, (rel, exact) in sorted(names.items()):
2860 2844 # target file to be touched on disk (relative to cwd)
2861 2845 target = repo.wjoin(abs)
2862 2846 # search the entry in the dispatch table.
2863 2847 # if the file is in any of these sets, it was touched in the working
2864 2848 # directory parent and we are sure it needs to be reverted.
2865 2849 for table, (xlist, msg), dobackup in disptable:
2866 2850 if abs not in table:
2867 2851 continue
2868 2852 if xlist is not None:
2869 2853 xlist.append(abs)
2870 2854 if dobackup:
2871 2855 # If in interactive mode, don't automatically create
2872 2856 # .orig files (issue4793)
2873 2857 if dobackup == backupinteractive:
2874 2858 tobackup.add(abs)
2875 2859 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
2876 2860 bakname = scmutil.origpath(ui, repo, rel)
2877 2861 ui.note(_('saving current version of %s as %s\n') %
2878 2862 (rel, bakname))
2879 2863 if not opts.get('dry_run'):
2880 2864 if interactive:
2881 2865 util.copyfile(target, bakname)
2882 2866 else:
2883 2867 util.rename(target, bakname)
2884 2868 if ui.verbose or not exact:
2885 2869 if not isinstance(msg, bytes):
2886 2870 msg = msg(abs)
2887 2871 ui.status(msg % rel)
2888 2872 elif exact:
2889 2873 ui.warn(msg % rel)
2890 2874 break
2891 2875
2892 2876 if not opts.get('dry_run'):
2893 2877 needdata = ('revert', 'add', 'undelete')
2894 2878 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
2895 2879 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
2896 2880
2897 2881 if targetsubs:
2898 2882 # Revert the subrepos on the revert list
2899 2883 for sub in targetsubs:
2900 2884 try:
2901 2885 wctx.sub(sub).revert(ctx.substate[sub], *pats,
2902 2886 **pycompat.strkwargs(opts))
2903 2887 except KeyError:
2904 2888 raise error.Abort("subrepository '%s' does not exist in %s!"
2905 2889 % (sub, short(ctx.node())))
2906 2890
2907 2891 def _revertprefetch(repo, ctx, *files):
2908 2892 """Let extension changing the storage layer prefetch content"""
2909 2893
2910 2894 def _performrevert(repo, parents, ctx, actions, interactive=False,
2911 2895 tobackup=None):
2912 2896 """function that actually perform all the actions computed for revert
2913 2897
2914 2898 This is an independent function to let extension to plug in and react to
2915 2899 the imminent revert.
2916 2900
2917 2901 Make sure you have the working directory locked when calling this function.
2918 2902 """
2919 2903 parent, p2 = parents
2920 2904 node = ctx.node()
2921 2905 excluded_files = []
2922 2906 matcher_opts = {"exclude": excluded_files}
2923 2907
2924 2908 def checkout(f):
2925 2909 fc = ctx[f]
2926 2910 repo.wwrite(f, fc.data(), fc.flags())
2927 2911
2928 2912 def doremove(f):
2929 2913 try:
2930 2914 repo.wvfs.unlinkpath(f)
2931 2915 except OSError:
2932 2916 pass
2933 2917 repo.dirstate.remove(f)
2934 2918
2935 2919 audit_path = pathutil.pathauditor(repo.root, cached=True)
2936 2920 for f in actions['forget'][0]:
2937 2921 if interactive:
2938 2922 choice = repo.ui.promptchoice(
2939 2923 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
2940 2924 if choice == 0:
2941 2925 repo.dirstate.drop(f)
2942 2926 else:
2943 2927 excluded_files.append(repo.wjoin(f))
2944 2928 else:
2945 2929 repo.dirstate.drop(f)
2946 2930 for f in actions['remove'][0]:
2947 2931 audit_path(f)
2948 2932 if interactive:
2949 2933 choice = repo.ui.promptchoice(
2950 2934 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
2951 2935 if choice == 0:
2952 2936 doremove(f)
2953 2937 else:
2954 2938 excluded_files.append(repo.wjoin(f))
2955 2939 else:
2956 2940 doremove(f)
2957 2941 for f in actions['drop'][0]:
2958 2942 audit_path(f)
2959 2943 repo.dirstate.remove(f)
2960 2944
2961 2945 normal = None
2962 2946 if node == parent:
2963 2947 # We're reverting to our parent. If possible, we'd like status
2964 2948 # to report the file as clean. We have to use normallookup for
2965 2949 # merges to avoid losing information about merged/dirty files.
2966 2950 if p2 != nullid:
2967 2951 normal = repo.dirstate.normallookup
2968 2952 else:
2969 2953 normal = repo.dirstate.normal
2970 2954
2971 2955 newlyaddedandmodifiedfiles = set()
2972 2956 if interactive:
2973 2957 # Prompt the user for changes to revert
2974 2958 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
2975 2959 m = scmutil.match(ctx, torevert, matcher_opts)
2976 2960 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
2977 2961 diffopts.nodates = True
2978 2962 diffopts.git = True
2979 2963 operation = 'discard'
2980 2964 reversehunks = True
2981 2965 if node != parent:
2982 2966 operation = 'apply'
2983 2967 reversehunks = False
2984 2968 if reversehunks:
2985 2969 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
2986 2970 else:
2987 2971 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
2988 2972 originalchunks = patch.parsepatch(diff)
2989 2973
2990 2974 try:
2991 2975
2992 2976 chunks, opts = recordfilter(repo.ui, originalchunks,
2993 2977 operation=operation)
2994 2978 if reversehunks:
2995 2979 chunks = patch.reversehunks(chunks)
2996 2980
2997 2981 except error.PatchError as err:
2998 2982 raise error.Abort(_('error parsing patch: %s') % err)
2999 2983
3000 2984 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3001 2985 if tobackup is None:
3002 2986 tobackup = set()
3003 2987 # Apply changes
3004 2988 fp = stringio()
3005 2989 for c in chunks:
3006 2990 # Create a backup file only if this hunk should be backed up
3007 2991 if ishunk(c) and c.header.filename() in tobackup:
3008 2992 abs = c.header.filename()
3009 2993 target = repo.wjoin(abs)
3010 2994 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3011 2995 util.copyfile(target, bakname)
3012 2996 tobackup.remove(abs)
3013 2997 c.write(fp)
3014 2998 dopatch = fp.tell()
3015 2999 fp.seek(0)
3016 3000 if dopatch:
3017 3001 try:
3018 3002 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3019 3003 except error.PatchError as err:
3020 3004 raise error.Abort(str(err))
3021 3005 del fp
3022 3006 else:
3023 3007 for f in actions['revert'][0]:
3024 3008 checkout(f)
3025 3009 if normal:
3026 3010 normal(f)
3027 3011
3028 3012 for f in actions['add'][0]:
3029 3013 # Don't checkout modified files, they are already created by the diff
3030 3014 if f not in newlyaddedandmodifiedfiles:
3031 3015 checkout(f)
3032 3016 repo.dirstate.add(f)
3033 3017
3034 3018 normal = repo.dirstate.normallookup
3035 3019 if node == parent and p2 == nullid:
3036 3020 normal = repo.dirstate.normal
3037 3021 for f in actions['undelete'][0]:
3038 3022 checkout(f)
3039 3023 normal(f)
3040 3024
3041 3025 copied = copies.pathcopies(repo[parent], ctx)
3042 3026
3043 3027 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3044 3028 if f in copied:
3045 3029 repo.dirstate.copy(copied[f], f)
3046 3030
3047 3031 class command(registrar.command):
3048 3032 """deprecated: used registrar.command instead"""
3049 3033 def _doregister(self, func, name, *args, **kwargs):
3050 3034 func._deprecatedregistrar = True # flag for deprecwarn in extensions.py
3051 3035 return super(command, self)._doregister(func, name, *args, **kwargs)
3052 3036
3053 3037 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3054 3038 # commands.outgoing. "missing" is the "missing" attribute of the result
3055 3039 # of "findcommonoutgoing()"
3056 3040 outgoinghooks = util.hooks()
3057 3041
3058 3042 # a list of (ui, repo) functions called by commands.summary
3059 3043 summaryhooks = util.hooks()
3060 3044
3061 3045 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3062 3046 #
3063 3047 # if 'changes' is None, functions should return a tuple of booleans:
3064 3048 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3065 3049 #
3066 3050 # otherwise, 'changes' is a tuple of tuples below:
3067 3051 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3068 3052 # - (desturl, destbranch, destpeer, outgoing)
3069 3053 summaryremotehooks = util.hooks()
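# Illustrative sketch (editorial addition, not part of cmdutil.py): an
# extension could add a line to 'hg summary' output by registering a
# (ui, repo) callable on summaryhooks; the name 'fooext', the helper and the
# message below are hypothetical.
def _summaryfoo(ui, repo):
    # hypothetical hook body; a real hook would inspect the repo
    ui.status(_('foo: nothing to report\n'))
summaryhooks.add('fooext', _summaryfoo)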
3070 3054
3071 3055 # A list of state files kept by multistep operations like graft.
3072 3056 # Since graft cannot be aborted, it is considered 'clearable' by update.
3073 3057 # note: bisect is intentionally excluded
3074 3058 # (state file, clearable, allowcommit, error, hint)
3075 3059 unfinishedstates = [
3076 3060 ('graftstate', True, False, _('graft in progress'),
3077 3061 _("use 'hg graft --continue' or 'hg update' to abort")),
3078 3062 ('updatestate', True, False, _('last update was interrupted'),
3079 3063 _("use 'hg update' to get a consistent checkout"))
3080 3064 ]
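# Illustrative sketch (editorial addition, not part of cmdutil.py): an
# extension managing its own multistep state file could register it using the
# same (state file, clearable, allowcommit, error, hint) tuple shape;
# 'foostate' and the 'hg foo' command are hypothetical.
unfinishedstates.append(
    ('foostate', False, False, _('foo operation in progress'),
     _("use 'hg foo --continue' or 'hg foo --abort'")))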
3081 3065
3082 3066 def checkunfinished(repo, commit=False):
3083 3067 '''Look for an unfinished multistep operation, like graft, and abort
3084 3068 if found. It's probably good to check this right before
3085 3069 bailifchanged().
3086 3070 '''
3087 3071 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3088 3072 if commit and allowcommit:
3089 3073 continue
3090 3074 if repo.vfs.exists(f):
3091 3075 raise error.Abort(msg, hint=hint)
3092 3076
3093 3077 def clearunfinished(repo):
3094 3078 '''Check for unfinished operations (as above), and clear the ones
3095 3079 that are clearable.
3096 3080 '''
3097 3081 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3098 3082 if not clearable and repo.vfs.exists(f):
3099 3083 raise error.Abort(msg, hint=hint)
3100 3084 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3101 3085 if clearable and repo.vfs.exists(f):
3102 3086 util.unlink(repo.vfs.join(f))
3103 3087
3104 3088 afterresolvedstates = [
3105 3089 ('graftstate',
3106 3090 _('hg graft --continue')),
3107 3091 ]
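# Illustrative sketch (editorial addition, not part of cmdutil.py): the same
# hypothetical extension could advertise how to continue after 'hg resolve':
afterresolvedstates.append(
    ('foostate', _('hg foo --continue')))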
3108 3092
3109 3093 def howtocontinue(repo):
3110 3094 '''Check for an unfinished operation and return the command to finish
3111 3095 it.
3112 3096
3113 3097 afterresolvedstates tuples define a .hg/{file} and the corresponding
3114 3098 command needed to finish it.
3115 3099
3116 3100 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3117 3101 a boolean.
3118 3102 '''
3119 3103 contmsg = _("continue: %s")
3120 3104 for f, msg in afterresolvedstates:
3121 3105 if repo.vfs.exists(f):
3122 3106 return contmsg % msg, True
3123 3107 if repo[None].dirty(missing=True, merge=False, branch=False):
3124 3108 return contmsg % _("hg commit"), False
3125 3109 return None, None
3126 3110
3127 3111 def checkafterresolved(repo):
3128 3112 '''Inform the user about the next action after completing hg resolve
3129 3113
3130 3114 If there's a matching afterresolvedstates entry, howtocontinue will yield
3131 3115 repo.ui.warn as the reporter.
3132 3116
3133 3117 Otherwise, it will yield repo.ui.note.
3134 3118 '''
3135 3119 msg, warning = howtocontinue(repo)
3136 3120 if msg is not None:
3137 3121 if warning:
3138 3122 repo.ui.warn("%s\n" % msg)
3139 3123 else:
3140 3124 repo.ui.note("%s\n" % msg)
3141 3125
3142 3126 def wrongtooltocontinue(repo, task):
3143 3127 '''Raise an abort suggesting how to properly continue if there is an
3144 3128 active task.
3145 3129
3146 3130 Uses howtocontinue() to find the active task.
3147 3131
3148 3132 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3149 3133 a hint.
3150 3134 '''
3151 3135 after = howtocontinue(repo)
3152 3136 hint = None
3153 3137 if after[1]:
3154 3138 hint = after[0]
3155 3139 raise error.Abort(_('no %s in progress') % task, hint=hint)