Allow the user to specify the fallback encoding for the changelog...
Alexis S. L. Carvalho -
r3835:d1ce5461 default
@@ -1,512 +1,515
1 1 HGRC(5)
2 2 =======
3 3 Bryan O'Sullivan <bos@serpentine.com>
4 4
5 5 NAME
6 6 ----
7 7 hgrc - configuration files for Mercurial
8 8
9 9 SYNOPSIS
10 10 --------
11 11
12 12 The Mercurial system uses a set of configuration files to control
13 13 aspects of its behaviour.
14 14
15 15 FILES
16 16 -----
17 17
18 18 Mercurial reads configuration data from several files, if they exist.
19 19 The names of these files depend on the system on which Mercurial is
20 20 installed.
21 21
22 22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 23 (Unix) <install-root>/etc/mercurial/hgrc::
24 24 Per-installation configuration files, searched for in the
25 25 directory where Mercurial is installed. For example, if installed
26 26 in /shared/tools, Mercurial will look in
27 27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 28 all Mercurial commands executed by any user in any directory.
29 29
30 30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 31 (Unix) /etc/mercurial/hgrc::
32 32 (Windows) C:\Mercurial\Mercurial.ini::
33 33 Per-system configuration files, for the system on which Mercurial
34 34 is running. Options in these files apply to all Mercurial
35 35 commands executed by any user in any directory. Options in these
36 36 files override per-installation options.
37 37
38 38 (Unix) $HOME/.hgrc::
39 39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 40 (Windows) $HOME\Mercurial.ini::
41 41 Per-user configuration file, for the user running Mercurial.
42 42 Options in this file apply to all Mercurial commands executed by
43 43     this user in any directory.  Options in this file override
44 44 per-installation and per-system options.
45 45     On Windows systems, exactly one of these files is used,
46 46     depending on whether the HOME environment variable is defined.
47 47
48 48 (Unix, Windows) <repo>/.hg/hgrc::
49 49 Per-repository configuration options that only apply in a
50 50 particular repository. This file is not version-controlled, and
51 51 will not get transferred during a "clone" operation. Options in
52 52 this file override options in all other configuration files.
53 53 On Unix, most of this file will be ignored if it doesn't belong
54 54 to a trusted user or to a trusted group. See the documentation
55 55 for the trusted section below for more details.
56 56
57 57 SYNTAX
58 58 ------
59 59
60 60 A configuration file consists of sections, led by a "[section]" header
61 61 and followed by "name: value" entries; "name=value" is also accepted.
62 62
63 63 [spam]
64 64 eggs=ham
65 65 green=
66 66 eggs
67 67
68 68 Each line contains one entry. If the lines that follow are indented,
69 69 they are treated as continuations of that entry.
70 70
71 71 Leading whitespace is removed from values. Empty lines are skipped.
72 72
73 73 The optional values can contain format strings which refer to other
74 74 values in the same section, or values in a special DEFAULT section.
75 75
76 76 Lines beginning with "#" or ";" are ignored and may be used to provide
77 77 comments.
78 78
79 79 SECTIONS
80 80 --------
81 81
82 82 This section describes the different sections that may appear in a
83 83 Mercurial "hgrc" file, the purpose of each section, its possible
84 84 keys, and their possible values.
85 85
86 86 decode/encode::
87 87 Filters for transforming files on checkout/checkin. This would
88 88 typically be used for newline processing or other
89 89 localization/canonicalization of files.
90 90
91 91 Filters consist of a filter pattern followed by a filter command.
92 92 Filter patterns are globs by default, rooted at the repository
93 93 root. For example, to match any file ending in ".txt" in the root
94 94 directory only, use the pattern "*.txt". To match any file ending
95 95 in ".c" anywhere in the repository, use the pattern "**.c".
96 96
97 97 The filter command can start with a specifier, either "pipe:" or
98 98 "tempfile:". If no specifier is given, "pipe:" is used by default.
99 99
100 100 A "pipe:" command must accept data on stdin and return the
101 101 transformed data on stdout.
102 102
103 103 Pipe example:
104 104
105 105 [encode]
106 106 # uncompress gzip files on checkin to improve delta compression
107 107 # note: not necessarily a good idea, just an example
108 108 *.gz = pipe: gunzip
109 109
110 110 [decode]
111 111 # recompress gzip files when writing them to the working dir (we
112 112 # can safely omit "pipe:", because it's the default)
113 113 *.gz = gzip
114 114
115 115 A "tempfile:" command is a template. The string INFILE is replaced
116 116 with the name of a temporary file that contains the data to be
117 117 filtered by the command. The string OUTFILE is replaced with the
118 118 name of an empty temporary file, where the filtered data must be
119 119 written by the command.
120 120
121 121 NOTE: the tempfile mechanism is recommended for Windows systems,
122 122 where the standard shell I/O redirection operators often have
123 123 strange effects. In particular, if you are doing line ending
124 124 conversion on Windows using the popular dos2unix and unix2dos
125 125 programs, you *must* use the tempfile mechanism, as using pipes will
126 126 corrupt the contents of your files.
127 127
128 128 Tempfile example:
129 129
130 130 [encode]
131 131 # convert files to unix line ending conventions on checkin
132 132 **.txt = tempfile: dos2unix -n INFILE OUTFILE
133 133
134 134 [decode]
135 135 # convert files to windows line ending conventions when writing
136 136 # them to the working dir
137 137 **.txt = tempfile: unix2dos -n INFILE OUTFILE
138 138
139 139 defaults::
140 140 Use the [defaults] section to define command defaults, i.e. the
141 141 default options/arguments to pass to the specified commands.
142 142
143 143 The following example makes 'hg log' run in verbose mode, and
144 144 'hg status' show only the modified files, by default.
145 145
146 146 [defaults]
147 147 log = -v
148 148 status = -m
149 149
150 150 The actual commands, instead of their aliases, must be used when
151 151 defining command defaults. The command defaults will also be
152 152 applied to the aliases of the commands defined.
153 153
154 154 email::
155 155 Settings for extensions that send email messages.
156 156 from;;
157 157 Optional. Email address to use in "From" header and SMTP envelope
158 158 of outgoing messages.
159 159 to;;
160 160 Optional. Comma-separated list of recipients' email addresses.
161 161 cc;;
162 162 Optional. Comma-separated list of carbon copy recipients'
163 163 email addresses.
164 164 bcc;;
165 165 Optional. Comma-separated list of blind carbon copy
166 166 recipients' email addresses. Cannot be set interactively.
167 167 method;;
168 168 Optional. Method to use to send email messages. If value is
169 169 "smtp" (default), use SMTP (see section "[smtp]" for
170 170 configuration). Otherwise, use as name of program to run that
171 171 acts like sendmail (takes "-f" option for sender, list of
172 172 recipients on command line, message on stdin). Normally, setting
173 173 this to "sendmail" or "/usr/sbin/sendmail" is enough to use
174 174 sendmail to send messages.
175 175
176 176 Email example:
177 177
178 178 [email]
179 179 from = Joseph User <joe.user@example.com>
180 180 method = /usr/sbin/sendmail
181 181
182 182 extensions::
183 183 Mercurial has an extension mechanism for adding new features. To
184 184 enable an extension, create an entry for it in this section.
185 185
186 186 If you know that the extension is already in Python's search path,
187 187 you can give the name of the module, followed by "=", with nothing
188 188 after the "=".
189 189
190 190 Otherwise, give a name that you choose, followed by "=", followed by
191 191 the path to the ".py" file (including the file name extension) that
192 192 defines the extension.
193 193
194 194 Example for ~/.hgrc:
195 195
196 196 [extensions]
197 197 # (the mq extension will get loaded from mercurial's path)
198 198 hgext.mq =
199 199 # (this extension will get loaded from the file specified)
200 200 myfeature = ~/.hgext/myfeature.py
201 201
202 202 hooks::
203 203 Commands or Python functions that get automatically executed by
204 204 various actions such as starting or finishing a commit. Multiple
205 205 hooks can be run for the same action by appending a suffix to the
206 206 action. Overriding a site-wide hook can be done by changing its
207 207 value or setting it to an empty string.
208 208
209 209 Example .hg/hgrc:
210 210
211 211 [hooks]
212 212 # do not use the site-wide hook
213 213 incoming =
214 214 incoming.email = /my/email/hook
215 215 incoming.autobuild = /my/build/hook
216 216
217 217 Most hooks are run with environment variables set that give added
218 218 useful information. For each hook below, the environment variables
219 219 it is passed are listed with names of the form "$HG_foo".
220 220
221 221 changegroup;;
222 222 Run after a changegroup has been added via push, pull or
223 223 unbundle. ID of the first new changeset is in $HG_NODE. URL from
224 224 which changes came is in $HG_URL.
225 225 commit;;
226 226 Run after a changeset has been created in the local repository.
227 227 ID of the newly created changeset is in $HG_NODE. Parent
228 228 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
229 229 incoming;;
230 230 Run after a changeset has been pulled, pushed, or unbundled into
231 231 the local repository. The ID of the newly arrived changeset is in
232 232     $HG_NODE.  URL that was the source of the changes is in $HG_URL.
233 233 outgoing;;
234 234 Run after sending changes from local repository to another. ID of
235 235 first changeset sent is in $HG_NODE. Source of operation is in
236 236 $HG_SOURCE; see "preoutgoing" hook for description.
237 237 prechangegroup;;
238 238 Run before a changegroup is added via push, pull or unbundle.
239 239 Exit status 0 allows the changegroup to proceed. Non-zero status
240 240 will cause the push, pull or unbundle to fail. URL from which
241 241 changes will come is in $HG_URL.
242 242 precommit;;
243 243 Run before starting a local commit. Exit status 0 allows the
244 244 commit to proceed. Non-zero status will cause the commit to fail.
245 245 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
246 246 preoutgoing;;
247 247 Run before computing changes to send from the local repository to
248 248 another. Non-zero status will cause failure. This lets you
249 249     prevent pull over http or ssh.  It also runs before local pull,
250 250     push (outbound) and bundle commands, but is not effective there,
251 251     since you can simply copy files instead.  Source of operation is in
252 252 $HG_SOURCE. If "serve", operation is happening on behalf of
253 253 remote ssh or http repository. If "push", "pull" or "bundle",
254 254 operation is happening on behalf of repository on same system.
255 255 pretag;;
256 256 Run before creating a tag. Exit status 0 allows the tag to be
257 257 created. Non-zero status will cause the tag to fail. ID of
258 258 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
259 259 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
260 260 pretxnchangegroup;;
261 261 Run after a changegroup has been added via push, pull or unbundle,
262 262 but before the transaction has been committed. Changegroup is
263 263 visible to hook program. This lets you validate incoming changes
264 264 before accepting them. Passed the ID of the first new changeset
265 265 in $HG_NODE. Exit status 0 allows the transaction to commit.
266 266 Non-zero status will cause the transaction to be rolled back and
267 267 the push, pull or unbundle will fail. URL that was source of
268 268 changes is in $HG_URL.
269 269 pretxncommit;;
270 270 Run after a changeset has been created but the transaction not yet
271 271 committed. Changeset is visible to hook program. This lets you
272 272 validate commit message and changes. Exit status 0 allows the
273 273 commit to proceed. Non-zero status will cause the transaction to
274 274 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
275 275 IDs are in $HG_PARENT1 and $HG_PARENT2.
276 276 preupdate;;
277 277 Run before updating the working directory. Exit status 0 allows
278 278 the update to proceed. Non-zero status will prevent the update.
279 279 Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
280 280 of second new parent is in $HG_PARENT2.
281 281 tag;;
282 282 Run after a tag is created. ID of tagged changeset is in
283 283 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
284 284 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
285 285 update;;
286 286 Run after updating the working directory. Changeset ID of first
287 287 new parent is in $HG_PARENT1. If merge, ID of second new parent
288 288 is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
289 289     failed (e.g. because conflicts were not resolved), $HG_ERROR=1.
290 290
291 291 Note: In earlier releases, the names of hook environment variables
292 292 did not have a "HG_" prefix. The old unprefixed names are no longer
293 293 provided in the environment.
294 294
295 295 The syntax for Python hooks is as follows:
296 296
297 297 hookname = python:modulename.submodule.callable
298 298
299 299 Python hooks are run within the Mercurial process. Each hook is
300 300 called with at least three keyword arguments: a ui object (keyword
301 301 "ui"), a repository object (keyword "repo"), and a "hooktype"
302 302 keyword that tells what kind of hook is used. Arguments listed as
303 303 environment variables above are passed as keyword arguments, with no
304 304 "HG_" prefix, and names in lower case.
305 305
306 306 If a Python hook returns a "true" value or raises an exception, this
307 307 is treated as failure of the hook.
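  As a concrete illustration of the calling convention above, here is a
  minimal sketch of a Python hook module (the module name, the function
  names, and the ten-character threshold are all made up for the example):

```python
# myhooks.py -- hypothetical module; put it on Python's search path.

def message_too_short(message, minlen=10):
    # Pure helper, so the policy can be tested in isolation.
    return len(message.strip()) < minlen

def checkmessage(ui, repo, hooktype, node=None, **kwargs):
    # Mercurial passes ui, repo and hooktype as keyword arguments;
    # hook-specific values such as HG_NODE arrive as lower-case
    # keywords without the "HG_" prefix.  Returning a true value
    # marks the hook as failed.
    ctx = repo.changectx(node)
    if message_too_short(ctx.description()):
        ui.warn('commit message too short\n')
        return True
    return False
```

  Wired up as, e.g., "pretxncommit.checkmsg = python:myhooks.checkmessage"
  in the [hooks] section, a failing hook would roll the transaction back.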
308 308
309 309 http_proxy::
310 310   Used to access web-based Mercurial repositories through an HTTP
311 311 proxy.
312 312 host;;
313 313 Host name and (optional) port of the proxy server, for example
314 314 "myproxy:8000".
315 315 no;;
316 316 Optional. Comma-separated list of host names that should bypass
317 317 the proxy.
318 318 passwd;;
319 319 Optional. Password to authenticate with at the proxy server.
320 320 user;;
321 321 Optional. User name to authenticate with at the proxy server.
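
  A typical setup might look like this (host name and credentials are
  placeholders):

    [http_proxy]
    host = myproxy:8000
    no = localhost,intranet.example.com
    user = alice
    passwd = secret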
322 322
323 323 smtp::
324 324 Configuration for extensions that need to send email messages.
325 325 host;;
326 326 Host name of mail server, e.g. "mail.example.com".
327 327 port;;
328 328 Optional. Port to connect to on mail server. Default: 25.
329 329 tls;;
330 330 Optional. Whether to connect to mail server using TLS. True or
331 331 False. Default: False.
332 332 username;;
333 333 Optional. User name to authenticate to SMTP server with.
334 334 If username is specified, password must also be specified.
335 335 Default: none.
336 336 password;;
337 337 Optional. Password to authenticate to SMTP server with.
338 338 If username is specified, password must also be specified.
339 339 Default: none.
340 340 local_hostname;;
341 341     Optional.  Hostname that the sender can use to identify itself
342 342     to the MTA.
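
  Example (server name and credentials are placeholders):

    [smtp]
    host = mail.example.com
    port = 25
    tls = True
    username = joe
    password = secret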
343 343
344 344 paths::
345 345 Assigns symbolic names to repositories. The left side is the
346 346 symbolic name, and the right gives the directory or URL that is the
347 347 location of the repository. Default paths can be declared by
348 348 setting the following entries.
349 349 default;;
350 350 Directory or URL to use when pulling if no source is specified.
351 351 Default is set to repository from which the current repository
352 352 was cloned.
353 353 default-push;;
354 354 Optional. Directory or URL to use when pushing if no destination
355 355 is specified.
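
  Example (URLs are placeholders).  Besides the two reserved names, any
  symbolic name defined here can be passed to commands like "hg pull" in
  place of a URL:

    [paths]
    default = http://hg.example.com/main
    default-push = ssh://hg@example.com/main
    stable = http://hg.example.com/stable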
356 356
357 357 server::
358 358 Controls generic server settings.
359 359 uncompressed;;
360 360 Whether to allow clients to clone a repo using the uncompressed
361 361 streaming protocol. This transfers about 40% more data than a
362 362 regular clone, but uses less memory and CPU on both server and
363 363 client. Over a LAN (100Mbps or better) or a very fast WAN, an
364 364 uncompressed streaming clone is a lot faster (~10x) than a regular
365 365 clone. Over most WAN connections (anything slower than about
366 366 6Mbps), uncompressed streaming is slower, because of the extra
367 367 data transfer overhead. Default is False.
368 368
369 369 trusted::
370 370 For security reasons, Mercurial will not use the settings in
371 371 the .hg/hgrc file from a repository if it doesn't belong to a
372 372 trusted user or to a trusted group. The main exception is the
373 373 web interface, which automatically uses some safe settings, since
374 374 it's common to serve repositories from different users.
375 375
376 376 This section specifies what users and groups are trusted. The
377 377 current user is always trusted. To trust everybody, list a user
378 378 or a group with name "*".
379 379
380 380 users;;
381 381 Comma-separated list of trusted users.
382 382 groups;;
383 383 Comma-separated list of trusted groups.
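
  Example:

    [trusted]
    # trust these users and this group when reading .hg/hgrc files
    users = alice, bob
    groups = hgusers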
384 384
385 385 ui::
386 386 User interface controls.
387 387 debug;;
388 388 Print debugging information. True or False. Default is False.
389 389 editor;;
390 390 The editor to use during a commit. Default is $EDITOR or "vi".
391 fallbackencoding;;
392 Encoding to try if it's not possible to decode the changelog using
393 UTF-8. Default is ISO-8859-1.
391 394 ignore;;
392 395 A file to read per-user ignore patterns from. This file should be in
393 396 the same format as a repository-wide .hgignore file. This option
394 397 supports hook syntax, so if you want to specify multiple ignore
395 398 files, you can do so by setting something like
396 399 "ignore.other = ~/.hgignore2". For details of the ignore file
397 400 format, see the hgignore(5) man page.
398 401 interactive;;
399 402     Allow prompting the user.  True or False.  Default is True.
400 403 logtemplate;;
401 404 Template string for commands that print changesets.
402 405 style;;
403 406 Name of style to use for command output.
404 407 merge;;
405 408 The conflict resolution program to use during a manual merge.
406 409 Default is "hgmerge".
407 410 quiet;;
408 411 Reduce the amount of output printed. True or False. Default is False.
409 412 remotecmd;;
410 413     Remote command to use for clone/push/pull operations.  Default is 'hg'.
411 414 ssh;;
412 415     Command to use for SSH connections.  Default is 'ssh'.
413 416 strict;;
414 417 Require exact command names, instead of allowing unambiguous
415 418 abbreviations. True or False. Default is False.
416 419 timeout;;
417 420 The timeout used when a lock is held (in seconds), a negative value
418 421 means no timeout. Default is 600.
419 422 username;;
420 423 The committer of a changeset created when running "commit".
421 424 Typically a person's name and email address, e.g. "Fred Widget
422 425 <fred@example.com>". Default is $EMAIL or username@hostname.
423 426 verbose;;
424 427 Increase the amount of output printed. True or False. Default is False.
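
  For instance, a user whose older changesets were committed in a
  Latin-2 locale might use (values are illustrative):

    [ui]
    username = Fred Widget <fred@example.com>
    fallbackencoding = ISO-8859-2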
425 428
426 429
427 430 web::
428 431 Web interface configuration.
429 432 accesslog;;
430 433 Where to output the access log. Default is stdout.
431 434 address;;
432 435 Interface address to bind to. Default is all.
433 436 allow_archive;;
434 437     List of archive formats (bz2, gz, zip) allowed for downloading.
435 438 Default is empty.
436 439 allowbz2;;
437 440 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
438 441 Default is false.
439 442 allowgz;;
440 443 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
441 444 Default is false.
442 445 allowpull;;
443 446 Whether to allow pulling from the repository. Default is true.
444 447 allow_push;;
445 448 Whether to allow pushing to the repository. If empty or not set,
446 449 push is not allowed. If the special value "*", any remote user
447 450 can push, including unauthenticated users. Otherwise, the remote
448 451 user must have been authenticated, and the authenticated user name
449 452 must be present in this list (separated by whitespace or ",").
450 453 The contents of the allow_push list are examined after the
451 454 deny_push list.
452 455 allowzip;;
453 456 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
454 457 Default is false. This feature creates temporary files.
455 458 baseurl;;
456 459 Base URL to use when publishing URLs in other locations, so
457 460 third-party tools like email notification hooks can construct URLs.
458 461 Example: "http://hgserver/repos/"
459 462 contact;;
460 463 Name or email address of the person in charge of the repository.
461 464 Default is "unknown".
462 465 deny_push;;
463 466 Whether to deny pushing to the repository. If empty or not set,
464 467 push is not denied. If the special value "*", all remote users
465 468 are denied push. Otherwise, unauthenticated users are all denied,
466 469 and any authenticated user name present in this list (separated by
467 470 whitespace or ",") is also denied. The contents of the deny_push
468 471 list are examined before the allow_push list.
469 472 description;;
470 473 Textual description of the repository's purpose or contents.
471 474 Default is "unknown".
472 475 errorlog;;
473 476 Where to output the error log. Default is stderr.
474 477 ipv6;;
475 478 Whether to use IPv6. Default is false.
476 479 name;;
477 480 Repository name to use in the web interface. Default is current
478 481 working directory.
479 482 maxchanges;;
480 483 Maximum number of changes to list on the changelog. Default is 10.
481 484 maxfiles;;
482 485 Maximum number of files to list per changeset. Default is 10.
483 486 port;;
484 487 Port to listen on. Default is 8000.
485 488 push_ssl;;
486 489 Whether to require that inbound pushes be transported over SSL to
487 490 prevent password sniffing. Default is true.
488 491 stripes;;
489 492 How many lines a "zebra stripe" should span in multiline output.
490 493 Default is 1; set to 0 to disable.
491 494 style;;
492 495 Which template map style to use.
493 496 templates;;
494 497 Where to find the HTML templates. Default is install path.
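
  Example (values are illustrative):

    [web]
    name = main
    description = Main development repository
    contact = Fred Widget <fred@example.com>
    allow_archive = gz, zip
    push_ssl = true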
495 498
496 499
497 500 AUTHOR
498 501 ------
499 502 Bryan O'Sullivan <bos@serpentine.com>.
500 503
501 504 Mercurial was written by Matt Mackall <mpm@selenic.com>.
502 505
503 506 SEE ALSO
504 507 --------
505 508 hg(1), hgignore(5)
506 509
507 510 COPYING
508 511 -------
509 512 This manual page is copyright 2005 Bryan O'Sullivan.
510 513 Mercurial is copyright 2005, 2006 Matt Mackall.
511 514 Free use of this software is granted under the terms of the GNU General
512 515 Public License (GPL).
@@ -1,1931 +1,1935
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import *
9 9 from i18n import gettext as _
10 10 from demandload import *
11 11 import repo
12 12 demandload(globals(), "appendfile changegroup")
13 13 demandload(globals(), "changelog dirstate filelog manifest context")
14 14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 15 demandload(globals(), "os revlog time util")
16 16
17 17 class localrepository(repo.repository):
18 18 capabilities = ('lookup', 'changegroupsubset')
19 19
20 20 def __del__(self):
21 21 self.transhandle = None
22 22 def __init__(self, parentui, path=None, create=0):
23 23 repo.repository.__init__(self)
24 24 if not path:
25 25 p = os.getcwd()
26 26 while not os.path.isdir(os.path.join(p, ".hg")):
27 27 oldp = p
28 28 p = os.path.dirname(p)
29 29 if p == oldp:
30 30 raise repo.RepoError(_("There is no Mercurial repository"
31 31 " here (.hg not found)"))
32 32 path = p
33 33 self.path = os.path.join(path, ".hg")
34 34 self.spath = self.path
35 35
36 36 if not os.path.isdir(self.path):
37 37 if create:
38 38 if not os.path.exists(path):
39 39 os.mkdir(path)
40 40 os.mkdir(self.path)
41 41 if self.spath != self.path:
42 42 os.mkdir(self.spath)
43 43 else:
44 44 raise repo.RepoError(_("repository %s not found") % path)
45 45 elif create:
46 46 raise repo.RepoError(_("repository %s already exists") % path)
47 47
48 48 self.root = os.path.realpath(path)
49 49 self.origroot = path
50 50 self.ui = ui.ui(parentui=parentui)
51 51 self.opener = util.opener(self.path)
52 52 self.sopener = util.opener(self.spath)
53 53 self.wopener = util.opener(self.root)
54 54
55 55 try:
56 56 self.ui.readconfig(self.join("hgrc"), self.root)
57 57 except IOError:
58 58 pass
59 59
60 60 v = self.ui.configrevlog()
61 61 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
62 62 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
63 63 fl = v.get('flags', None)
64 64 flags = 0
65 65 if fl != None:
66 66 for x in fl.split():
67 67 flags |= revlog.flagstr(x)
68 68 elif self.revlogv1:
69 69 flags = revlog.REVLOG_DEFAULT_FLAGS
70 70
71 71 v = self.revlogversion | flags
72 72 self.manifest = manifest.manifest(self.sopener, v)
73 73 self.changelog = changelog.changelog(self.sopener, v)
74 74
75 fallback = self.ui.config('ui', 'fallbackencoding')
76 if fallback:
77 util._fallbackencoding = fallback
78
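The three added lines above publish the configured fallback through
util._fallbackencoding.  The decoding strategy they enable can be
sketched in isolation like this (the helper name and the 'replace'
error handler are illustrative assumptions, not the actual util code):

```python
def decode_with_fallback(raw, fallback='ISO-8859-1'):
    # Try UTF-8 first; if the changelog bytes are not valid UTF-8,
    # retry with the encoding configured as ui.fallbackencoding.
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode(fallback, 'replace')
```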
75 79 # the changelog might not have the inline index flag
76 80 # on. If the format of the changelog is the same as found in
77 81 # .hgrc, apply any flags found in the .hgrc as well.
78 82 # Otherwise, just version from the changelog
79 83 v = self.changelog.version
80 84 if v == self.revlogversion:
81 85 v |= flags
82 86 self.revlogversion = v
83 87
84 88 self.tagscache = None
85 89 self.branchcache = None
86 90 self.nodetagscache = None
87 91 self.encodepats = None
88 92 self.decodepats = None
89 93 self.transhandle = None
90 94
91 95 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
92 96
93 97 def url(self):
94 98 return 'file:' + self.root
95 99
96 100 def hook(self, name, throw=False, **args):
97 101 def callhook(hname, funcname):
98 102 '''call python hook. hook is callable object, looked up as
99 103 name in python module. if callable returns "true", hook
100 104 fails, else passes. if hook raises exception, treated as
101 105 hook failure. exception propagates if throw is "true".
102 106
103 107 reason for "true" meaning "hook failed" is so that
104 108 unmodified commands (e.g. mercurial.commands.update) can
105 109 be run as hooks without wrappers to convert return values.'''
106 110
107 111 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
108 112 d = funcname.rfind('.')
109 113 if d == -1:
110 114 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
111 115 % (hname, funcname))
112 116 modname = funcname[:d]
113 117 try:
114 118 obj = __import__(modname)
115 119 except ImportError:
116 120 try:
117 121 # extensions are loaded with hgext_ prefix
118 122 obj = __import__("hgext_%s" % modname)
119 123 except ImportError:
120 124 raise util.Abort(_('%s hook is invalid '
121 125 '(import of "%s" failed)') %
122 126 (hname, modname))
123 127 try:
124 128 for p in funcname.split('.')[1:]:
125 129 obj = getattr(obj, p)
126 130 except AttributeError, err:
127 131 raise util.Abort(_('%s hook is invalid '
128 132 '("%s" is not defined)') %
129 133 (hname, funcname))
130 134 if not callable(obj):
131 135 raise util.Abort(_('%s hook is invalid '
132 136 '("%s" is not callable)') %
133 137 (hname, funcname))
134 138 try:
135 139 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
136 140 except (KeyboardInterrupt, util.SignalInterrupt):
137 141 raise
138 142 except Exception, exc:
139 143 if isinstance(exc, util.Abort):
140 144 self.ui.warn(_('error: %s hook failed: %s\n') %
141 145 (hname, exc.args[0]))
142 146 else:
143 147 self.ui.warn(_('error: %s hook raised an exception: '
144 148 '%s\n') % (hname, exc))
145 149 if throw:
146 150 raise
147 151 self.ui.print_exc()
148 152 return True
149 153 if r:
150 154 if throw:
151 155 raise util.Abort(_('%s hook failed') % hname)
152 156 self.ui.warn(_('warning: %s hook failed\n') % hname)
153 157 return r
154 158
155 159 def runhook(name, cmd):
156 160 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
157 161 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
158 162 r = util.system(cmd, environ=env, cwd=self.root)
159 163 if r:
160 164 desc, r = util.explain_exit(r)
161 165 if throw:
162 166 raise util.Abort(_('%s hook %s') % (name, desc))
163 167 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
164 168 return r
165 169
166 170 r = False
167 171 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
168 172 if hname.split(".", 1)[0] == name and cmd]
169 173 hooks.sort()
170 174 for hname, cmd in hooks:
171 175 if cmd.startswith('python:'):
172 176 r = callhook(hname, cmd[7:].strip()) or r
173 177 else:
174 178 r = runhook(hname, cmd) or r
175 179 return r
176 180
177 181 tag_disallowed = ':\r\n'
178 182
179 183 def tag(self, name, node, message, local, user, date):
180 184 '''tag a revision with a symbolic name.
181 185
182 186 if local is True, the tag is stored in a per-repository file.
183 187 otherwise, it is stored in the .hgtags file, and a new
184 188 changeset is committed with the change.
185 189
186 190 keyword arguments:
187 191
188 192 local: whether to store tag in non-version-controlled file
189 193 (default False)
190 194
191 195 message: commit message to use if committing
192 196
193 197 user: name of user to use if committing
194 198
195 199 date: date tuple to use if committing'''
196 200
197 201 for c in self.tag_disallowed:
198 202 if c in name:
199 203 raise util.Abort(_('%r cannot be used in a tag name') % c)
200 204
201 205 self.hook('pretag', throw=True, node=hex(node), tag=name, local=local)
202 206
203 207 if local:
204 208 # local tags are stored in the current charset
205 209 self.opener('localtags', 'a').write('%s %s\n' % (hex(node), name))
206 210 self.hook('tag', node=hex(node), tag=name, local=local)
207 211 return
208 212
209 213 for x in self.status()[:5]:
210 214 if '.hgtags' in x:
211 215 raise util.Abort(_('working copy of .hgtags is changed '
212 216 '(please commit .hgtags manually)'))
213 217
214 218 # committed tags are stored in UTF-8
215 219 line = '%s %s\n' % (hex(node), util.fromlocal(name))
216 220 self.wfile('.hgtags', 'ab').write(line)
217 221 if self.dirstate.state('.hgtags') == '?':
218 222 self.add(['.hgtags'])
219 223
220 224 self.commit(['.hgtags'], message, user, date)
221 225 self.hook('tag', node=hex(node), tag=name, local=local)
222 226
223 227 def tags(self):
224 228 '''return a mapping of tag to node'''
225 229 if not self.tagscache:
226 230 self.tagscache = {}
227 231
228 232 def parsetag(line, context):
229 233 if not line:
230 234 return
231 235                 s = line.split(" ", 1)
                if len(s) != 2:
                    self.ui.warn(_("%s: cannot parse entry\n") % context)
                    return
                node, key = s
                key = util.tolocal(key.strip()) # stored in UTF-8
                try:
                    bin_n = bin(node)
                except TypeError:
                    self.ui.warn(_("%s: node '%s' is not well formed\n") %
                                 (context, node))
                    return
                if bin_n not in self.changelog.nodemap:
                    self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
                                 (context, key))
                    return
                self.tagscache[key] = bin_n

            # read the tags file from each head, ending with the tip,
            # and add each tag found to the map, with "newer" ones
            # taking precedence
            f = None
            for rev, node, fnode in self._hgtagsnodes():
                f = (f and f.filectx(fnode) or
                     self.filectx('.hgtags', fileid=fnode))
                count = 0
                for l in f.data().splitlines():
                    count += 1
                    parsetag(l, _("%s, line %d") % (str(f), count))

            try:
                f = self.opener("localtags")
                count = 0
                for l in f:
                    # localtags are stored in the local character set
                    # while the internal tag table is stored in UTF-8
                    l = util.fromlocal(l)
                    count += 1
                    parsetag(l, _("localtags, line %d") % count)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

    def _hgtagsnodes(self):
        heads = self.heads()
        heads.reverse()
        last = {}
        ret = []
        for node in heads:
            c = self.changectx(node)
            rev = c.rev()
            try:
                fnode = c.filenode('.hgtags')
            except repo.LookupError:
                continue
            ret.append((rev, node, fnode))
            if fnode in last:
                ret[last[fnode]] = None
            last[fnode] = len(ret) - 1
        return [item for item in ret if item]

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def _branchtags(self):
        partial, last, lrev = self._readbranchcache()

        tiprev = self.changelog.count() - 1
        if lrev != tiprev:
            self._updatebranchcache(partial, lrev+1, tiprev+1)
            self._writebranchcache(partial, self.changelog.tip(), tiprev)

        return partial

    def branchtags(self):
        if self.branchcache is not None:
            return self.branchcache

        self.branchcache = {} # avoid recursion in changectx
        partial = self._branchtags()

        # the branch cache is stored on disk as UTF-8, but in the local
        # charset internally
        for k, v in partial.items():
            self.branchcache[util.tolocal(k)] = v
        return self.branchcache

    def _readbranchcache(self):
        partial = {}
        try:
            f = self.opener("branches.cache")
            lines = f.read().split('\n')
            f.close()
            last, lrev = lines.pop(0).rstrip().split(" ", 1)
            last, lrev = bin(last), int(lrev)
            if not (lrev < self.changelog.count() and
                    self.changelog.node(lrev) == last): # sanity check
                # invalidate the cache
                raise ValueError('Invalid branch cache: unknown tip')
            for l in lines:
                if not l: continue
                node, label = l.rstrip().split(" ", 1)
                partial[label] = bin(node)
        except (KeyboardInterrupt, util.SignalInterrupt):
            raise
        except Exception, inst:
            if self.ui.debugflag:
                self.ui.warn(str(inst), '\n')
            partial, last, lrev = {}, nullid, nullrev
        return partial, last, lrev

    def _writebranchcache(self, branches, tip, tiprev):
        try:
            f = self.opener("branches.cache", "w")
            f.write("%s %s\n" % (hex(tip), tiprev))
            for label, node in branches.iteritems():
                f.write("%s %s\n" % (hex(node), label))
        except IOError:
            pass

    def _updatebranchcache(self, partial, start, end):
        for r in xrange(start, end):
            c = self.changectx(r)
            b = c.branch()
            if b:
                partial[b] = c.node()

    def lookup(self, key):
        if key == '.':
            key = self.dirstate.parents()[0]
            if key == nullid:
                raise repo.RepoError(_("no revision checked out"))
        elif key == 'null':
            return nullid
        n = self.changelog._match(key)
        if n:
            return n
        if key in self.tags():
            return self.tags()[key]
        if key in self.branchtags():
            return self.branchtags()[key]
        n = self.changelog._partialmatch(key)
        if n:
            return n
        raise repo.RepoError(_("unknown revision '%s'") % key)

    def dev(self):
        return os.lstat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def sjoin(self, f):
        return os.path.join(self.spath, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f, self.revlogversion)

    def changectx(self, changeid=None):
        return context.changectx(self, changeid)

    def workingctx(self):
        return context.workingctx(self)

    def parents(self, changeid=None):
        '''
        get list of changectxs for parents of changeid or working directory
        '''
        if changeid is None:
            pl = self.dirstate.parents()
        else:
            n = self.changelog.lookup(changeid)
            pl = self.changelog.parents(n)
        if pl[1] == nullid:
            return [self.changectx(pl[0])]
        return [self.changectx(pl[0]), self.changectx(pl[1])]

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats is None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats is None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        tr = self.transhandle
        if tr is not None and tr.running():
            return tr.nest()

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        renames = [(self.sjoin("journal"), self.sjoin("undo")),
                   (self.join("journal.dirstate"), self.join("undo.dirstate"))]
        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(renames))
        self.transhandle = tr
        return tr

    def recover(self):
        l = self.lock()
        if os.path.exists(self.sjoin("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("journal"))
            self.reload()
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def rollback(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.sjoin("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.reload()
            self.wreload()
        else:
            self.ui.warn(_("no rollback information available\n"))

    def wreload(self):
        self.dirstate.read()

    def reload(self):
        self.changelog.load()
        self.manifest.load()
        self.tagscache = None
        self.nodetagscache = None

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
                desc=None):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except lock.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock(self.sjoin("lock"), wait, acquirefn=self.reload,
                            desc=_('repository %s') % self.origroot)

    def wlock(self, wait=1):
        return self.do_lock(self.join("wlock"), wait, self.dirstate.write,
                            self.wreload,
                            desc=_('working directory of %s') % self.origroot)

    def filecommit(self, fn, manifest1, manifest2, linkrev, transaction, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        t = self.wread(fn)
        fl = self.file(fn)
        fp1 = manifest1.get(fn, nullid)
        fp2 = manifest2.get(fn, nullid)

        meta = {}
        cp = self.dirstate.copied(fn)
        if cp:
            meta["copy"] = cp
            if not manifest2: # not a branch merge
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
                fp2 = nullid
            elif fp2 != nullid: # copied on remote side
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
            elif fp1 != nullid: # copied on local side, reversed
                meta["copyrev"] = hex(manifest2.get(cp))
                fp2 = nullid
            else: # directory rename
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
            self.ui.debug(_(" %s: copy %s:%s\n") %
                          (fn, cp, meta["copyrev"]))
            fp1 = nullid
        elif fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = fl.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and not fl.cmp(fp1, t):
            return fp1

        changelist.append(fn)
        return fl.add(t, meta, transaction, linkrev, fp1, fp2)

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        if p1 is None:
            p1, p2 = self.dirstate.parents()
        return self.commit(files=files, text=text, user=user, date=date,
                           p1=p1, p2=p2, wlock=wlock)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, lock=None, wlock=None,
               force_editor=False, p1=None, p2=None, extra={}):

        commit = []
        remove = []
        changed = []
        use_dirstate = (p1 is None) # not rawcommit
        extra = extra.copy()

        if use_dirstate:
            if files:
                for f in files:
                    s = self.dirstate.state(f)
                    if s in 'nmai':
                        commit.append(f)
                    elif s == 'r':
                        remove.append(f)
                    else:
                        self.ui.warn(_("%s not tracked!\n") % f)
            else:
                changes = self.status(match=match)[:5]
                modified, added, removed, deleted, unknown = changes
                commit = modified + added
                remove = removed
        else:
            commit = files

        if use_dirstate:
            p1, p2 = self.dirstate.parents()
            update_dirstate = True
        else:
            p1, p2 = p1, p2 or nullid
            update_dirstate = (self.dirstate.parents()[0] == p1)

        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0]).copy()
        m2 = self.manifest.read(c2[0])

        if use_dirstate:
            branchname = util.fromlocal(self.workingctx().branch())
        else:
            branchname = ""

        if use_dirstate:
            oldname = c1[5].get("branch", "") # stored in UTF-8
            if not commit and not remove and not force and p2 == nullid and \
               branchname == oldname:
                self.ui.status(_("nothing changed\n"))
                return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        if not lock:
            lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                new[f] = self.filecommit(f, m1, m2, linkrev, tr, changed)
                m1.set(f, util.is_exec(self.wjoin(f), m1.execf(f)))
            except IOError:
                if use_dirstate:
                    self.ui.warn(_("trouble committing %s!\n") % f)
                    raise
                else:
                    remove.append(f)

        # update manifest
        m1.update(new)
        remove.sort()

        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, tr, linkrev, c1[0], c2[0], (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        user = user or self.ui.username()
        if not text or force_editor:
            edittext = []
            if text:
                edittext.append(text)
            edittext.append("")
            edittext.append("HG: user: %s" % user)
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            text = self.ui.edit("\n".join(edittext), user)
            os.chdir(olddir)

        lines = [line.rstrip() for line in text.rstrip().splitlines()]
        while lines and not lines[0]:
            del lines[0]
        if not lines:
            return None
        text = '\n'.join(lines)
        if branchname:
            extra["branch"] = branchname
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2,
                               user, date, extra)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        if use_dirstate or update_dirstate:
            self.dirstate.setparents(n)
            if use_dirstate:
                self.dirstate.update(new, "n")
                self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

    def walk(self, node=None, files=[], match=util.always, badmatch=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function

        results are yielded in a tuple (src, filename), where src
        is one of:
        'f' the file was found in the directory tree
        'm' the file was only in the dirstate and not in the tree
        'b' file was not found and matched badmatch
        '''

        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                for ffn in fdict:
                    # match if the file is the exact name or a directory
                    if ffn == fn or fn.startswith("%s/" % ffn):
                        del fdict[ffn]
                        break
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                if badmatch and badmatch(fn):
                    if match(fn):
                        yield 'b', fn
                else:
                    self.ui.warn(_('%s: No such file in rev %s\n') % (
                        util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
                yield src, fn

    def status(self, node1=None, node2=None, files=[], match=util.always,
               wlock=None, list_ignored=False, list_clean=False):
        """return status of files between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            return self.file(fn).cmp(mf.get(fn, nullid), t1)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = self.manifest.read(change[0]).copy()
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        modified, added, removed, deleted, unknown = [], [], [], [], []
        ignored, clean = [], []

        compareworking = False
        if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
            compareworking = True

        if not compareworking:
            # read the manifest from node1 before the manifest from node2,
            # so that we'll hit the manifest cache if we're going through
            # all the revisions in parent->child order.
            mf1 = mfmatches(node1)

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            (lookup, modified, added, removed, deleted, unknown,
             ignored, clean) = self.dirstate.status(files, match,
                                                    list_ignored, list_clean)

            # are we comparing working dir against its parent?
            if compareworking:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        else:
                            clean.append(f)
                            if wlock is not None:
                                self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                # XXX: create it in dirstate.py ?
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                    mf2.set(f, execf=util.is_exec(self.wjoin(f), mf2.execf(f)))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            mf2 = mfmatches(node2)

        if not compareworking:
            # flush lists from dirstate before comparing manifests
            modified, added, clean = [], [], []

            # make sure to sort the files so we talk to the disk in a
            # reasonable order
            mf2keys = mf2.keys()
            mf2keys.sort()
            for fn in mf2keys:
                if fn in mf1:
                    if mf1.flags(fn) != mf2.flags(fn) or \
                       (mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1))):
                        modified.append(fn)
                    elif list_clean:
                        clean.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown, ignored, clean:
            l.sort()
        return (modified, added, removed, deleted, unknown, ignored, clean)

    def add(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn("%s not removed!\n" % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), m.execf(f))
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

    # branchlookup returns a dict giving a list of branches for
    # each head. A branch is defined as the tag of a node or
    # the branch of the node's parents. If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [h for h in heads]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head. The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore base will be updated to include the nodes that exists
        in self and remote but no children exists in self and remote.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote, see
        outgoing)
        """
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        req = dict.fromkeys(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                        for p in n[2:4]:
                            if p in m:
                                base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in xrange(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f))
1222 1226
1223 1227 if base.keys() == [nullid]:
1224 1228 if force:
1225 1229 self.ui.warn(_("warning: repository is unrelated\n"))
1226 1230 else:
1227 1231 raise util.Abort(_("repository is unrelated"))
1228 1232
1229 1233 self.ui.debug(_("found new changesets starting at ") +
1230 1234 " ".join([short(f) for f in fetch]) + "\n")
1231 1235
1232 1236 self.ui.debug(_("%d total queries\n") % reqcnt)
1233 1237
1234 1238 return fetch.keys()
1235 1239
1236 1240 def findoutgoing(self, remote, base=None, heads=None, force=False):
1237 1241 """Return list of nodes that are roots of subsets not in remote
1238 1242
1239 1243 If base dict is specified, assume that these nodes and their parents
1240 1244 exist on the remote side.
1241 1245 If a list of heads is specified, return only nodes which are heads
1242 1246 or ancestors of these heads, and return a second element which
1243 1247 contains all remote heads which get new children.
1244 1248 """
1245 1249 if base is None:
1246 1250 base = {}
1247 1251 self.findincoming(remote, base, heads, force=force)
1248 1252
1249 1253 self.ui.debug(_("common changesets up to ")
1250 1254 + " ".join(map(short, base.keys())) + "\n")
1251 1255
1252 1256 remain = dict.fromkeys(self.changelog.nodemap)
1253 1257
1254 1258 # prune everything remote has from the tree
1255 1259 del remain[nullid]
1256 1260 remove = base.keys()
1257 1261 while remove:
1258 1262 n = remove.pop(0)
1259 1263 if n in remain:
1260 1264 del remain[n]
1261 1265 for p in self.changelog.parents(n):
1262 1266 remove.append(p)
1263 1267
1264 1268 # find every node whose parents have been pruned
1265 1269 subset = []
1266 1270 # find every remote head that will get new children
1267 1271 updated_heads = {}
1268 1272 for n in remain:
1269 1273 p1, p2 = self.changelog.parents(n)
1270 1274 if p1 not in remain and p2 not in remain:
1271 1275 subset.append(n)
1272 1276 if heads:
1273 1277 if p1 in heads:
1274 1278 updated_heads[p1] = True
1275 1279 if p2 in heads:
1276 1280 updated_heads[p2] = True
1277 1281
1278 1282 # this is the set of all roots we have to push
1279 1283 if heads:
1280 1284 return subset, updated_heads.keys()
1281 1285 else:
1282 1286 return subset
1283 1287
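The ancestor-pruning walk at the heart of `findoutgoing` (delete every base and all of its ancestors from the candidate set, leaving only nodes the remote lacks) can be sketched in isolation; `parents` and the node names here are hypothetical stand-ins for `self.changelog.parents` and real node ids:

```python
def prune_known(all_nodes, parents, bases):
    # Walk from the known bases toward the root, discarding every
    # ancestor the remote side already has; what remains are the
    # candidate outgoing nodes. `parents` maps node -> parent nodes.
    remain = set(all_nodes)
    work = list(bases)
    while work:
        n = work.pop()
        if n in remain:
            remain.discard(n)
            work.extend(parents.get(n, ()))
    return remain
```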
1284 1288 def pull(self, remote, heads=None, force=False, lock=None):
1285 1289 mylock = False
1286 1290 if not lock:
1287 1291 lock = self.lock()
1288 1292 mylock = True
1289 1293
1290 1294 try:
1291 1295 fetch = self.findincoming(remote, force=force)
1292 1296 if fetch == [nullid]:
1293 1297 self.ui.status(_("requesting all changes\n"))
1294 1298
1295 1299 if not fetch:
1296 1300 self.ui.status(_("no changes found\n"))
1297 1301 return 0
1298 1302
1299 1303 if heads is None:
1300 1304 cg = remote.changegroup(fetch, 'pull')
1301 1305 else:
1302 1306 if 'changegroupsubset' not in remote.capabilities:
1303 1307 raise util.Abort(_("Partial pull cannot be done because the other repository doesn't support changegroupsubset."))
1304 1308 cg = remote.changegroupsubset(fetch, heads, 'pull')
1305 1309 return self.addchangegroup(cg, 'pull', remote.url())
1306 1310 finally:
1307 1311 if mylock:
1308 1312 lock.release()
1309 1313
1310 1314 def push(self, remote, force=False, revs=None):
1311 1315 # there are two ways to push to remote repo:
1312 1316 #
1313 1317 # addchangegroup assumes local user can lock remote
1314 1318 # repo (local filesystem, old ssh servers).
1315 1319 #
1316 1320 # unbundle assumes local user cannot lock remote repo (new ssh
1317 1321 # servers, http servers).
1318 1322
1319 1323 if remote.capable('unbundle'):
1320 1324 return self.push_unbundle(remote, force, revs)
1321 1325 return self.push_addchangegroup(remote, force, revs)
1322 1326
1323 1327 def prepush(self, remote, force, revs):
1324 1328 base = {}
1325 1329 remote_heads = remote.heads()
1326 1330 inc = self.findincoming(remote, base, remote_heads, force=force)
1327 1331
1328 1332 update, updated_heads = self.findoutgoing(remote, base, remote_heads)
1329 1333 if revs is not None:
1330 1334 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
1331 1335 else:
1332 1336 bases, heads = update, self.changelog.heads()
1333 1337
1334 1338 if not bases:
1335 1339 self.ui.status(_("no changes found\n"))
1336 1340 return None, 1
1337 1341 elif not force:
1338 1342 # check if we're creating new remote heads
1339 1343 # to be a remote head after push, node must be either
1340 1344 # - unknown locally
1341 1345 # - a local outgoing head descended from update
1342 1346 # - a remote head that's known locally and not
1343 1347 # ancestral to an outgoing head
1344 1348
1345 1349 warn = 0
1346 1350
1347 1351 if remote_heads == [nullid]:
1348 1352 warn = 0
1349 1353 elif not revs and len(heads) > len(remote_heads):
1350 1354 warn = 1
1351 1355 else:
1352 1356 newheads = list(heads)
1353 1357 for r in remote_heads:
1354 1358 if r in self.changelog.nodemap:
1355 1359 desc = self.changelog.heads(r)
1356 1360 l = [h for h in heads if h in desc]
1357 1361 if not l:
1358 1362 newheads.append(r)
1359 1363 else:
1360 1364 newheads.append(r)
1361 1365 if len(newheads) > len(remote_heads):
1362 1366 warn = 1
1363 1367
1364 1368 if warn:
1365 1369 self.ui.warn(_("abort: push creates new remote heads!\n"))
1366 1370 self.ui.status(_("(did you forget to merge?"
1367 1371 " use push -f to force)\n"))
1368 1372 return None, 1
1369 1373 elif inc:
1370 1374 self.ui.warn(_("note: unsynced remote changes!\n"))
1371 1375
1372 1376
1373 1377 if revs is None:
1374 1378 cg = self.changegroup(update, 'push')
1375 1379 else:
1376 1380 cg = self.changegroupsubset(update, revs, 'push')
1377 1381 return cg, remote_heads
1378 1382
1379 1383 def push_addchangegroup(self, remote, force, revs):
1380 1384 lock = remote.lock()
1381 1385
1382 1386 ret = self.prepush(remote, force, revs)
1383 1387 if ret[0] is not None:
1384 1388 cg, remote_heads = ret
1385 1389 return remote.addchangegroup(cg, 'push', self.url())
1386 1390 return ret[1]
1387 1391
1388 1392 def push_unbundle(self, remote, force, revs):
1389 1393 # local repo finds heads on server, finds out what revs it
1390 1394 # must push. once revs transferred, if server finds it has
1391 1395 # different heads (someone else won commit/push race), server
1392 1396 # aborts.
1393 1397
1394 1398 ret = self.prepush(remote, force, revs)
1395 1399 if ret[0] is not None:
1396 1400 cg, remote_heads = ret
1397 1401 if force: remote_heads = ['force']
1398 1402 return remote.unbundle(cg, remote_heads, 'push')
1399 1403 return ret[1]
1400 1404
1401 1405 def changegroupinfo(self, nodes):
1402 1406 self.ui.note(_("%d changesets found\n") % len(nodes))
1403 1407 if self.ui.debugflag:
1404 1408 self.ui.debug(_("List of changesets:\n"))
1405 1409 for node in nodes:
1406 1410 self.ui.debug("%s\n" % hex(node))
1407 1411
1408 1412 def changegroupsubset(self, bases, heads, source):
1409 1413 """This function generates a changegroup consisting of all the nodes
1410 1414 that are descendants of any of the bases, and ancestors of any of
1411 1415 the heads.
1412 1416
1413 1417 It is fairly complex as determining which filenodes and which
1414 1418 manifest nodes need to be included for the changeset to be complete
1415 1419 is non-trivial.
1416 1420
1417 1421 Another wrinkle is doing the reverse, figuring out which changeset in
1418 1422 the changegroup a particular filenode or manifestnode belongs to."""
1419 1423
1420 1424 self.hook('preoutgoing', throw=True, source=source)
1421 1425
1422 1426 # Set up some initial variables
1423 1427 # Make it easy to refer to self.changelog
1424 1428 cl = self.changelog
1425 1429 # msng is short for missing - compute the list of changesets in this
1426 1430 # changegroup.
1427 1431 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1428 1432 self.changegroupinfo(msng_cl_lst)
1429 1433 # Some bases may turn out to be superfluous, and some heads may be
1430 1434 # too. nodesbetween will return the minimal set of bases and heads
1431 1435 # necessary to re-create the changegroup.
1432 1436
1433 1437 # Known heads are the list of heads that it is assumed the recipient
1434 1438 # of this changegroup will know about.
1435 1439 knownheads = {}
1436 1440 # We assume that all parents of bases are known heads.
1437 1441 for n in bases:
1438 1442 for p in cl.parents(n):
1439 1443 if p != nullid:
1440 1444 knownheads[p] = 1
1441 1445 knownheads = knownheads.keys()
1442 1446 if knownheads:
1443 1447 # Now that we know what heads are known, we can compute which
1444 1448 # changesets are known. The recipient must know about all
1445 1449 # changesets required to reach the known heads from the null
1446 1450 # changeset.
1447 1451 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1448 1452 junk = None
1449 1453 # Transform the list into an ersatz set.
1450 1454 has_cl_set = dict.fromkeys(has_cl_set)
1451 1455 else:
1452 1456 # If there were no known heads, the recipient cannot be assumed to
1453 1457 # know about any changesets.
1454 1458 has_cl_set = {}
1455 1459
1456 1460 # Make it easy to refer to self.manifest
1457 1461 mnfst = self.manifest
1458 1462 # We don't know which manifests are missing yet
1459 1463 msng_mnfst_set = {}
1460 1464 # Nor do we know which filenodes are missing.
1461 1465 msng_filenode_set = {}
1462 1466
1463 1467 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1464 1468 junk = None
1465 1469
1466 1470 # A changeset always belongs to itself, so the changenode lookup
1467 1471 # function for a changenode is identity.
1468 1472 def identity(x):
1469 1473 return x
1470 1474
1471 1475 # A function generating function. Sets up an environment for the
1472 1476 # inner function.
1473 1477 def cmp_by_rev_func(revlog):
1474 1478 # Compare two nodes by their revision number in the environment's
1475 1479 # revision history. Since the revision number both represents the
1476 1480 # most efficient order to read the nodes in, and represents a
1477 1481 # topological sorting of the nodes, this function is often useful.
1478 1482 def cmp_by_rev(a, b):
1479 1483 return cmp(revlog.rev(a), revlog.rev(b))
1480 1484 return cmp_by_rev
1481 1485
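In modern Python the `cmp`-style comparator that `cmp_by_rev_func` returns would be adapted for sorting with `functools.cmp_to_key`; a hedged sketch, where `rev_of` is a toy node-to-revision mapping standing in for `revlog.rev`:

```python
from functools import cmp_to_key

def cmp_by_rev_func(rev_of):
    # Compare two nodes by revision number; sorting with this yields
    # both the most efficient read order and a topological order.
    def cmp_by_rev(a, b):
        return (rev_of[a] > rev_of[b]) - (rev_of[a] < rev_of[b])
    return cmp_by_rev

nodes = ['n2', 'n0', 'n1']
nodes.sort(key=cmp_to_key(cmp_by_rev_func({'n0': 0, 'n1': 1, 'n2': 2})))
```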
1482 1486 # If we determine that a particular file or manifest node must be a
1483 1487 # node that the recipient of the changegroup will already have, we can
1484 1488 # also assume the recipient will have all the parents. This function
1485 1489 # prunes them from the set of missing nodes.
1486 1490 def prune_parents(revlog, hasset, msngset):
1487 1491 haslst = hasset.keys()
1488 1492 haslst.sort(cmp_by_rev_func(revlog))
1489 1493 for node in haslst:
1490 1494 parentlst = [p for p in revlog.parents(node) if p != nullid]
1491 1495 while parentlst:
1492 1496 n = parentlst.pop()
1493 1497 if n not in hasset:
1494 1498 hasset[n] = 1
1495 1499 p = [p for p in revlog.parents(n) if p != nullid]
1496 1500 parentlst.extend(p)
1497 1501 for n in hasset:
1498 1502 msngset.pop(n, None)
1499 1503
1500 1504 # This is a function generating function used to set up an environment
1501 1505 # for the inner function to execute in.
1502 1506 def manifest_and_file_collector(changedfileset):
1503 1507 # This is an information gathering function that gathers
1504 1508 # information from each changeset node that goes out as part of
1505 1509 # the changegroup. The information gathered is a list of which
1506 1510 # manifest nodes are potentially required (the recipient may
1507 1511 # already have them) and the total list of all files which were
1508 1512 # changed in any changeset in the changegroup.
1509 1513 #
1510 1514 # We also remember the first changenode we saw any manifest
1511 1515 # referenced by so we can later determine which changenode 'owns'
1512 1516 # the manifest.
1513 1517 def collect_manifests_and_files(clnode):
1514 1518 c = cl.read(clnode)
1515 1519 for f in c[3]:
1516 1520 # This is to make sure we only have one instance of each
1517 1521 # filename string for each filename.
1518 1522 changedfileset.setdefault(f, f)
1519 1523 msng_mnfst_set.setdefault(c[0], clnode)
1520 1524 return collect_manifests_and_files
1521 1525
1522 1526 # Figure out which manifest nodes (of the ones we think might be part
1523 1527 # of the changegroup) the recipient must know about and remove them
1524 1528 # from the changegroup.
1525 1529 def prune_manifests():
1526 1530 has_mnfst_set = {}
1527 1531 for n in msng_mnfst_set:
1528 1532 # If a 'missing' manifest thinks it belongs to a changenode
1529 1533 # the recipient is assumed to have, obviously the recipient
1530 1534 # must have that manifest.
1531 1535 linknode = cl.node(mnfst.linkrev(n))
1532 1536 if linknode in has_cl_set:
1533 1537 has_mnfst_set[n] = 1
1534 1538 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1535 1539
1536 1540 # Use the information collected in collect_manifests_and_files to say
1537 1541 # which changenode any manifestnode belongs to.
1538 1542 def lookup_manifest_link(mnfstnode):
1539 1543 return msng_mnfst_set[mnfstnode]
1540 1544
1541 1545 # A function generating function that sets up the initial environment
1542 1546 # for the inner function.
1543 1547 def filenode_collector(changedfiles):
1544 1548 next_rev = [0]
1545 1549 # This gathers information from each manifestnode included in the
1546 1550 # changegroup about which filenodes the manifest node references
1547 1551 # so we can include those in the changegroup too.
1548 1552 #
1549 1553 # It also remembers which changenode each filenode belongs to. It
1550 1554 # does this by assuming a filenode belongs to the changenode
1551 1555 # that the first manifest referencing it belongs to.
1552 1556 def collect_msng_filenodes(mnfstnode):
1553 1557 r = mnfst.rev(mnfstnode)
1554 1558 if r == next_rev[0]:
1555 1559 # If the last rev we looked at was the one just previous,
1556 1560 # we only need to see a diff.
1557 1561 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1558 1562 # For each line in the delta
1559 1563 for dline in delta.splitlines():
1560 1564 # get the filename and filenode for that line
1561 1565 f, fnode = dline.split('\0')
1562 1566 fnode = bin(fnode[:40])
1563 1567 f = changedfiles.get(f, None)
1564 1568 # And if the file is in the list of files we care
1565 1569 # about.
1566 1570 if f is not None:
1567 1571 # Get the changenode this manifest belongs to
1568 1572 clnode = msng_mnfst_set[mnfstnode]
1569 1573 # Create the set of filenodes for the file if
1570 1574 # there isn't one already.
1571 1575 ndset = msng_filenode_set.setdefault(f, {})
1572 1576 # And set the filenode's changelog node to the
1573 1577 # manifest's if it hasn't been set already.
1574 1578 ndset.setdefault(fnode, clnode)
1575 1579 else:
1576 1580 # Otherwise we need a full manifest.
1577 1581 m = mnfst.read(mnfstnode)
1578 1582 # For every file we care about.
1579 1583 for f in changedfiles:
1580 1584 fnode = m.get(f, None)
1581 1585 # If it's in the manifest
1582 1586 if fnode is not None:
1583 1587 # See comments above.
1584 1588 clnode = msng_mnfst_set[mnfstnode]
1585 1589 ndset = msng_filenode_set.setdefault(f, {})
1586 1590 ndset.setdefault(fnode, clnode)
1587 1591 # Remember the revision we hope to see next.
1588 1592 next_rev[0] = r + 1
1589 1593 return collect_msng_filenodes
1590 1594
1591 1595 # We have a list of filenodes we think we need for a file; let's remove
1592 1596 # all those we know the recipient must have.
1593 1597 def prune_filenodes(f, filerevlog):
1594 1598 msngset = msng_filenode_set[f]
1595 1599 hasset = {}
1596 1600 # If a 'missing' filenode thinks it belongs to a changenode we
1597 1601 # assume the recipient must have, then the recipient must have
1598 1602 # that filenode.
1599 1603 for n in msngset:
1600 1604 clnode = cl.node(filerevlog.linkrev(n))
1601 1605 if clnode in has_cl_set:
1602 1606 hasset[n] = 1
1603 1607 prune_parents(filerevlog, hasset, msngset)
1604 1608
1605 1609 # A function generating function that sets up a context for the
1606 1610 # inner function.
1607 1611 def lookup_filenode_link_func(fname):
1608 1612 msngset = msng_filenode_set[fname]
1609 1613 # Lookup the changenode the filenode belongs to.
1610 1614 def lookup_filenode_link(fnode):
1611 1615 return msngset[fnode]
1612 1616 return lookup_filenode_link
1613 1617
1614 1618 # Now that we have all these utility functions to help out and
1615 1619 # logically divide up the task, generate the group.
1616 1620 def gengroup():
1617 1621 # The set of changed files starts empty.
1618 1622 changedfiles = {}
1619 1623 # Create a changenode group generator that will call our functions
1620 1624 # back to lookup the owning changenode and collect information.
1621 1625 group = cl.group(msng_cl_lst, identity,
1622 1626 manifest_and_file_collector(changedfiles))
1623 1627 for chnk in group:
1624 1628 yield chnk
1625 1629
1626 1630 # The list of manifests has been collected by the generator
1627 1631 # calling our functions back.
1628 1632 prune_manifests()
1629 1633 msng_mnfst_lst = msng_mnfst_set.keys()
1630 1634 # Sort the manifestnodes by revision number.
1631 1635 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1632 1636 # Create a generator for the manifestnodes that calls our lookup
1633 1637 # and data collection functions back.
1634 1638 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1635 1639 filenode_collector(changedfiles))
1636 1640 for chnk in group:
1637 1641 yield chnk
1638 1642
1639 1643 # These are no longer needed, dereference and toss the memory for
1640 1644 # them.
1641 1645 msng_mnfst_lst = None
1642 1646 msng_mnfst_set.clear()
1643 1647
1644 1648 changedfiles = changedfiles.keys()
1645 1649 changedfiles.sort()
1646 1650 # Go through all our files in order sorted by name.
1647 1651 for fname in changedfiles:
1648 1652 filerevlog = self.file(fname)
1649 1653 # Toss out the filenodes that the recipient isn't really
1650 1654 # missing.
1651 1655 if msng_filenode_set.has_key(fname):
1652 1656 prune_filenodes(fname, filerevlog)
1653 1657 msng_filenode_lst = msng_filenode_set[fname].keys()
1654 1658 else:
1655 1659 msng_filenode_lst = []
1656 1660 # If any filenodes are left, generate the group for them,
1657 1661 # otherwise don't bother.
1658 1662 if len(msng_filenode_lst) > 0:
1659 1663 yield changegroup.genchunk(fname)
1660 1664 # Sort the filenodes by their revision #
1661 1665 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1662 1666 # Create a group generator and only pass in a changenode
1663 1667 # lookup function as we need to collect no information
1664 1668 # from filenodes.
1665 1669 group = filerevlog.group(msng_filenode_lst,
1666 1670 lookup_filenode_link_func(fname))
1667 1671 for chnk in group:
1668 1672 yield chnk
1669 1673 if msng_filenode_set.has_key(fname):
1670 1674 # Don't need this anymore, toss it to free memory.
1671 1675 del msng_filenode_set[fname]
1672 1676 # Signal that no more groups are left.
1673 1677 yield changegroup.closechunk()
1674 1678
1675 1679 if msng_cl_lst:
1676 1680 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1677 1681
1678 1682 return util.chunkbuffer(gengroup())
1679 1683
1680 1684 def changegroup(self, basenodes, source):
1681 1685 """Generate a changegroup of all nodes that we have that a recipient
1682 1686 doesn't.
1683 1687
1684 1688 This is much easier than the previous function as we can assume that
1685 1689 the recipient has any changenode we aren't sending them."""
1686 1690
1687 1691 self.hook('preoutgoing', throw=True, source=source)
1688 1692
1689 1693 cl = self.changelog
1690 1694 nodes = cl.nodesbetween(basenodes, None)[0]
1691 1695 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1692 1696 self.changegroupinfo(nodes)
1693 1697
1694 1698 def identity(x):
1695 1699 return x
1696 1700
1697 1701 def gennodelst(revlog):
1698 1702 for r in xrange(0, revlog.count()):
1699 1703 n = revlog.node(r)
1700 1704 if revlog.linkrev(n) in revset:
1701 1705 yield n
1702 1706
1703 1707 def changed_file_collector(changedfileset):
1704 1708 def collect_changed_files(clnode):
1705 1709 c = cl.read(clnode)
1706 1710 for fname in c[3]:
1707 1711 changedfileset[fname] = 1
1708 1712 return collect_changed_files
1709 1713
1710 1714 def lookuprevlink_func(revlog):
1711 1715 def lookuprevlink(n):
1712 1716 return cl.node(revlog.linkrev(n))
1713 1717 return lookuprevlink
1714 1718
1715 1719 def gengroup():
1716 1720 # construct a list of all changed files
1717 1721 changedfiles = {}
1718 1722
1719 1723 for chnk in cl.group(nodes, identity,
1720 1724 changed_file_collector(changedfiles)):
1721 1725 yield chnk
1722 1726 changedfiles = changedfiles.keys()
1723 1727 changedfiles.sort()
1724 1728
1725 1729 mnfst = self.manifest
1726 1730 nodeiter = gennodelst(mnfst)
1727 1731 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1728 1732 yield chnk
1729 1733
1730 1734 for fname in changedfiles:
1731 1735 filerevlog = self.file(fname)
1732 1736 nodeiter = gennodelst(filerevlog)
1733 1737 nodeiter = list(nodeiter)
1734 1738 if nodeiter:
1735 1739 yield changegroup.genchunk(fname)
1736 1740 lookup = lookuprevlink_func(filerevlog)
1737 1741 for chnk in filerevlog.group(nodeiter, lookup):
1738 1742 yield chnk
1739 1743
1740 1744 yield changegroup.closechunk()
1741 1745
1742 1746 if nodes:
1743 1747 self.hook('outgoing', node=hex(nodes[0]), source=source)
1744 1748
1745 1749 return util.chunkbuffer(gengroup())
1746 1750
1747 1751 def addchangegroup(self, source, srctype, url):
1748 1752 """add changegroup to repo.
1749 1753
1750 1754 return values:
1751 1755 - nothing changed or no source: 0
1752 1756 - more heads than before: 1+added heads (2..n)
1752 1756 - fewer heads than before: -1-removed heads (-2..-n)
1754 1758 - number of heads stays the same: 1
1755 1759 """
1756 1760 def csmap(x):
1757 1761 self.ui.debug(_("add changeset %s\n") % short(x))
1758 1762 return cl.count()
1759 1763
1760 1764 def revmap(x):
1761 1765 return cl.rev(x)
1762 1766
1763 1767 if not source:
1764 1768 return 0
1765 1769
1766 1770 self.hook('prechangegroup', throw=True, source=srctype, url=url)
1767 1771
1768 1772 changesets = files = revisions = 0
1769 1773
1770 1774 tr = self.transaction()
1771 1775
1772 1776 # write changelog data to temp files so concurrent readers will not see
1773 1777 # inconsistent view
1774 1778 cl = None
1775 1779 try:
1776 1780 cl = appendfile.appendchangelog(self.sopener,
1777 1781 self.changelog.version)
1778 1782
1779 1783 oldheads = len(cl.heads())
1780 1784
1781 1785 # pull off the changeset group
1782 1786 self.ui.status(_("adding changesets\n"))
1783 1787 cor = cl.count() - 1
1784 1788 chunkiter = changegroup.chunkiter(source)
1785 1789 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1786 1790 raise util.Abort(_("received changelog group is empty"))
1787 1791 cnr = cl.count() - 1
1788 1792 changesets = cnr - cor
1789 1793
1790 1794 # pull off the manifest group
1791 1795 self.ui.status(_("adding manifests\n"))
1792 1796 chunkiter = changegroup.chunkiter(source)
1793 1797 # no need to check for empty manifest group here:
1794 1798 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1795 1799 # no new manifest will be created and the manifest group will
1796 1800 # be empty during the pull
1797 1801 self.manifest.addgroup(chunkiter, revmap, tr)
1798 1802
1799 1803 # process the files
1800 1804 self.ui.status(_("adding file changes\n"))
1801 1805 while 1:
1802 1806 f = changegroup.getchunk(source)
1803 1807 if not f:
1804 1808 break
1805 1809 self.ui.debug(_("adding %s revisions\n") % f)
1806 1810 fl = self.file(f)
1807 1811 o = fl.count()
1808 1812 chunkiter = changegroup.chunkiter(source)
1809 1813 if fl.addgroup(chunkiter, revmap, tr) is None:
1810 1814 raise util.Abort(_("received file revlog group is empty"))
1811 1815 revisions += fl.count() - o
1812 1816 files += 1
1813 1817
1814 1818 cl.writedata()
1815 1819 finally:
1816 1820 if cl:
1817 1821 cl.cleanup()
1818 1822
1819 1823 # make changelog see real files again
1820 1824 self.changelog = changelog.changelog(self.sopener,
1821 1825 self.changelog.version)
1822 1826 self.changelog.checkinlinesize(tr)
1823 1827
1824 1828 newheads = len(self.changelog.heads())
1825 1829 heads = ""
1826 1830 if oldheads and newheads != oldheads:
1827 1831 heads = _(" (%+d heads)") % (newheads - oldheads)
1828 1832
1829 1833 self.ui.status(_("added %d changesets"
1830 1834 " with %d changes to %d files%s\n")
1831 1835 % (changesets, revisions, files, heads))
1832 1836
1833 1837 if changesets > 0:
1834 1838 self.hook('pretxnchangegroup', throw=True,
1835 1839 node=hex(self.changelog.node(cor+1)), source=srctype,
1836 1840 url=url)
1837 1841
1838 1842 tr.close()
1839 1843
1840 1844 if changesets > 0:
1841 1845 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1842 1846 source=srctype, url=url)
1843 1847
1844 1848 for i in xrange(cor + 1, cnr + 1):
1845 1849 self.hook("incoming", node=hex(self.changelog.node(i)),
1846 1850 source=srctype, url=url)
1847 1851
1848 1852 # never return 0 here:
1849 1853 if newheads < oldheads:
1850 1854 return newheads - oldheads - 1
1851 1855 else:
1852 1856 return newheads - oldheads + 1
1853 1857
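The return-value convention documented in `addchangegroup` can be isolated into a tiny helper; this is a sketch of the arithmetic at the end of the function, not a Mercurial API:

```python
def heads_delta_code(oldheads, newheads):
    # Never return 0 once changesets were added: a positive value means
    # heads were added (1 + number added, or 1 if unchanged), a negative
    # value means heads were removed (-1 - number removed).
    if newheads < oldheads:
        return newheads - oldheads - 1
    return newheads - oldheads + 1
```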
1854 1858
1855 1859 def stream_in(self, remote):
1856 1860 fp = remote.stream_out()
1857 1861 l = fp.readline()
1858 1862 try:
1859 1863 resp = int(l)
1860 1864 except ValueError:
1861 1865 raise util.UnexpectedOutput(
1862 1866 _('Unexpected response from remote server:'), l)
1863 1867 if resp == 1:
1864 1868 raise util.Abort(_('operation forbidden by server'))
1865 1869 elif resp == 2:
1866 1870 raise util.Abort(_('locking the remote repository failed'))
1867 1871 elif resp != 0:
1868 1872 raise util.Abort(_('the server sent an unknown error code'))
1869 1873 self.ui.status(_('streaming all changes\n'))
1870 1874 l = fp.readline()
1871 1875 try:
1872 1876 total_files, total_bytes = map(int, l.split(' ', 1))
1873 1877 except (ValueError, TypeError):
1874 1878 raise util.UnexpectedOutput(
1875 1879 _('Unexpected response from remote server:'), l)
1876 1880 self.ui.status(_('%d files to transfer, %s of data\n') %
1877 1881 (total_files, util.bytecount(total_bytes)))
1878 1882 start = time.time()
1879 1883 for i in xrange(total_files):
1880 1884 # XXX doesn't support '\n' or '\r' in filenames
1881 1885 l = fp.readline()
1882 1886 try:
1883 1887 name, size = l.split('\0', 1)
1884 1888 size = int(size)
1885 1889 except (ValueError, TypeError):
1886 1890 raise util.UnexpectedOutput(
1887 1891 _('Unexpected response from remote server:'), l)
1888 1892 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
1889 1893 ofp = self.sopener(name, 'w')
1890 1894 for chunk in util.filechunkiter(fp, limit=size):
1891 1895 ofp.write(chunk)
1892 1896 ofp.close()
1893 1897 elapsed = time.time() - start
1894 1898 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1895 1899 (util.bytecount(total_bytes), elapsed,
1896 1900 util.bytecount(total_bytes / elapsed)))
1897 1901 self.reload()
1898 1902 return len(self.heads()) + 1
1899 1903
1900 1904 def clone(self, remote, heads=[], stream=False):
1901 1905 '''clone remote repository.
1902 1906
1903 1907 keyword arguments:
1904 1908 heads: list of revs to clone (forces use of pull)
1905 1909 stream: use streaming clone if possible'''
1906 1910
1907 1911 # now, all clients that can request uncompressed clones can
1908 1912 # read repo formats supported by all servers that can serve
1909 1913 # them.
1910 1914
1911 1915 # if revlog format changes, client will have to check version
1912 1916 # and format flags on "stream" capability, and use
1913 1917 # uncompressed only if compatible.
1914 1918
1915 1919 if stream and not heads and remote.capable('stream'):
1916 1920 return self.stream_in(remote)
1917 1921 return self.pull(remote, heads)
1918 1922
1919 1923 # used to avoid circular references so destructors work
1920 1924 def aftertrans(files):
1921 1925 renamefiles = [tuple(t) for t in files]
1922 1926 def a():
1923 1927 for src, dest in renamefiles:
1924 1928 util.rename(src, dest)
1925 1929 return a
1926 1930
1927 1931 def instance(ui, path, create):
1928 1932 return localrepository(ui, util.drop_scheme('file', path), create)
1929 1933
1930 1934 def islocal(path):
1931 1935 return True
@@ -1,1284 +1,1285
1 1 """
2 2 util.py - Mercurial utility functions and platform-specific implementations
3 3
4 4 Copyright 2005 K. Thananchayan <thananck@yahoo.com>
5 5 Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
6 6 Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
7 7
8 8 This software may be used and distributed according to the terms
9 9 of the GNU General Public License, incorporated herein by reference.
10 10
11 11 This contains helper routines that are independent of the SCM core and hide
12 12 platform-specific details from the core.
13 13 """
14 14
15 15 from i18n import gettext as _
16 16 from demandload import *
17 17 demandload(globals(), "cStringIO errno getpass popen2 re shutil sys tempfile")
18 18 demandload(globals(), "os threading time calendar ConfigParser locale")
19 19
20 20 _encoding = os.environ.get("HGENCODING") or locale.getpreferredencoding()
21 21 _encodingmode = os.environ.get("HGENCODINGMODE", "strict")
22 _fallbackencoding = 'ISO-8859-1'
22 23
23 24 def tolocal(s):
24 25 """
25 26 Convert a string from internal UTF-8 to local encoding
26 27
27 28 All internal strings should be UTF-8 but some repos before the
28 29 implementation of locale support may contain latin1 or possibly
29 30 other character sets. We attempt to decode everything strictly
30 31 using UTF-8, then the fallback encoding, and failing that, we use
31 32 UTF-8 and replace unknown characters.
32 33 """
33 for e in "utf-8 latin1".split():
34 for e in ('UTF-8', _fallbackencoding):
34 35 try:
35 36 u = s.decode(e) # attempt strict decoding
36 37 return u.encode(_encoding, "replace")
37 38 except UnicodeDecodeError:
38 39 pass
39 40 u = s.decode("utf-8", "replace") # last ditch
40 41 return u.encode(_encoding, "replace")
41 42
42 43 def fromlocal(s):
43 44 """
44 45 Convert a string from the local character encoding to UTF-8
45 46
46 47 We attempt to decode strings using the encoding mode set by
47 48 HGENCODINGMODE, which defaults to 'strict'. In this mode, unknown
48 49 characters will cause an error message. Other modes include
49 50 'replace', which replaces unknown characters with a special
50 51 Unicode character, and 'ignore', which drops the character.
51 52 """
52 53 try:
53 54 return s.decode(_encoding, _encodingmode).encode("utf-8")
54 55 except UnicodeDecodeError, inst:
55 56 sub = s[max(0, inst.start-10):inst.start+10]
56 57 raise Abort("decoding near '%s': %s!\n" % (sub, inst))
57 58
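The decode cascade `tolocal` implements (strict UTF-8 first, then the configurable fallback, then a lossy UTF-8 decode) can be sketched with Python 3 bytes; `FALLBACK` here is a stand-in for the `_fallbackencoding` default above:

```python
FALLBACK = 'iso-8859-1'  # stands in for _fallbackencoding

def tolocal_sketch(s, local='utf-8'):
    # Try each candidate encoding strictly; fall back to a lossy
    # UTF-8 decode so the conversion never raises.
    for enc in ('utf-8', FALLBACK):
        try:
            return s.decode(enc).encode(local, 'replace')
        except UnicodeDecodeError:
            pass
    return s.decode('utf-8', 'replace').encode(local, 'replace')
```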
58 59 def locallen(s):
59 60 """Find the length in characters of a local string"""
60 61 return len(s.decode(_encoding, "replace"))
61 62
62 63 def localsub(s, a, b=None):
63 64 try:
64 65 u = s.decode(_encoding, _encodingmode)
65 66 if b is not None:
66 67 u = u[a:b]
67 68 else:
68 69 u = u[:a]
69 70 return u.encode(_encoding, _encodingmode)
70 71 except UnicodeDecodeError, inst:
71 72 sub = s[max(0, inst.start-10):inst.start+10]
72 73 raise Abort("decoding near '%s': %s!\n" % (sub, inst))
73 74
74 75 # used by parsedate
75 76 defaultdateformats = (
76 77 '%Y-%m-%d %H:%M:%S',
77 78 '%Y-%m-%d %I:%M:%S%p',
78 79 '%Y-%m-%d %H:%M',
79 80 '%Y-%m-%d %I:%M%p',
80 81 '%Y-%m-%d',
81 82 '%m-%d',
82 83 '%m/%d',
83 84 '%m/%d/%y',
84 85 '%m/%d/%Y',
85 86 '%a %b %d %H:%M:%S %Y',
86 87 '%a %b %d %I:%M:%S%p %Y',
87 88 '%b %d %H:%M:%S %Y',
88 89 '%b %d %I:%M:%S%p %Y',
89 90 '%b %d %H:%M:%S',
90 91 '%b %d %I:%M:%S%p',
91 92 '%b %d %H:%M',
92 93 '%b %d %I:%M%p',
93 94 '%b %d %Y',
94 95 '%b %d',
95 96 '%H:%M:%S',
96 97 '%I:%M:%S%p',
97 98 '%H:%M',
98 99 '%I:%M%p',
99 100 )
100 101
101 102 extendeddateformats = defaultdateformats + (
102 103 "%Y",
103 104 "%Y-%m",
104 105 "%b",
105 106 "%b %Y",
106 107 )
107 108
108 109 class SignalInterrupt(Exception):
109 110 """Exception raised on SIGTERM and SIGHUP."""
110 111
111 112 # like SafeConfigParser but with case-sensitive keys
112 113 class configparser(ConfigParser.SafeConfigParser):
113 114 def optionxform(self, optionstr):
114 115 return optionstr
115 116
116 117 def cachefunc(func):
117 118 '''cache the result of function calls'''
118 119 # XXX doesn't handle keywords args
119 120 cache = {}
120 121 if func.func_code.co_argcount == 1:
121 122 # we gain a small amount of time because
122 123 # we don't need to pack/unpack the list
123 124 def f(arg):
124 125 if arg not in cache:
125 126 cache[arg] = func(arg)
126 127 return cache[arg]
127 128 else:
128 129 def f(*args):
129 130 if args not in cache:
130 131 cache[args] = func(*args)
131 132 return cache[args]
132 133
133 134 return f
134 135
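A minimal, self-contained sketch of the memoization pattern `cachefunc` implements (the name `cachefunc_sketch` is hypothetical; like the original, it keys the cache on positional args only and does not handle keyword arguments):

```python
def cachefunc_sketch(func):
    # memoize by positional args, as cachefunc() does (general *args path)
    cache = {}
    def f(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return f

calls = []
def double(x):
    # record every real invocation so we can see the cache working
    calls.append(x)
    return x * 2

cached = cachefunc_sketch(double)
assert cached(3) == 6 and cached(3) == 6
assert calls == [3]  # the underlying function ran only once
```

The original adds a single-argument fast path to skip tuple packing; this sketch keeps only the general case.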
135 136 def pipefilter(s, cmd):
136 137 '''filter string S through command CMD, returning its output'''
137 138 (pout, pin) = popen2.popen2(cmd, -1, 'b')
138 139 def writer():
139 140 try:
140 141 pin.write(s)
141 142 pin.close()
142 143 except IOError, inst:
143 144 if inst.errno != errno.EPIPE:
144 145 raise
145 146
146 147 # we should use select instead on UNIX, but this will work on most
147 148 # systems, including Windows
148 149 w = threading.Thread(target=writer)
149 150 w.start()
150 151 f = pout.read()
151 152 pout.close()
152 153 w.join()
153 154 return f
154 155
155 156 def tempfilter(s, cmd):
156 157 '''filter string S through a pair of temporary files with CMD.
157 158 CMD is used as a template to create the real command to be run,
158 159 with the strings INFILE and OUTFILE replaced by the real names of
159 160 the temporary files generated.'''
160 161 inname, outname = None, None
161 162 try:
162 163 infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
163 164 fp = os.fdopen(infd, 'wb')
164 165 fp.write(s)
165 166 fp.close()
166 167 outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
167 168 os.close(outfd)
168 169 cmd = cmd.replace('INFILE', inname)
169 170 cmd = cmd.replace('OUTFILE', outname)
170 171 code = os.system(cmd)
171 172 if code: raise Abort(_("command '%s' failed: %s") %
172 173 (cmd, explain_exit(code)))
173 174 return open(outname, 'rb').read()
174 175 finally:
175 176 try:
176 177 if inname: os.unlink(inname)
177 178 except: pass
178 179 try:
179 180 if outname: os.unlink(outname)
180 181 except: pass
181 182
182 183 filtertable = {
183 184 'tempfile:': tempfilter,
184 185 'pipe:': pipefilter,
185 186 }
186 187
187 188 def filter(s, cmd):
188 189 "filter a string through a command that transforms its input to its output"
189 190 for name, fn in filtertable.iteritems():
190 191 if cmd.startswith(name):
191 192 return fn(s, cmd[len(name):].lstrip())
192 193 return pipefilter(s, cmd)
193 194
194 195 def find_in_path(name, path, default=None):
195 196 '''find name in search path. path can be string (will be split
196 197 with os.pathsep), or iterable thing that returns strings. if name
197 198 found, return path to name. else return default.'''
198 199 if isinstance(path, str):
199 200 path = path.split(os.pathsep)
200 201 for p in path:
201 202 p_name = os.path.join(p, name)
202 203 if os.path.exists(p_name):
203 204 return p_name
204 205 return default
205 206
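The search loop in `find_in_path` can be exercised with a throwaway directory; this standalone sketch (hypothetical name `find_in_path_sketch`) mirrors the function above, including the PATH-style string splitting:

```python
import os
import tempfile

def find_in_path_sketch(name, path, default=None):
    # mirror of find_in_path(): accept a PATH-style string or an iterable
    # of directories; return the first existing candidate, else default
    if isinstance(path, str):
        path = path.split(os.pathsep)
    for p in path:
        candidate = os.path.join(p, name)
        if os.path.exists(candidate):
            return candidate
    return default

d = tempfile.mkdtemp()
open(os.path.join(d, 'tool'), 'w').close()
found = find_in_path_sketch('tool', ['/nonexistent-dir', d])
assert found == os.path.join(d, 'tool')
```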
206 207 def binary(s):
207 208 """return true if a string is binary data using diff's heuristic"""
208 209 if s and '\0' in s[:4096]:
209 210 return True
210 211 return False
211 212
212 213 def unique(g):
213 214 """return the unique elements of iterable g"""
214 215 seen = {}
215 216 l = []
216 217 for f in g:
217 218 if f not in seen:
218 219 seen[f] = 1
219 220 l.append(f)
220 221 return l
221 222
222 223 class Abort(Exception):
223 224 """Raised if a command needs to print an error and exit."""
224 225
225 226 class UnexpectedOutput(Abort):
226 227 """Raised to print an error with part of output and exit."""
227 228
228 229 def always(fn): return True
229 230 def never(fn): return False
230 231
231 232 def patkind(name, dflt_pat='glob'):
232 233 """Split a string into an optional pattern kind prefix and the
233 234 actual pattern."""
234 235 for prefix in 're', 'glob', 'path', 'relglob', 'relpath', 'relre':
235 236 if name.startswith(prefix + ':'): return name.split(':', 1)
236 237 return dflt_pat, name
237 238
238 239 def globre(pat, head='^', tail='$'):
239 240 "convert a glob pattern into a regexp"
240 241 i, n = 0, len(pat)
241 242 res = ''
242 243 group = False
243 244 def peek(): return i < n and pat[i]
244 245 while i < n:
245 246 c = pat[i]
246 247 i = i+1
247 248 if c == '*':
248 249 if peek() == '*':
249 250 i += 1
250 251 res += '.*'
251 252 else:
252 253 res += '[^/]*'
253 254 elif c == '?':
254 255 res += '.'
255 256 elif c == '[':
256 257 j = i
257 258 if j < n and pat[j] in '!]':
258 259 j += 1
259 260 while j < n and pat[j] != ']':
260 261 j += 1
261 262 if j >= n:
262 263 res += '\\['
263 264 else:
264 265 stuff = pat[i:j].replace('\\','\\\\')
265 266 i = j + 1
266 267 if stuff[0] == '!':
267 268 stuff = '^' + stuff[1:]
268 269 elif stuff[0] == '^':
269 270 stuff = '\\' + stuff
270 271 res = '%s[%s]' % (res, stuff)
271 272 elif c == '{':
272 273 group = True
273 274 res += '(?:'
274 275 elif c == '}' and group:
275 276 res += ')'
276 277 group = False
277 278 elif c == ',' and group:
278 279 res += '|'
279 280 elif c == '\\':
280 281 p = peek()
281 282 if p:
282 283 i += 1
283 284 res += re.escape(p)
284 285 else:
285 286 res += re.escape(c)
286 287 else:
287 288 res += re.escape(c)
288 289 return head + res + tail
289 290
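The distinction `globre` draws between `*` and `**` can be checked directly. The regexes below are hand-written to match what the translation above produces (`*` becomes `[^/]*`, `**` becomes `.*`, with `^`/`$` as the default head and tail); they are illustrations, not calls into this module:

```python
import re

# hand-written equivalents of globre('foo/*') and globre('foo/**')
single = re.compile(r'^foo/[^/]*$')   # '*' stops at directory separators
double = re.compile(r'^foo/.*$')      # '**' crosses them

assert single.match('foo/bar')
assert not single.match('foo/bar/baz')
assert double.match('foo/bar/baz')
```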
290 291 _globchars = {'[': 1, '{': 1, '*': 1, '?': 1}
291 292
292 293 def pathto(n1, n2):
293 294 '''return the relative path from one place to another.
294 295 n1 should use os.sep to separate directories
295 296 n2 should use "/" to separate directories
296 297 returns an os.sep-separated path.
297 298 '''
298 299 if not n1: return localpath(n2)
299 300 a, b = n1.split(os.sep), n2.split('/')
300 301 a.reverse()
301 302 b.reverse()
302 303 while a and b and a[-1] == b[-1]:
303 304 a.pop()
304 305 b.pop()
305 306 b.reverse()
306 307 return os.sep.join((['..'] * len(a)) + b)
307 308
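A self-contained sketch of the `pathto` algorithm: strip the common leading components, then climb out of what remains of `n1` with `..` entries. For simplicity this sketch (hypothetical name `pathto_sketch`) uses `/` on both sides and skips the `localpath` conversion of the original:

```python
def pathto_sketch(n1, n2, sep='/'):
    # mirror of pathto(): drop the shared prefix, then '..' out of n1
    if not n1:
        return n2
    a, b = n1.split(sep), n2.split('/')
    a.reverse()
    b.reverse()
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    return sep.join(['..'] * len(a) + b)

assert pathto_sketch('a/b/c', 'a/b/d') == '../d'
```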
308 309 def canonpath(root, cwd, myname):
309 310 """return the canonical path of myname, given cwd and root"""
310 311 if root == os.sep:
311 312 rootsep = os.sep
312 313 elif root.endswith(os.sep):
313 314 rootsep = root
314 315 else:
315 316 rootsep = root + os.sep
316 317 name = myname
317 318 if not os.path.isabs(name):
318 319 name = os.path.join(root, cwd, name)
319 320 name = os.path.normpath(name)
320 321 if name != rootsep and name.startswith(rootsep):
321 322 name = name[len(rootsep):]
322 323 audit_path(name)
323 324 return pconvert(name)
324 325 elif name == root:
325 326 return ''
326 327 else:
327 328 # Determine whether `name' is in the hierarchy at or beneath `root',
328 329 # by iterating name=dirname(name) until that causes no change (can't
329 330 # check name == '/', because that doesn't work on windows). For each
330 331 # `name', compare dev/inode numbers. If they match, the list `rel'
331 332 # holds the reversed list of components making up the relative file
332 333 # name we want.
333 334 root_st = os.stat(root)
334 335 rel = []
335 336 while True:
336 337 try:
337 338 name_st = os.stat(name)
338 339 except OSError:
339 340 break
340 341 if samestat(name_st, root_st):
341 342 rel.reverse()
342 343 name = os.path.join(*rel)
343 344 audit_path(name)
344 345 return pconvert(name)
345 346 dirname, basename = os.path.split(name)
346 347 rel.append(basename)
347 348 if dirname == name:
348 349 break
349 350 name = dirname
350 351
351 352 raise Abort('%s not under root' % myname)
352 353
353 354 def matcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
354 355 return _matcher(canonroot, cwd, names, inc, exc, head, 'glob', src)
355 356
356 357 def cmdmatcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
357 358 if os.name == 'nt':
358 359 dflt_pat = 'glob'
359 360 else:
360 361 dflt_pat = 'relpath'
361 362 return _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src)
362 363
363 364 def _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src):
364 365 """build a function to match a set of file patterns
365 366
366 367 arguments:
367 368 canonroot - the canonical root of the tree you're matching against
368 369 cwd - the current working directory, if relevant
369 370 names - patterns to find
370 371 inc - patterns to include
371 372 exc - patterns to exclude
372 373 head - a regex to prepend to patterns to control whether a match is rooted
373 374
374 375 a pattern is one of:
375 376 'glob:<rooted glob>'
376 377 're:<rooted regexp>'
377 378 'path:<rooted path>'
378 379 'relglob:<relative glob>'
379 380 'relpath:<relative path>'
380 381 'relre:<relative regexp>'
381 382 '<rooted path or regexp>'
382 383
383 384 returns:
384 385 a 3-tuple containing
385 386 - list of explicit non-pattern names passed in
386 387 - a bool match(filename) function
387 388 - a bool indicating if any patterns were passed in
388 389
389 390 todo:
390 391 make head regex a rooted bool
391 392 """
392 393
393 394 def contains_glob(name):
394 395 for c in name:
395 396 if c in _globchars: return True
396 397 return False
397 398
398 399 def regex(kind, name, tail):
399 400 '''convert a pattern into a regular expression'''
400 401 if kind == 're':
401 402 return name
402 403 elif kind == 'path':
403 404 return '^' + re.escape(name) + '(?:/|$)'
404 405 elif kind == 'relglob':
405 406 return head + globre(name, '(?:|.*/)', tail)
406 407 elif kind == 'relpath':
407 408 return head + re.escape(name) + tail
408 409 elif kind == 'relre':
409 410 if name.startswith('^'):
410 411 return name
411 412 return '.*' + name
412 413 return head + globre(name, '', tail)
413 414
414 415 def matchfn(pats, tail):
415 416 """build a matching function from a set of patterns"""
416 417 if not pats:
417 418 return
418 419 matches = []
419 420 for k, p in pats:
420 421 try:
421 422 pat = '(?:%s)' % regex(k, p, tail)
422 423 matches.append(re.compile(pat).match)
423 424 except re.error:
424 425 if src: raise Abort("%s: invalid pattern (%s): %s" % (src, k, p))
425 426 else: raise Abort("invalid pattern (%s): %s" % (k, p))
426 427
427 428 def buildfn(text):
428 429 for m in matches:
429 430 r = m(text)
430 431 if r:
431 432 return r
432 433
433 434 return buildfn
434 435
435 436 def globprefix(pat):
436 437 '''return the non-glob prefix of a path, e.g. foo/* -> foo'''
437 438 root = []
438 439 for p in pat.split(os.sep):
439 440 if contains_glob(p): break
440 441 root.append(p)
441 442 return '/'.join(root)
442 443
443 444 pats = []
444 445 files = []
445 446 roots = []
446 447 for kind, name in [patkind(p, dflt_pat) for p in names]:
447 448 if kind in ('glob', 'relpath'):
448 449 name = canonpath(canonroot, cwd, name)
449 450 if name == '':
450 451 kind, name = 'glob', '**'
451 452 if kind in ('glob', 'path', 're'):
452 453 pats.append((kind, name))
453 454 if kind == 'glob':
454 455 root = globprefix(name)
455 456 if root: roots.append(root)
456 457 elif kind == 'relpath':
457 458 files.append((kind, name))
458 459 roots.append(name)
459 460
460 461 patmatch = matchfn(pats, '$') or always
461 462 filematch = matchfn(files, '(?:/|$)') or always
462 463 incmatch = always
463 464 if inc:
464 465 inckinds = [patkind(canonpath(canonroot, cwd, i)) for i in inc]
465 466 incmatch = matchfn(inckinds, '(?:/|$)')
466 467 excmatch = lambda fn: False
467 468 if exc:
468 469 exckinds = [patkind(canonpath(canonroot, cwd, x)) for x in exc]
469 470 excmatch = matchfn(exckinds, '(?:/|$)')
470 471
471 472 return (roots,
472 473 lambda fn: (incmatch(fn) and not excmatch(fn) and
473 474 (fn.endswith('/') or
474 475 (not pats and not files) or
475 476 (pats and patmatch(fn)) or
476 477 (files and filematch(fn)))),
477 478 (inc or exc or (pats and pats != [('glob', '**')])) and True)
478 479
479 480 def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None):
480 481 '''enhanced shell command execution.
481 482 run with environment maybe modified, maybe in different dir.
482 483
483 484 if command fails and onerr is None, return status. if ui object,
484 485 print error message and return status, else raise onerr object as
485 486 exception.'''
486 487 def py2shell(val):
487 488 'convert python object into string that is useful to shell'
488 489 if val in (None, False):
489 490 return '0'
490 491 if val == True:
491 492 return '1'
492 493 return str(val)
493 494 oldenv = {}
494 495 for k in environ:
495 496 oldenv[k] = os.environ.get(k)
496 497 if cwd is not None:
497 498 oldcwd = os.getcwd()
498 499 try:
499 500 for k, v in environ.iteritems():
500 501 os.environ[k] = py2shell(v)
501 502 if cwd is not None and oldcwd != cwd:
502 503 os.chdir(cwd)
503 504 rc = os.system(cmd)
504 505 if rc and onerr:
505 506 errmsg = '%s %s' % (os.path.basename(cmd.split(None, 1)[0]),
506 507 explain_exit(rc)[0])
507 508 if errprefix:
508 509 errmsg = '%s: %s' % (errprefix, errmsg)
509 510 try:
510 511 onerr.warn(errmsg + '\n')
511 512 except AttributeError:
512 513 raise onerr(errmsg)
513 514 return rc
514 515 finally:
515 516 for k, v in oldenv.iteritems():
516 517 if v is None:
517 518 del os.environ[k]
518 519 else:
519 520 os.environ[k] = v
520 521 if cwd is not None and oldcwd != cwd:
521 522 os.chdir(oldcwd)
522 523
523 524 def rename(src, dst):
524 525 """forcibly rename a file"""
525 526 try:
526 527 os.rename(src, dst)
527 528 except OSError, err:
528 529 # on windows, rename to existing file is not allowed, so we
529 530 # must delete destination first. but if file is open, unlink
530 531 # schedules it for delete but does not delete it. rename
531 532 # happens immediately even for open files, so we create
532 533 # temporary file, delete it, rename destination to that name,
533 534 # then delete that. then rename is safe to do.
534 535 fd, temp = tempfile.mkstemp(dir=os.path.dirname(dst) or '.')
535 536 os.close(fd)
536 537 os.unlink(temp)
537 538 os.rename(dst, temp)
538 539 os.unlink(temp)
539 540 os.rename(src, dst)
540 541
541 542 def unlink(f):
542 543 """unlink and remove the directory if it is empty"""
543 544 os.unlink(f)
544 545 # try removing directories that might now be empty
545 546 try:
546 547 os.removedirs(os.path.dirname(f))
547 548 except OSError:
548 549 pass
549 550
550 551 def copyfile(src, dest):
551 552 "copy a file, preserving mode"
552 553 try:
553 554 shutil.copyfile(src, dest)
554 555 shutil.copymode(src, dest)
555 556 except shutil.Error, inst:
556 557 raise Abort(str(inst))
557 558
558 559 def copyfiles(src, dst, hardlink=None):
559 560 """Copy a directory tree using hardlinks if possible"""
560 561
561 562 if hardlink is None:
562 563 hardlink = (os.stat(src).st_dev ==
563 564 os.stat(os.path.dirname(dst)).st_dev)
564 565
565 566 if os.path.isdir(src):
566 567 os.mkdir(dst)
567 568 for name in os.listdir(src):
568 569 srcname = os.path.join(src, name)
569 570 dstname = os.path.join(dst, name)
570 571 copyfiles(srcname, dstname, hardlink)
571 572 else:
572 573 if hardlink:
573 574 try:
574 575 os_link(src, dst)
575 576 except (IOError, OSError):
576 577 hardlink = False
577 578 shutil.copy(src, dst)
578 579 else:
579 580 shutil.copy(src, dst)
580 581
581 582 def audit_path(path):
582 583 """Abort if path contains dangerous components"""
583 584 parts = os.path.normcase(path).split(os.sep)
584 585 if (os.path.splitdrive(path)[0] or parts[0] in ('.hg', '')
585 586 or os.pardir in parts):
586 587 raise Abort(_("path contains illegal component: %s\n") % path)
587 588
588 589 def _makelock_file(info, pathname):
589 590 ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
590 591 os.write(ld, info)
591 592 os.close(ld)
592 593
593 594 def _readlock_file(pathname):
594 595 return posixfile(pathname).read()
595 596
596 597 def nlinks(pathname):
597 598 """Return number of hardlinks for the given file."""
598 599 return os.lstat(pathname).st_nlink
599 600
600 601 if hasattr(os, 'link'):
601 602 os_link = os.link
602 603 else:
603 604 def os_link(src, dst):
604 605 raise OSError(0, _("Hardlinks not supported"))
605 606
606 607 def fstat(fp):
607 608 '''stat file object that may not have fileno method.'''
608 609 try:
609 610 return os.fstat(fp.fileno())
610 611 except AttributeError:
611 612 return os.stat(fp.name)
612 613
613 614 posixfile = file
614 615
615 616 def is_win_9x():
616 617 '''return true if run on windows 95, 98 or me.'''
617 618 try:
618 619 return sys.getwindowsversion()[3] == 1
619 620 except AttributeError:
620 621 return os.name == 'nt' and 'command' in os.environ.get('comspec', '')
621 622
622 623 getuser_fallback = None
623 624
624 625 def getuser():
625 626 '''return name of current user'''
626 627 try:
627 628 return getpass.getuser()
628 629 except ImportError:
629 630 # import of pwd will fail on windows - try fallback
630 631 if getuser_fallback:
631 632 return getuser_fallback()
632 633 # raised if win32api not available
633 634 raise Abort(_('user name not available - set USERNAME '
634 635 'environment variable'))
635 636
636 637 def username(uid=None):
637 638 """Return the name of the user with the given uid.
638 639
639 640 If uid is None, return the name of the current user."""
640 641 try:
641 642 import pwd
642 643 if uid is None:
643 644 uid = os.getuid()
644 645 try:
645 646 return pwd.getpwuid(uid)[0]
646 647 except KeyError:
647 648 return str(uid)
648 649 except ImportError:
649 650 return None
650 651
651 652 def groupname(gid=None):
652 653 """Return the name of the group with the given gid.
653 654
654 655 If gid is None, return the name of the current group."""
655 656 try:
656 657 import grp
657 658 if gid is None:
658 659 gid = os.getgid()
659 660 try:
660 661 return grp.getgrgid(gid)[0]
661 662 except KeyError:
662 663 return str(gid)
663 664 except ImportError:
664 665 return None
665 666
666 667 # File system features
667 668
668 669 def checkfolding(path):
669 670 """
670 671 Check whether the given path is on a case-sensitive filesystem
671 672
672 673 Requires a path (like /foo/.hg) ending with a foldable final
673 674 directory component.
674 675 """
675 676 s1 = os.stat(path)
676 677 d, b = os.path.split(path)
677 678 p2 = os.path.join(d, b.upper())
678 679 if path == p2:
679 680 p2 = os.path.join(d, b.lower())
680 681 try:
681 682 s2 = os.stat(p2)
682 683 if s2 == s1:
683 684 return False
684 685 return True
685 686 except:
686 687 return True
687 688
688 689 # Platform specific variants
689 690 if os.name == 'nt':
690 691 demandload(globals(), "msvcrt")
691 692 nulldev = 'NUL:'
692 693
693 694 class winstdout:
694 695 '''stdout on windows misbehaves if sent through a pipe'''
695 696
696 697 def __init__(self, fp):
697 698 self.fp = fp
698 699
699 700 def __getattr__(self, key):
700 701 return getattr(self.fp, key)
701 702
702 703 def close(self):
703 704 try:
704 705 self.fp.close()
705 706 except: pass
706 707
707 708 def write(self, s):
708 709 try:
709 710 return self.fp.write(s)
710 711 except IOError, inst:
711 712 if inst.errno != 0: raise
712 713 self.close()
713 714 raise IOError(errno.EPIPE, 'Broken pipe')
714 715
715 716 sys.stdout = winstdout(sys.stdout)
716 717
717 718 def system_rcpath():
718 719 try:
719 720 return system_rcpath_win32()
720 721 except:
721 722 return [r'c:\mercurial\mercurial.ini']
722 723
723 724 def os_rcpath():
724 725 '''return default os-specific hgrc search path'''
725 726 path = system_rcpath()
726 727 path.append(user_rcpath())
727 728 userprofile = os.environ.get('USERPROFILE')
728 729 if userprofile:
729 730 path.append(os.path.join(userprofile, 'mercurial.ini'))
730 731 return path
731 732
732 733 def user_rcpath():
733 734 '''return os-specific hgrc search path to the user dir'''
734 735 return os.path.join(os.path.expanduser('~'), 'mercurial.ini')
735 736
736 737 def parse_patch_output(output_line):
737 738 """parses the output produced by patch and returns the file name"""
738 739 pf = output_line[14:]
739 740 if pf[0] == '`':
740 741 pf = pf[1:-1] # Remove the quotes
741 742 return pf
742 743
743 744 def testpid(pid):
744 745 '''return False if pid dead, True if running or not known'''
745 746 return True
746 747
747 748 def is_exec(f, last):
748 749 return last
749 750
750 751 def set_exec(f, mode):
751 752 pass
752 753
753 754 def set_binary(fd):
754 755 msvcrt.setmode(fd.fileno(), os.O_BINARY)
755 756
756 757 def pconvert(path):
757 758 return path.replace("\\", "/")
758 759
759 760 def localpath(path):
760 761 return path.replace('/', '\\')
761 762
762 763 def normpath(path):
763 764 return pconvert(os.path.normpath(path))
764 765
765 766 makelock = _makelock_file
766 767 readlock = _readlock_file
767 768
768 769 def samestat(s1, s2):
769 770 return False
770 771
771 772 def shellquote(s):
772 773 return '"%s"' % s.replace('"', '\\"')
773 774
774 775 def explain_exit(code):
775 776 return _("exited with status %d") % code, code
776 777
777 778 # if you change this stub into a real check, please try to implement the
778 779 # username and groupname functions above, too.
779 780 def isowner(fp, st=None):
780 781 return True
781 782
782 783 try:
783 784 # override functions with win32 versions if possible
784 785 from util_win32 import *
785 786 if not is_win_9x():
786 787 posixfile = posixfile_nt
787 788 except ImportError:
788 789 pass
789 790
790 791 else:
791 792 nulldev = '/dev/null'
792 793
793 794 def rcfiles(path):
794 795 rcs = [os.path.join(path, 'hgrc')]
795 796 rcdir = os.path.join(path, 'hgrc.d')
796 797 try:
797 798 rcs.extend([os.path.join(rcdir, f) for f in os.listdir(rcdir)
798 799 if f.endswith(".rc")])
799 800 except OSError:
800 801 pass
801 802 return rcs
802 803
803 804 def os_rcpath():
804 805 '''return default os-specific hgrc search path'''
805 806 path = []
806 807 # old mod_python does not set sys.argv
807 808 if len(getattr(sys, 'argv', [])) > 0:
808 809 path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
809 810 '/../etc/mercurial'))
810 811 path.extend(rcfiles('/etc/mercurial'))
811 812 path.append(os.path.expanduser('~/.hgrc'))
812 813 path = [os.path.normpath(f) for f in path]
813 814 return path
814 815
815 816 def parse_patch_output(output_line):
816 817 """parses the output produced by patch and returns the file name"""
817 818 pf = output_line[14:]
818 819 if pf.startswith("'") and pf.endswith("'") and " " in pf:
819 820 pf = pf[1:-1] # Remove the quotes
820 821 return pf
821 822
822 823 def is_exec(f, last):
823 824 """check whether a file is executable"""
824 825 return (os.lstat(f).st_mode & 0100 != 0)
825 826
826 827 def set_exec(f, mode):
827 828 s = os.lstat(f).st_mode
828 829 if (s & 0100 != 0) == mode:
829 830 return
830 831 if mode:
831 832 # Turn on +x for every +r bit when making a file executable
832 833 # and obey umask.
833 834 umask = os.umask(0)
834 835 os.umask(umask)
835 836 os.chmod(f, s | (s & 0444) >> 2 & ~umask)
836 837 else:
837 838 os.chmod(f, s & 0666)
838 839
839 840 def set_binary(fd):
840 841 pass
841 842
842 843 def pconvert(path):
843 844 return path
844 845
845 846 def localpath(path):
846 847 return path
847 848
848 849 normpath = os.path.normpath
849 850 samestat = os.path.samestat
850 851
851 852 def makelock(info, pathname):
852 853 try:
853 854 os.symlink(info, pathname)
854 855 except OSError, why:
855 856 if why.errno == errno.EEXIST:
856 857 raise
857 858 else:
858 859 _makelock_file(info, pathname)
859 860
860 861 def readlock(pathname):
861 862 try:
862 863 return os.readlink(pathname)
863 864 except OSError, why:
864 865 if why.errno == errno.EINVAL:
865 866 return _readlock_file(pathname)
866 867 else:
867 868 raise
868 869
869 870 def shellquote(s):
870 871 return "'%s'" % s.replace("'", "'\\''")
871 872
872 873 def testpid(pid):
873 874 '''return False if pid dead, True if running or not sure'''
874 875 try:
875 876 os.kill(pid, 0)
876 877 return True
877 878 except OSError, inst:
878 879 return inst.errno != errno.ESRCH
879 880
880 881 def explain_exit(code):
881 882 """return a 2-tuple (desc, code) describing a process's status"""
882 883 if os.WIFEXITED(code):
883 884 val = os.WEXITSTATUS(code)
884 885 return _("exited with status %d") % val, val
885 886 elif os.WIFSIGNALED(code):
886 887 val = os.WTERMSIG(code)
887 888 return _("killed by signal %d") % val, val
888 889 elif os.WIFSTOPPED(code):
889 890 val = os.WSTOPSIG(code)
890 891 return _("stopped by signal %d") % val, val
891 892 raise ValueError(_("invalid exit code"))
892 893
893 894 def isowner(fp, st=None):
894 895 """Return True if the file object fp belongs to the current user.
895 896
896 897 The return value of a util.fstat(fp) may be passed as the st argument.
897 898 """
898 899 if st is None:
899 900 st = fstat(fp)
900 901 return st.st_uid == os.getuid()
901 902
902 903
903 904 def opener(base, audit=True):
904 905 """
905 906 return a function that opens files relative to base
906 907
907 908 this function is used to hide the details of COW semantics and
908 909 remote file access from higher level code.
909 910 """
910 911 p = base
911 912 audit_p = audit
912 913
913 914 def mktempcopy(name):
914 915 d, fn = os.path.split(name)
915 916 fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
916 917 os.close(fd)
917 918 ofp = posixfile(temp, "wb")
918 919 try:
919 920 try:
920 921 ifp = posixfile(name, "rb")
921 922 except IOError, inst:
922 923 if not getattr(inst, 'filename', None):
923 924 inst.filename = name
924 925 raise
925 926 for chunk in filechunkiter(ifp):
926 927 ofp.write(chunk)
927 928 ifp.close()
928 929 ofp.close()
929 930 except:
930 931 try: os.unlink(temp)
931 932 except: pass
932 933 raise
933 934 st = os.lstat(name)
934 935 os.chmod(temp, st.st_mode)
935 936 return temp
936 937
937 938 class atomictempfile(posixfile):
938 939 """the file will only be copied when rename is called"""
939 940 def __init__(self, name, mode):
940 941 self.__name = name
941 942 self.temp = mktempcopy(name)
942 943 posixfile.__init__(self, self.temp, mode)
943 944 def rename(self):
944 945 if not self.closed:
945 946 posixfile.close(self)
946 947 rename(self.temp, localpath(self.__name))
947 948 def __del__(self):
948 949 if not self.closed:
949 950 try:
950 951 os.unlink(self.temp)
951 952 except: pass
952 953 posixfile.close(self)
953 954
954 955 class atomicfile(atomictempfile):
955 956 """the file will only be copied on close"""
956 957 def __init__(self, name, mode):
957 958 atomictempfile.__init__(self, name, mode)
958 959 def close(self):
959 960 self.rename()
960 961 def __del__(self):
961 962 self.rename()
962 963
963 964 def o(path, mode="r", text=False, atomic=False, atomictemp=False):
964 965 if audit_p:
965 966 audit_path(path)
966 967 f = os.path.join(p, path)
967 968
968 969 if not text:
969 970 mode += "b" # for that other OS
970 971
971 972 if mode[0] != "r":
972 973 try:
973 974 nlink = nlinks(f)
974 975 except OSError:
975 976 d = os.path.dirname(f)
976 977 if not os.path.isdir(d):
977 978 os.makedirs(d)
978 979 else:
979 980 if atomic:
980 981 return atomicfile(f, mode)
981 982 elif atomictemp:
982 983 return atomictempfile(f, mode)
983 984 if nlink > 1:
984 985 rename(mktempcopy(f), f)
985 986 return posixfile(f, mode)
986 987
987 988 return o
988 989
989 990 class chunkbuffer(object):
990 991 """Allow arbitrary sized chunks of data to be efficiently read from an
991 992 iterator over chunks of arbitrary size."""
992 993
993 994 def __init__(self, in_iter, targetsize = 2**16):
994 995 """in_iter is the iterator that's iterating over the input chunks.
995 996 targetsize is how big a buffer to try to maintain."""
996 997 self.in_iter = iter(in_iter)
997 998 self.buf = ''
998 999 self.targetsize = int(targetsize)
999 1000 if self.targetsize <= 0:
1000 1001 raise ValueError(_("targetsize must be greater than 0, was %d") %
1001 1002 targetsize)
1002 1003 self.iterempty = False
1003 1004
1004 1005 def fillbuf(self):
1005 1006 """Ignore target size; read every chunk from iterator until empty."""
1006 1007 if not self.iterempty:
1007 1008 collector = cStringIO.StringIO()
1008 1009 collector.write(self.buf)
1009 1010 for ch in self.in_iter:
1010 1011 collector.write(ch)
1011 1012 self.buf = collector.getvalue()
1012 1013 self.iterempty = True
1013 1014
1014 1015 def read(self, l):
1015 1016 """Read L bytes of data from the iterator of chunks of data.
1016 1017 Returns less than L bytes if the iterator runs dry."""
1017 1018 if l > len(self.buf) and not self.iterempty:
1018 1019 # Clamp to a multiple of self.targetsize
1019 1020 targetsize = self.targetsize * ((l // self.targetsize) + 1)
1020 1021 collector = cStringIO.StringIO()
1021 1022 collector.write(self.buf)
1022 1023 collected = len(self.buf)
1023 1024 for chunk in self.in_iter:
1024 1025 collector.write(chunk)
1025 1026 collected += len(chunk)
1026 1027 if collected >= targetsize:
1027 1028 break
1028 1029 if collected < targetsize:
1029 1030 self.iterempty = True
1030 1031 self.buf = collector.getvalue()
1031 1032 s, self.buf = self.buf[:l], buffer(self.buf, l)
1032 1033 return s
1033 1034
1034 1035 def filechunkiter(f, size=65536, limit=None):
1035 1036 """Create a generator that produces the data in the file size
1036 1037 (default 65536) bytes at a time, up to optional limit (default is
1037 1038 to read all data). Chunks may be less than size bytes if the
1038 1039 chunk is the last chunk in the file, or the file is a socket or
1039 1040 some other type of file that sometimes reads less data than is
1040 1041 requested."""
1041 1042 assert size >= 0
1042 1043 assert limit is None or limit >= 0
1043 1044 while True:
1044 1045 if limit is None: nbytes = size
1045 1046 else: nbytes = min(limit, size)
1046 1047 s = nbytes and f.read(nbytes)
1047 1048 if not s: break
1048 1049 if limit: limit -= len(s)
1049 1050 yield s
1050 1051
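The chunking loop above is easy to demonstrate against an in-memory file; this sketch (hypothetical name `filechunkiter_sketch`) reproduces the same size/limit logic:

```python
import io

def filechunkiter_sketch(f, size=65536, limit=None):
    # yield successive reads of up to `size` bytes, stopping at `limit`
    while True:
        nbytes = size if limit is None else min(limit, size)
        s = nbytes and f.read(nbytes)
        if not s:
            break
        if limit:
            limit -= len(s)
        yield s

chunks = list(filechunkiter_sketch(io.BytesIO(b'abcdefgh'), size=3))
assert chunks == [b'abc', b'def', b'gh']
```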
1051 1052 def makedate():
1052 1053 lt = time.localtime()
1053 1054 if lt[8] == 1 and time.daylight:
1054 1055 tz = time.altzone
1055 1056 else:
1056 1057 tz = time.timezone
1057 1058 return time.mktime(lt), tz
1058 1059
1059 1060 def datestr(date=None, format='%a %b %d %H:%M:%S %Y', timezone=True):
1060 1061 """represent a (unixtime, offset) tuple as a localized time.
1061 1062 unixtime is seconds since the epoch, and offset is the time zone's
1062 1063 number of seconds away from UTC. if timezone is false, do not
1063 1064 append time zone to string."""
1064 1065 t, tz = date or makedate()
1065 1066 s = time.strftime(format, time.gmtime(float(t) - tz))
1066 1067 if timezone:
1067 1068 s += " %+03d%02d" % (-tz / 3600, ((-tz % 3600) / 60))
1068 1069 return s
1069 1070
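The time zone suffix computed by `datestr` can be isolated into a small helper; this sketch uses explicit floor division (`//`) so it behaves the same under Python 3, where the original's `/` on ints would produce floats:

```python
def tzstr(tz):
    # render a UTC offset in seconds as e.g. "+0200"
    # (sketch of the " %+03d%02d" suffix in datestr, without the space)
    return "%+03d%02d" % (-tz // 3600, (-tz % 3600) // 60)

assert tzstr(-7200) == "+0200"   # two hours east of UTC
assert tzstr(18000) == "-0500"   # five hours west of UTC
```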
1070 1071 def strdate(string, format, defaults):
1071 1072 """parse a localized time string and return a (unixtime, offset) tuple.
1072 1073 if the string cannot be parsed, ValueError is raised."""
1073 1074 def timezone(string):
1074 1075 tz = string.split()[-1]
1075 1076 if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
1076 1077 tz = int(tz)
1077 1078 offset = - 3600 * (tz / 100) - 60 * (tz % 100)
1078 1079 return offset
1079 1080 if tz == "GMT" or tz == "UTC":
1080 1081 return 0
1081 1082 return None
1082 1083
1083 1084 # NOTE: unixtime = localunixtime + offset
1084 1085 offset, date = timezone(string), string
1085 1086 if offset != None:
1086 1087 date = " ".join(string.split()[:-1])
1087 1088
1088 1089 # add missing elements from defaults
1089 1090 for part in defaults:
1090 1091 found = [True for p in part if ("%"+p) in format]
1091 1092 if not found:
1092 1093 date += "@" + defaults[part]
1093 1094 format += "@%" + part[0]
1094 1095
1095 1096 timetuple = time.strptime(date, format)
1096 1097 localunixtime = int(calendar.timegm(timetuple))
1097 1098 if offset is None:
1098 1099 # local timezone
1099 1100 unixtime = int(time.mktime(timetuple))
1100 1101 offset = unixtime - localunixtime
1101 1102 else:
1102 1103 unixtime = localunixtime + offset
1103 1104 return unixtime, offset
1104 1105
1105 1106 def parsedate(string, formats=None, defaults=None):
1106 1107 """parse a localized time string and return a (unixtime, offset) tuple.
1107 1108 The date may be a "unixtime offset" string or in one of the specified
1108 1109 formats."""
1109 1110 if not string:
1110 1111 return 0, 0
1111 1112 if not formats:
1112 1113 formats = defaultdateformats
1113 1114 string = string.strip()
1114 1115 try:
1115 1116 when, offset = map(int, string.split(' '))
1116 1117 except ValueError:
1117 1118 # fill out defaults
1118 1119 if not defaults:
1119 1120 defaults = {}
1120 1121 now = makedate()
1121 1122 for part in "d mb yY HI M S".split():
1122 1123 if part not in defaults:
1123 1124 if part[0] in "HMS":
1124 1125 defaults[part] = "00"
1125 1126 elif part[0] in "dm":
1126 1127 defaults[part] = "1"
1127 1128 else:
1128 1129 defaults[part] = datestr(now, "%" + part[0], False)
1129 1130
1130 1131 for format in formats:
1131 1132 try:
1132 1133 when, offset = strdate(string, format, defaults)
1133 1134 except ValueError:
1134 1135 pass
1135 1136 else:
1136 1137 break
1137 1138 else:
1138 1139 raise Abort(_('invalid date: %r') % string)
1139 1140 # validate explicit (probably user-specified) date and
1140 1141 # time zone offset. values must fit in signed 32 bits for
1141 1142 # current 32-bit linux runtimes. timezones go from UTC-12
1142 1143 # to UTC+14
1143 1144 if abs(when) > 0x7fffffff:
1144 1145 raise Abort(_('date exceeds 32 bits: %d') % when)
1145 1146 if offset < -50400 or offset > 43200:
1146 1147 raise Abort(_('impossible time zone offset: %d') % offset)
1147 1148 return when, offset
1148 1149
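`parsedate` above first tries Mercurial's internal `"<unixtime> <offset>"` form before falling back to the format list, then range-checks the result against 32-bit time and the UTC-12..UTC+14 timezone window. A standalone sketch of that fast path and validation (illustrative only; it raises `ValueError` where the real code raises `Abort`):

```python
def parse_internal_date(string):
    """Sketch of parsedate()'s fast path for '<unixtime> <offset>'
    strings, with the same 32-bit and timezone-range checks as above."""
    when, offset = map(int, string.strip().split(' '))
    if abs(when) > 0x7fffffff:
        raise ValueError('date exceeds 32 bits: %d' % when)
    if offset < -50400 or offset > 43200:  # UTC+14 .. UTC-12, in seconds
        raise ValueError('impossible time zone offset: %d' % offset)
    return when, offset

print(parse_internal_date("1000000000 -7200"))  # (1000000000, -7200)
```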
1149 1150 def matchdate(date):
1150 1151 """Return a function that matches a given date match specifier
1151 1152
1152 1153 Formats include:
1153 1154
1154 1155 '{date}' match a given date to the accuracy provided
1155 1156
1156 1157 '<{date}' on or before a given date
1157 1158
1158 1159 '>{date}' on or after a given date
1159 1160
1160 1161 """
1161 1162
1162 1163 def lower(date):
1163 1164 return parsedate(date, extendeddateformats)[0]
1164 1165
1165 1166 def upper(date):
1166 1167 d = dict(mb="12", HI="23", M="59", S="59")
1167 1168 for days in "31 30 29".split():
1168 1169 try:
1169 1170 d["d"] = days
1170 1171 return parsedate(date, extendeddateformats, d)[0]
1171 1172 except Exception:
1172 1173 pass
1173 1174 d["d"] = "28"
1174 1175 return parsedate(date, extendeddateformats, d)[0]
1175 1176
1176 1177 if date[0] == "<":
1177 1178 when = upper(date[1:])
1178 1179 return lambda x: x <= when
1179 1180 elif date[0] == ">":
1180 1181 when = lower(date[1:])
1181 1182 return lambda x: x >= when
1182 1183 elif date[0] == "-":
1183 1184 try:
1184 1185 days = int(date[1:])
1185 1186 except ValueError:
1186 1187 raise Abort(_("invalid day spec: %s") % date[1:])
1187 1188 when = makedate()[0] - days * 3600 * 24
1188 1189 return lambda x: x >= when
1189 1190 elif " to " in date:
1190 1191 a, b = date.split(" to ")
1191 1192 start, stop = lower(a), upper(b)
1192 1193 return lambda x: x >= start and x <= stop
1193 1194 else:
1194 1195 start, stop = lower(date), upper(date)
1195 1196 return lambda x: x >= start and x <= stop
1196 1197
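`matchdate` above compiles a specifier into a predicate over unixtimes: `'<{date}'` compares against the upper bound of the date's stated precision, `'>{date}'` against the lower bound, and a bare date matches the whole interval. A sketch of that dispatch with the bounds passed in already parsed, so it needs none of the date-parsing machinery (the helper name is illustrative):

```python
def make_matcher(spec, lower, upper):
    """Sketch of matchdate()'s dispatch; lower/upper are pre-parsed
    unixtime bounds standing in for parsedate() calls."""
    if spec.startswith("<"):
        return lambda x: x <= upper      # on or before
    if spec.startswith(">"):
        return lambda x: x >= lower      # on or after
    return lambda x: lower <= x <= upper # bare date: within its precision

m = make_matcher("<2006-12-01", lower=0, upper=100)
print(m(100), m(101))  # True False
```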
1197 1198 def shortuser(user):
1198 1199 """Return a short representation of a user name or email address."""
1199 1200 f = user.find('@')
1200 1201 if f >= 0:
1201 1202 user = user[:f]
1202 1203 f = user.find('<')
1203 1204 if f >= 0:
1204 1205 user = user[f+1:]
1205 1206 f = user.find(' ')
1206 1207 if f >= 0:
1207 1208 user = user[:f]
1208 1209 f = user.find('.')
1209 1210 if f >= 0:
1210 1211 user = user[:f]
1211 1212 return user
1212 1213
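`shortuser` above strips, in order: the mail domain, a leading `Name <` prefix, and anything after the first space or dot. Because the helper is self-contained, a verbatim copy can be exercised directly:

```python
def shortuser(user):
    """Copy of the helper above: reduce a "Name <local.part@host>"
    style string to the first dotted component of the local part."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]   # drop the mail domain
    f = user.find('<')
    if f >= 0:
        user = user[f+1:] # drop a leading "Name <" prefix
    f = user.find(' ')
    if f >= 0:
        user = user[:f]
    f = user.find('.')
    if f >= 0:
        user = user[:f]
    return user

print(shortuser("Bryan O'Sullivan <bos@serpentine.com>"))  # bos
print(shortuser("foo.bar@example.com"))                    # foo
```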
1213 1214 def ellipsis(text, maxlength=400):
1214 1215 """Trim string to at most maxlength (default: 400) characters."""
1215 1216 if len(text) <= maxlength:
1216 1217 return text
1217 1218 else:
1218 1219 return "%s..." % (text[:maxlength-3])
1219 1220
1220 1221 def walkrepos(path):
1221 1222 '''yield every hg repository under path, recursively.'''
1222 1223 def errhandler(err):
1223 1224 if err.filename == path:
1224 1225 raise err
1225 1226
1226 1227 for root, dirs, files in os.walk(path, onerror=errhandler):
1227 1228 for d in dirs:
1228 1229 if d == '.hg':
1229 1230 yield root
1230 1231 dirs[:] = []
1231 1232 break
1232 1233
1233 1234 _rcpath = None
1234 1235
1235 1236 def rcpath():
1236 1237 '''return hgrc search path. if env var HGRCPATH is set, use it.
1237 1238 for each item in path, if directory, use files ending in .rc,
1238 1239 else use item.
1239 1240 make HGRCPATH empty to only look in .hg/hgrc of current repo.
1240 1241 if no HGRCPATH, use default os-specific path.'''
1241 1242 global _rcpath
1242 1243 if _rcpath is None:
1243 1244 if 'HGRCPATH' in os.environ:
1244 1245 _rcpath = []
1245 1246 for p in os.environ['HGRCPATH'].split(os.pathsep):
1246 1247 if not p: continue
1247 1248 if os.path.isdir(p):
1248 1249 for f in os.listdir(p):
1249 1250 if f.endswith('.rc'):
1250 1251 _rcpath.append(os.path.join(p, f))
1251 1252 else:
1252 1253 _rcpath.append(p)
1253 1254 else:
1254 1255 _rcpath = os_rcpath()
1255 1256 return _rcpath
1256 1257
1257 1258 def bytecount(nbytes):
1258 1259 '''return byte count formatted as readable string, with units'''
1259 1260
1260 1261 units = (
1261 1262 (100, 1<<30, _('%.0f GB')),
1262 1263 (10, 1<<30, _('%.1f GB')),
1263 1264 (1, 1<<30, _('%.2f GB')),
1264 1265 (100, 1<<20, _('%.0f MB')),
1265 1266 (10, 1<<20, _('%.1f MB')),
1266 1267 (1, 1<<20, _('%.2f MB')),
1267 1268 (100, 1<<10, _('%.0f KB')),
1268 1269 (10, 1<<10, _('%.1f KB')),
1269 1270 (1, 1<<10, _('%.2f KB')),
1270 1271 (1, 1, _('%.0f bytes')),
1271 1272 )
1272 1273
1273 1274 for multiplier, divisor, format in units:
1274 1275 if nbytes >= divisor * multiplier:
1275 1276 return format % (nbytes / float(divisor))
1276 1277 return units[-1][2] % nbytes
1277 1278
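The units table in `bytecount` above is ordered from the coarsest threshold down; the first row whose `divisor * multiplier` fits selects both the unit and the precision (100 GB prints with no decimals, 1 GB with two). The same selection rule, minus the `_()` translation wrappers:

```python
units = (
    (100, 1 << 30, '%.0f GB'), (10, 1 << 30, '%.1f GB'), (1, 1 << 30, '%.2f GB'),
    (100, 1 << 20, '%.0f MB'), (10, 1 << 20, '%.1f MB'), (1, 1 << 20, '%.2f MB'),
    (100, 1 << 10, '%.0f KB'), (10, 1 << 10, '%.1f KB'), (1, 1 << 10, '%.2f KB'),
    (1, 1, '%.0f bytes'),
)

def bytecount(nbytes):
    """First row whose threshold fits picks the unit and precision."""
    for multiplier, divisor, fmt in units:
        if nbytes >= divisor * multiplier:
            return fmt % (nbytes / float(divisor))
    return units[-1][2] % nbytes  # 0 falls through to '0 bytes'

print(bytecount(1536))       # 1.50 KB
print(bytecount(123456789))  # 118 MB
```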
1278 1279 def drop_scheme(scheme, path):
1279 1280 sc = scheme + ':'
1280 1281 if path.startswith(sc):
1281 1282 path = path[len(sc):]
1282 1283 if path.startswith('//'):
1283 1284 path = path[2:]
1284 1285 return path
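`drop_scheme` above removes a known `scheme:` prefix and then an optional `//` authority marker. A verbatim copy with usage:

```python
def drop_scheme(scheme, path):
    """Copy of the helper above: strip 'scheme:' and a following '//'."""
    sc = scheme + ':'
    if path.startswith(sc):
        path = path[len(sc):]
    if path.startswith('//'):
        path = path[2:]
    return path

print(drop_scheme('file', 'file:///tmp/repo'))  # /tmp/repo
print(drop_scheme('file', 'file:/tmp/repo'))    # /tmp/repo
print(drop_scheme('http', 'file:///x'))         # file:///x (scheme mismatch)
```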
1 NO CONTENT: modified file, binary diff hidden
@@ -1,49 +1,54
1 1 #!/bin/sh
2 2
3 3 hg init t
4 4 cd t
5 5
6 6 # we need a repo with some legacy latin-1 changesets
7 7 hg unbundle $TESTDIR/legacy-encoding.hg
8 8 hg co
9 9
10 10 python << EOF
11 11 f = file('latin-1', 'w'); f.write("latin-1 e' encoded: \xe9"); f.close()
12 12 f = file('utf-8', 'w'); f.write("utf-8 e' encoded: \xc3\xa9"); f.close()
13 13 f = file('latin-1-tag', 'w'); f.write("\xe9"); f.close()
14 14 EOF
15 15
16 16 echo % should fail with encoding error
17 17 echo "plain old ascii" > a
18 18 hg st
19 19 HGENCODING=ascii hg ci -l latin-1 -d "0 0"
20 20
21 21 echo % these should work
22 22 echo "latin-1" > a
23 23 HGENCODING=latin-1 hg ci -l latin-1 -d "0 0"
24 24 echo "utf-8" > a
25 25 HGENCODING=utf-8 hg ci -l utf-8 -d "0 0"
26 26
27 27 HGENCODING=latin-1 hg tag -d "0 0" `cat latin-1-tag`
28 28 cp latin-1-tag .hg/branch
29 29 HGENCODING=latin-1 hg ci -d "0 0" -m 'latin1 branch'
30 30 rm .hg/branch
31 31
32 32 echo % ascii
33 33 hg --encoding ascii log
34 34 echo % latin-1
35 35 hg --encoding latin-1 log
36 36 echo % utf-8
37 37 hg --encoding utf-8 log
38 38 echo % ascii
39 39 HGENCODING=ascii hg tags
40 40 echo % latin-1
41 41 HGENCODING=latin-1 hg tags
42 42 echo % utf-8
43 43 HGENCODING=utf-8 hg tags
44 44 echo % ascii
45 45 HGENCODING=ascii hg branches
46 46 echo % latin-1
47 47 HGENCODING=latin-1 hg branches
48 48 echo % utf-8
49 49 HGENCODING=utf-8 hg branches
50
51 echo '[ui]' >> .hg/hgrc
52 echo 'fallbackencoding = euc-jp' >> .hg/hgrc
53 echo % utf-8
54 HGENCODING=utf-8 hg log
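The lines added at the end of this test exercise the commit's new `[ui] fallbackencoding` option: changelog text that is not valid UTF-8 is reinterpreted in the configured fallback (euc-jp here) instead of being garbled. A hedged Python sketch of that decode-with-fallback idea — the function name and structure are illustrative, not Mercurial's actual `tolocal()` internals:

```python
def tolocal_sketch(text_bytes, fallbackencoding='euc-jp'):
    """Illustrative decode chain: prefer UTF-8, then the configured
    fallback encoding, as [ui] fallbackencoding allows. A sketch of
    the idea only, not Mercurial's real implementation."""
    for enc in ('utf-8', fallbackencoding):
        try:
            return text_bytes.decode(enc)
        except UnicodeDecodeError:
            continue
    # last resort: decode permissively so log output never crashes
    return text_bytes.decode('utf-8', 'replace')

# euc-jp bytes for u'\u65e5\u672c\u8a9e' are not valid UTF-8,
# so the fallback kicks in:
print(tolocal_sketch(u'\u65e5\u672c\u8a9e'.encode('euc-jp')))
```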
@@ -1,118 +1,167
1 1 adding changesets
2 2 adding manifests
3 3 adding file changes
4 added 1 changesets with 1 changes to 1 files
4 added 2 changesets with 2 changes to 1 files
5 5 (run 'hg update' to get a working copy)
6 6 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
7 7 % should fail with encoding error
8 8 M a
9 9 ? latin-1
10 10 ? latin-1-tag
11 11 ? utf-8
12 12 abort: decoding near ' encoded: �': 'ascii' codec can't decode byte 0xe9 in position 20: ordinal not in range(128)!
13 13
14 14 transaction abort!
15 15 rollback completed
16 16 % these should work
17 17 % ascii
18 changeset: 4:d8a5d9eaf41e
18 changeset: 5:e4ed49b8a8f0
19 19 branch: ?
20 20 tag: tip
21 21 user: test
22 22 date: Thu Jan 01 00:00:00 1970 +0000
23 23 summary: latin1 branch
24 24
25 changeset: 3:5edfc7acb541
25 changeset: 4:a02ca5a58e99
26 26 user: test
27 27 date: Thu Jan 01 00:00:00 1970 +0000
28 summary: Added tag ? for changeset 91878608adb3
28 summary: Added tag ? for changeset d47908dab82f
29 29
30 changeset: 2:91878608adb3
30 changeset: 3:d47908dab82f
31 31 tag: ?
32 32 user: test
33 33 date: Thu Jan 01 00:00:00 1970 +0000
34 34 summary: utf-8 e' encoded: ?
35 35
36 changeset: 1:6355cacf842e
36 changeset: 2:9db1985f3097
37 37 user: test
38 38 date: Thu Jan 01 00:00:00 1970 +0000
39 39 summary: latin-1 e' encoded: ?
40 40
41 changeset: 1:af6e0db4427c
42 user: test
43 date: Thu Jan 01 00:00:00 1970 +0000
44 summary: euc-jp: ?????? = u'\u65e5\u672c\u8a9e'
45
41 46 changeset: 0:60aad1dd20a9
42 47 user: test
43 48 date: Thu Jan 01 00:00:00 1970 +0000
44 49 summary: latin-1 e': ?
45 50
46 51 % latin-1
47 changeset: 4:d8a5d9eaf41e
52 changeset: 5:e4ed49b8a8f0
48 53 branch: �
49 54 tag: tip
50 55 user: test
51 56 date: Thu Jan 01 00:00:00 1970 +0000
52 57 summary: latin1 branch
53 58
54 changeset: 3:5edfc7acb541
59 changeset: 4:a02ca5a58e99
55 60 user: test
56 61 date: Thu Jan 01 00:00:00 1970 +0000
57 summary: Added tag � for changeset 91878608adb3
62 summary: Added tag � for changeset d47908dab82f
58 63
59 changeset: 2:91878608adb3
64 changeset: 3:d47908dab82f
60 65 tag: �
61 66 user: test
62 67 date: Thu Jan 01 00:00:00 1970 +0000
63 68 summary: utf-8 e' encoded: �
64 69
65 changeset: 1:6355cacf842e
70 changeset: 2:9db1985f3097
66 71 user: test
67 72 date: Thu Jan 01 00:00:00 1970 +0000
68 73 summary: latin-1 e' encoded: �
69 74
75 changeset: 1:af6e0db4427c
76 user: test
77 date: Thu Jan 01 00:00:00 1970 +0000
78 summary: euc-jp: ���ܸ� = u'\u65e5\u672c\u8a9e'
79
70 80 changeset: 0:60aad1dd20a9
71 81 user: test
72 82 date: Thu Jan 01 00:00:00 1970 +0000
73 83 summary: latin-1 e': �
74 84
75 85 % utf-8
76 changeset: 4:d8a5d9eaf41e
86 changeset: 5:e4ed49b8a8f0
77 87 branch: é
78 88 tag: tip
79 89 user: test
80 90 date: Thu Jan 01 00:00:00 1970 +0000
81 91 summary: latin1 branch
82 92
83 changeset: 3:5edfc7acb541
93 changeset: 4:a02ca5a58e99
84 94 user: test
85 95 date: Thu Jan 01 00:00:00 1970 +0000
86 summary: Added tag é for changeset 91878608adb3
96 summary: Added tag é for changeset d47908dab82f
87 97
88 changeset: 2:91878608adb3
98 changeset: 3:d47908dab82f
89 99 tag: é
90 100 user: test
91 101 date: Thu Jan 01 00:00:00 1970 +0000
92 102 summary: utf-8 e' encoded: é
93 103
94 changeset: 1:6355cacf842e
104 changeset: 2:9db1985f3097
95 105 user: test
96 106 date: Thu Jan 01 00:00:00 1970 +0000
97 107 summary: latin-1 e' encoded: é
98 108
109 changeset: 1:af6e0db4427c
110 user: test
111 date: Thu Jan 01 00:00:00 1970 +0000
112 summary: euc-jp: ÆüËܸì = u'\u65e5\u672c\u8a9e'
113
99 114 changeset: 0:60aad1dd20a9
100 115 user: test
101 116 date: Thu Jan 01 00:00:00 1970 +0000
102 117 summary: latin-1 e': é
103 118
104 119 % ascii
105 tip 4:d8a5d9eaf41e
106 ? 2:91878608adb3
120 tip 5:e4ed49b8a8f0
121 ? 3:d47908dab82f
107 122 % latin-1
108 tip 4:d8a5d9eaf41e
109 2:91878608adb3
123 tip 5:e4ed49b8a8f0
124 3:d47908dab82f
125 % utf-8
126 tip 5:e4ed49b8a8f0
127 é 3:d47908dab82f
128 % ascii
129 ? 5:e4ed49b8a8f0
130 % latin-1
131 � 5:e4ed49b8a8f0
132 % utf-8
133 é 5:e4ed49b8a8f0
110 134 % utf-8
111 tip 4:d8a5d9eaf41e
112 é 2:91878608adb3
113 % ascii
114 ? 4:d8a5d9eaf41e
115 % latin-1
116 � 4:d8a5d9eaf41e
117 % utf-8
118 é 4:d8a5d9eaf41e
135 changeset: 5:e4ed49b8a8f0
136 branch: é
137 tag: tip
138 user: test
139 date: Thu Jan 01 00:00:00 1970 +0000
140 summary: latin1 branch
141
142 changeset: 4:a02ca5a58e99
143 user: test
144 date: Thu Jan 01 00:00:00 1970 +0000
145 summary: Added tag é for changeset d47908dab82f
146
147 changeset: 3:d47908dab82f
148 tag: é
149 user: test
150 date: Thu Jan 01 00:00:00 1970 +0000
151 summary: utf-8 e' encoded: é
152
153 changeset: 2:9db1985f3097
154 user: test
155 date: Thu Jan 01 00:00:00 1970 +0000
156 summary: latin-1 e' encoded: é
157
158 changeset: 1:af6e0db4427c
159 user: test
160 date: Thu Jan 01 00:00:00 1970 +0000
161 summary: euc-jp: 日本語 = u'\u65e5\u672c\u8a9e'
162
163 changeset: 0:60aad1dd20a9
164 user: test
165 date: Thu Jan 01 00:00:00 1970 +0000
166 summary: latin-1 e': �
167