merge
Vadim Gelfer -
r2638:8dadff05 merge default
@@ -1,452 +1,464 b''
1 1 HGRC(5)
2 2 =======
3 3 Bryan O'Sullivan <bos@serpentine.com>
4 4
5 5 NAME
6 6 ----
7 7 hgrc - configuration files for Mercurial
8 8
9 9 SYNOPSIS
10 10 --------
11 11
12 12 The Mercurial system uses a set of configuration files to control
13 13 aspects of its behaviour.
14 14
15 15 FILES
16 16 -----
17 17
18 18 Mercurial reads configuration data from several files, if they exist.
19 19 The names of these files depend on the system on which Mercurial is
20 20 installed.
21 21
22 22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 23 (Unix) <install-root>/etc/mercurial/hgrc::
24 24 Per-installation configuration files, searched for in the
25 25 directory where Mercurial is installed. For example, if installed
26 26 in /shared/tools, Mercurial will look in
27 27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 28 all Mercurial commands executed by any user in any directory.
29 29
30 30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 31 (Unix) /etc/mercurial/hgrc::
32 32 (Windows) C:\Mercurial\Mercurial.ini::
33 33 Per-system configuration files, for the system on which Mercurial
34 34 is running. Options in these files apply to all Mercurial
35 35 commands executed by any user in any directory. Options in these
36 36 files override per-installation options.
37 37
38 38 (Unix) $HOME/.hgrc::
39 39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 40 (Windows) $HOME\Mercurial.ini::
41 41 Per-user configuration file, for the user running Mercurial.
42 42 Options in this file apply to all Mercurial commands executed by
43 43 this user in any directory. Options in this file override
44 44 per-installation and per-system options.
45 45 On Windows systems, exactly one of these files is used, depending
46 46 on whether the HOME environment variable is defined.
47 47
48 48 (Unix, Windows) <repo>/.hg/hgrc::
49 49 Per-repository configuration options that only apply in a
50 50 particular repository. This file is not version-controlled, and
51 51 will not get transferred during a "clone" operation. Options in
52 52 this file override options in all other configuration files.
53 53
54 54 SYNTAX
55 55 ------
56 56
57 57 A configuration file consists of sections, led by a "[section]" header
58 58 and followed by "name: value" entries; "name=value" is also accepted.
59 59
60 60 [spam]
61 61 eggs=ham
62 62 green=
63 63 eggs
64 64
65 65 Each line contains one entry. If the lines that follow are indented,
66 66 they are treated as continuations of that entry.
67 67
68 68 Leading whitespace is removed from values. Empty lines are skipped.
69 69
70 70 Values may contain format strings which refer to other values in
71 71 the same section, or to values in a special DEFAULT section.
72 72
73 73 Lines beginning with "#" or ";" are ignored and may be used to provide
74 74 comments.
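This syntax is close to what Python's standard ConfigParser accepts, so the "[spam]" example above can be checked with a short sketch (Python 3; Mercurial uses its own reader, so treat this as an approximation, not its implementation):

```python
from configparser import ConfigParser

# hgrc-style text: "=" or ":" separators, an indented continuation
# line, and "#"/";" comment lines -- ConfigParser handles all three.
text = """
[spam]
eggs=ham
green=
 eggs
colon : works
; a comment line
# another comment
"""

cfg = ConfigParser()
cfg.read_string(text)

print(cfg["spam"]["eggs"])           # ham
print(cfg["spam"]["green"].split())  # ['eggs']
print(cfg["spam"]["colon"])          # works
```

The continuation line is joined to the entry's value, which is why `green` ends up holding "eggs" after stripping whitespace.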
75 75
76 76 SECTIONS
77 77 --------
78 78
79 79 This section describes the different sections that may appear in a
80 80 Mercurial "hgrc" file, the purpose of each section, its possible
81 81 keys, and their possible values.
82 82
83 83 decode/encode::
84 84 Filters for transforming files on checkout/checkin. This would
85 85 typically be used for newline processing or other
86 86 localization/canonicalization of files.
87 87
88 88 Filters consist of a filter pattern followed by a filter command.
89 89 Filter patterns are globs by default, rooted at the repository
90 90 root. For example, to match any file ending in ".txt" in the root
91 91 directory only, use the pattern "*.txt". To match any file ending
92 92 in ".c" anywhere in the repository, use the pattern "**.c".
93 93
94 94 The filter command can start with a specifier, either "pipe:" or
95 95 "tempfile:". If no specifier is given, "pipe:" is used by default.
96 96
97 97 A "pipe:" command must accept data on stdin and return the
98 98 transformed data on stdout.
99 99
100 100 Pipe example:
101 101
102 102 [encode]
103 103 # uncompress gzip files on checkin to improve delta compression
104 104 # note: not necessarily a good idea, just an example
105 105 *.gz = pipe: gunzip
106 106
107 107 [decode]
108 108 # recompress gzip files when writing them to the working dir (we
109 109 # can safely omit "pipe:", because it's the default)
110 110 *.gz = gzip
111 111
112 112 A "tempfile:" command is a template. The string INFILE is replaced
113 113 with the name of a temporary file that contains the data to be
114 114 filtered by the command. The string OUTFILE is replaced with the
115 115 name of an empty temporary file, where the filtered data must be
116 116 written by the command.
117 117
118 118 NOTE: the tempfile mechanism is recommended for Windows systems,
119 119 where the standard shell I/O redirection operators often have
120 120 strange effects. In particular, if you are doing line ending
121 121 conversion on Windows using the popular dos2unix and unix2dos
122 122 programs, you *must* use the tempfile mechanism, as using pipes will
123 123 corrupt the contents of your files.
124 124
125 125 Tempfile example:
126 126
127 127 [encode]
128 128 # convert files to unix line ending conventions on checkin
129 129 **.txt = tempfile: dos2unix -n INFILE OUTFILE
130 130
131 131 [decode]
132 132 # convert files to windows line ending conventions when writing
133 133 # them to the working dir
134 134 **.txt = tempfile: unix2dos -n INFILE OUTFILE
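The INFILE/OUTFILE substitution can be sketched as follows (a hypothetical driver for illustration only; the file names are made up and Mercurial's own expansion may differ in detail):

```python
import shlex

def expand_tempfile_cmd(template, infile, outfile):
    # Replace the INFILE/OUTFILE placeholders in a "tempfile:" filter
    # template with the names of the real temporary files.
    return [infile if w == 'INFILE' else outfile if w == 'OUTFILE' else w
            for w in shlex.split(template)]

cmd = expand_tempfile_cmd('dos2unix -n INFILE OUTFILE',
                          '/tmp/hg-in.txt', '/tmp/hg-out.txt')
print(cmd)  # ['dos2unix', '-n', '/tmp/hg-in.txt', '/tmp/hg-out.txt']
```

The expanded argument list would then be run as a subprocess, avoiding shell I/O redirection entirely, which is why this mechanism is safer on Windows.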
135 135
136 136 email::
137 137 Settings for extensions that send email messages.
138 138 from;;
139 139 Optional. Email address to use in "From" header and SMTP envelope
140 140 of outgoing messages.
141 141 method;;
142 142 Optional. Method to use to send email messages. If value is
143 143 "smtp" (default), use SMTP (see section "[mail]" for
144 144 configuration). Otherwise, use as name of program to run that
145 145 acts like sendmail (takes "-f" option for sender, list of
146 146 recipients on command line, message on stdin). Normally, setting
147 147 this to "sendmail" or "/usr/sbin/sendmail" is enough to use
148 148 sendmail to send messages.
149 149
150 150 Email example:
151 151
152 152 [email]
153 153 from = Joseph User <joe.user@example.com>
154 154 method = /usr/sbin/sendmail
155 155
156 156 extensions::
157 157 Mercurial has an extension mechanism for adding new features. To
158 158 enable an extension, create an entry for it in this section.
159 159
160 160 If you know that the extension is already in Python's search path,
161 161 you can give the name of the module, followed by "=", with nothing
162 162 after the "=".
163 163
164 164 Otherwise, give a name that you choose, followed by "=", followed by
165 165 the path to the ".py" file (including the file name extension) that
166 166 defines the extension.
167 167
168 168 Example for ~/.hgrc:
169 169
170 170 [extensions]
171 171 # (the mq extension will get loaded from mercurial's path)
172 172 hgext.mq =
173 173 # (this extension will get loaded from the file specified)
174 174 myfeature = ~/.hgext/myfeature.py
175 175
176 176 hooks::
177 177 Commands or Python functions that get automatically executed by
178 178 various actions such as starting or finishing a commit. Multiple
179 179 hooks can be run for the same action by appending a suffix to the
180 180 action. Overriding a site-wide hook can be done by changing its
181 181 value or setting it to an empty string.
182 182
183 183 Example .hg/hgrc:
184 184
185 185 [hooks]
186 186 # do not use the site-wide hook
187 187 incoming =
188 188 incoming.email = /my/email/hook
189 189 incoming.autobuild = /my/build/hook
190 190
191 191 Most hooks are run with environment variables set that give added
192 192 useful information. For each hook below, the environment variables
193 193 it is passed are listed with names of the form "$HG_foo".
194 194
195 195 changegroup;;
196 196 Run after a changegroup has been added via push, pull or
197 197 unbundle. ID of the first new changeset is in $HG_NODE.
198 198 commit;;
199 199 Run after a changeset has been created in the local repository.
200 200 ID of the newly created changeset is in $HG_NODE. Parent
201 201 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
202 202 incoming;;
203 203 Run after a changeset has been pulled, pushed, or unbundled into
204 204 the local repository. The ID of the newly arrived changeset is in
205 205 $HG_NODE.
206 206 outgoing;;
207 207 Run after sending changes from local repository to another. ID of
208 208 first changeset sent is in $HG_NODE. Source of operation is in
209 209 $HG_SOURCE; see "preoutgoing" hook for description.
210 210 prechangegroup;;
211 211 Run before a changegroup is added via push, pull or unbundle.
212 212 Exit status 0 allows the changegroup to proceed. Non-zero status
213 213 will cause the push, pull or unbundle to fail.
214 214 precommit;;
215 215 Run before starting a local commit. Exit status 0 allows the
216 216 commit to proceed. Non-zero status will cause the commit to fail.
217 217 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
218 218 preoutgoing;;
219 219 Run before computing changes to send from the local repository to
220 220 another. Non-zero status will cause failure. This lets you
221 221 prevent pull over http or ssh. It also runs for local pull,
222 222 push (outbound) and bundle commands, but is not effective there,
223 223 since the files can simply be copied instead. Source of operation is in
224 224 $HG_SOURCE. If "serve", operation is happening on behalf of
225 225 remote ssh or http repository. If "push", "pull" or "bundle",
226 226 operation is happening on behalf of repository on same system.
227 227 pretag;;
228 228 Run before creating a tag. Exit status 0 allows the tag to be
229 229 created. Non-zero status will cause the tag to fail. ID of
230 230 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
231 231 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
232 232 pretxnchangegroup;;
233 233 Run after a changegroup has been added via push, pull or unbundle,
234 234 but before the transaction has been committed. Changegroup is
235 235 visible to hook program. This lets you validate incoming changes
236 236 before accepting them. Passed the ID of the first new changeset
237 237 in $HG_NODE. Exit status 0 allows the transaction to commit.
238 238 Non-zero status will cause the transaction to be rolled back and
239 239 the push, pull or unbundle will fail.
240 240 pretxncommit;;
241 241 Run after a changeset has been created but the transaction not yet
242 242 committed. Changeset is visible to hook program. This lets you
243 243 validate commit message and changes. Exit status 0 allows the
244 244 commit to proceed. Non-zero status will cause the transaction to
245 245 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
246 246 IDs are in $HG_PARENT1 and $HG_PARENT2.
247 247 preupdate;;
248 248 Run before updating the working directory. Exit status 0 allows
249 249 the update to proceed. Non-zero status will prevent the update.
250 250 Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
251 251 of second new parent is in $HG_PARENT2.
252 252 tag;;
253 253 Run after a tag is created. ID of tagged changeset is in
254 254 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
255 255 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
256 256 update;;
257 257 Run after updating the working directory. Changeset ID of first
258 258 new parent is in $HG_PARENT1. If merge, ID of second new parent
259 259 is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
260 260 failed (e.g. because conflicts not resolved), $HG_ERROR=1.
261 261
262 262 Note: In earlier releases, the names of hook environment variables
263 263 did not have a "HG_" prefix. The old unprefixed names are no longer
264 264 provided in the environment.
265 265
266 266 The syntax for Python hooks is as follows:
267 267
268 268 hookname = python:modulename.submodule.callable
269 269
270 270 Python hooks are run within the Mercurial process. Each hook is
271 271 called with at least three keyword arguments: a ui object (keyword
272 272 "ui"), a repository object (keyword "repo"), and a "hooktype"
273 273 keyword that tells what kind of hook is used. Arguments listed as
274 274 environment variables above are passed as keyword arguments, with no
275 275 "HG_" prefix, and names in lower case.
276 276
277 277 A Python hook must return a "true" value to succeed. Returning a
278 278 "false" value or raising an exception is treated as failure of the
279 279 hook.
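A minimal Python hook following the calling convention above might look like this (a standalone sketch; the module and function names are illustrative, and the ui/repo objects are only placeholders here):

```python
# hgrc would reference this as:
#   [hooks]
#   pretxncommit.checknode = python:myhooks.checknode

def checknode(ui=None, repo=None, hooktype=None, node=None,
              parent1=None, parent2=None, **kwargs):
    """Environment-variable arguments arrive lowercased and without
    the HG_ prefix (HG_NODE -> node, HG_PARENT1 -> parent1, ...).
    A true return value means success; a false value or an exception
    fails the hook."""
    # Placeholder check; a real hook would inspect repo state.
    return node is not None

print(checknode(hooktype='pretxncommit', node='a' * 40))  # True
```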
280 280
281 281 http_proxy::
282 282 Used to access web-based Mercurial repositories through an HTTP
283 283 proxy.
284 284 host;;
285 285 Host name and (optional) port of the proxy server, for example
286 286 "myproxy:8000".
287 287 no;;
288 288 Optional. Comma-separated list of host names that should bypass
289 289 the proxy.
290 290 passwd;;
291 291 Optional. Password to authenticate with at the proxy server.
292 292 user;;
293 293 Optional. User name to authenticate with at the proxy server.
294 294
295 295 smtp::
296 296 Configuration for extensions that need to send email messages.
297 297 host;;
298 298 Optional. Host name of mail server. Default: "mail".
299 299 port;;
300 300 Optional. Port to connect to on mail server. Default: 25.
301 301 tls;;
302 302 Optional. Whether to connect to mail server using TLS. True or
303 303 False. Default: False.
304 304 username;;
305 305 Optional. User name to authenticate to SMTP server with.
306 306 If username is specified, password must also be specified.
307 307 Default: none.
308 308 password;;
309 309 Optional. Password to authenticate to SMTP server with.
310 310 If username is specified, password must also be specified.
311 311 Default: none.
312 312 local_hostname;;
313 313 Optional. The hostname the sender uses to identify itself to the
314 314 mail transfer agent (MTA).
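The defaults above can be collected into connection settings like this (a sketch of how an extension might consume the section, not Mercurial's own code):

```python
def smtp_settings(section):
    # Apply the documented defaults for the [smtp] section.
    settings = {
        'host': section.get('host', 'mail'),
        'port': int(section.get('port', 25)),
        'tls': str(section.get('tls', 'False')).lower()
               in ('1', 'true', 'yes'),
        'username': section.get('username'),
        'password': section.get('password'),
        'local_hostname': section.get('local_hostname'),
    }
    # The documentation requires username and password together.
    if settings['username'] and not settings['password']:
        raise ValueError('password must be set when username is set')
    return settings

print(smtp_settings({}))  # host 'mail', port 25, tls False, no auth
```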
315 315
316 316 paths::
317 317 Assigns symbolic names to repositories. The left side is the
318 318 symbolic name, and the right gives the directory or URL that is the
319 319 location of the repository. Default paths can be declared by
320 setting the following entries.
320 setting the following entries.
321 321 default;;
322 322 Directory or URL to use when pulling if no source is specified.
323 323 Default is set to repository from which the current repository
324 324 was cloned.
325 325 default-push;;
326 326 Optional. Directory or URL to use when pushing if no destination
327 327 is specified.
328 328
329 server::
330 Controls generic server settings.
331 uncompressed;;
332 Whether to allow clients to clone a repo using the uncompressed
333 streaming protocol. This transfers about 40% more data than a
334 regular clone, but uses less memory and CPU on both server and
335 client. Over a LAN (100Mbps or better) or a very fast WAN, an
336 uncompressed streaming clone is a lot faster (~10x) than a regular
337 clone. Over most WAN connections (anything slower than about
338 6Mbps), uncompressed streaming is slower, because of the extra
339 data transfer overhead. Default is False.
340
329 341 ui::
330 342 User interface controls.
331 343 debug;;
332 344 Print debugging information. True or False. Default is False.
333 345 editor;;
334 346 The editor to use during a commit. Default is $EDITOR or "vi".
335 347 ignore;;
336 348 A file to read per-user ignore patterns from. This file should be in
337 349 the same format as a repository-wide .hgignore file. This option
338 350 supports hook syntax, so if you want to specify multiple ignore
339 351 files, you can do so by setting something like
340 352 "ignore.other = ~/.hgignore2". For details of the ignore file
341 353 format, see the hgignore(5) man page.
342 354 interactive;;
343 355 Whether to prompt the user for input. True or False. Default is True.
344 356 logtemplate;;
345 357 Template string for commands that print changesets.
346 358 style;;
347 359 Name of style to use for command output.
348 360 merge;;
349 361 The conflict resolution program to use during a manual merge.
350 362 Default is "hgmerge".
351 363 quiet;;
352 364 Reduce the amount of output printed. True or False. Default is False.
353 365 remotecmd;;
354 366 remote command to use for clone/push/pull operations. Default is 'hg'.
355 367 ssh;;
356 368 command to use for SSH connections. Default is 'ssh'.
357 369 timeout;;
358 370 The timeout used when a lock is held (in seconds), a negative value
359 371 means no timeout. Default is 600.
360 372 username;;
361 373 The committer of a changeset created when running "commit".
362 374 Typically a person's name and email address, e.g. "Fred Widget
363 375 <fred@example.com>". Default is $EMAIL or username@hostname, unless
364 376 username is set to an empty string, which forces the username to be
365 377 specified manually.
366 378 verbose;;
367 379 Increase the amount of output printed. True or False. Default is False.
368 380
369 381
370 382 web::
371 383 Web interface configuration.
372 384 accesslog;;
373 385 Where to output the access log. Default is stdout.
374 386 address;;
375 387 Interface address to bind to. Default is all.
376 388 allow_archive;;
377 389 List of archive formats (bz2, gz, zip) allowed for downloading.
378 390 Default is empty.
379 391 allowbz2;;
380 392 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
381 393 Default is false.
382 394 allowgz;;
383 395 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
384 396 Default is false.
385 397 allowpull;;
386 398 Whether to allow pulling from the repository. Default is true.
387 399 allow_push;;
388 400 Whether to allow pushing to the repository. If empty or not set,
389 401 push is not allowed. If the special value "*", any remote user
390 402 can push, including unauthenticated users. Otherwise, the remote
391 403 user must have been authenticated, and the authenticated user name
392 404 must be present in this list (separated by whitespace or ",").
393 405 The contents of the allow_push list are examined after the
394 406 deny_push list.
395 407 allowzip;;
396 408 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
397 409 Default is false. This feature creates temporary files.
398 410 baseurl;;
399 411 Base URL to use when publishing URLs in other locations, so
400 412 third-party tools like email notification hooks can construct URLs.
401 413 Example: "http://hgserver/repos/"
402 414 contact;;
403 415 Name or email address of the person in charge of the repository.
404 416 Default is "unknown".
405 417 deny_push;;
406 418 Whether to deny pushing to the repository. If empty or not set,
407 419 push is not denied. If the special value "*", all remote users
408 420 are denied push. Otherwise, unauthenticated users are all denied,
409 421 and any authenticated user name present in this list (separated by
410 422 whitespace or ",") is also denied. The contents of the deny_push
411 423 list are examined before the allow_push list.
412 424 description;;
413 425 Textual description of the repository's purpose or contents.
414 426 Default is "unknown".
415 427 errorlog;;
416 428 Where to output the error log. Default is stderr.
417 429 ipv6;;
418 430 Whether to use IPv6. Default is false.
419 431 name;;
420 432 Repository name to use in the web interface. Default is current
421 433 working directory.
422 434 maxchanges;;
423 435 Maximum number of changes to list on the changelog. Default is 10.
424 436 maxfiles;;
425 437 Maximum number of files to list per changeset. Default is 10.
426 438 port;;
427 439 Port to listen on. Default is 8000.
428 440 push_ssl;;
429 441 Whether to require that inbound pushes be transported over SSL to
430 442 prevent password sniffing. Default is true.
431 443 style;;
432 444 Which template map style to use.
433 445 templates;;
434 446 Where to find the HTML templates. Default is install path.
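The allow_push/deny_push interaction described above (deny_push examined first, "*" meaning everyone, unauthenticated users represented here as None) can be sketched as:

```python
def may_push(user, allow_push, deny_push):
    # user is None for an unauthenticated remote user; allow_push and
    # deny_push are the parsed lists.  deny_push is checked first.
    if '*' in deny_push or (deny_push and user is None) or user in deny_push:
        return False
    if '*' in allow_push:
        return True
    # Otherwise the user must be authenticated and explicitly listed.
    return user is not None and user in allow_push

print(may_push(None, ['*'], []))                # True  (anyone may push)
print(may_push('alice', ['alice'], []))         # True
print(may_push('alice', ['alice'], ['alice']))  # False (deny wins)
print(may_push(None, ['alice'], []))            # False (must authenticate)
```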
435 447
436 448
437 449 AUTHOR
438 450 ------
439 451 Bryan O'Sullivan <bos@serpentine.com>.
440 452
441 453 Mercurial was written by Matt Mackall <mpm@selenic.com>.
442 454
443 455 SEE ALSO
444 456 --------
445 457 hg(1), hgignore(5)
446 458
447 459 COPYING
448 460 -------
449 461 This manual page is copyright 2005 Bryan O'Sullivan.
450 462 Mercurial is copyright 2005, 2006 Matt Mackall.
451 463 Free use of this software is granted under the terms of the GNU General
452 464 Public License (GPL).
@@ -1,3521 +1,3528 b''
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from demandload import demandload
9 9 from node import *
10 10 from i18n import gettext as _
11 11 demandload(globals(), "os re sys signal shutil imp urllib pdb")
12 12 demandload(globals(), "fancyopts ui hg util lock revlog templater bundlerepo")
13 13 demandload(globals(), "fnmatch mdiff random signal tempfile time")
14 14 demandload(globals(), "traceback errno socket version struct atexit sets bz2")
15 15 demandload(globals(), "archival cStringIO changegroup email.Parser")
16 16 demandload(globals(), "hgweb.server sshserver")
17 17
18 18 class UnknownCommand(Exception):
19 19 """Exception raised if command is not in the command table."""
20 20 class AmbiguousCommand(Exception):
21 21 """Exception raised if command shortcut matches more than one command."""
22 22
23 23 def bail_if_changed(repo):
24 24 modified, added, removed, deleted, unknown = repo.changes()
25 25 if modified or added or removed or deleted:
26 26 raise util.Abort(_("outstanding uncommitted changes"))
27 27
28 28 def filterfiles(filters, files):
29 29 l = [x for x in files if x in filters]
30 30
31 31 for t in filters:
32 32 if t and t[-1] != "/":
33 33 t += "/"
34 34 l += [x for x in files if x.startswith(t)]
35 35 return l
36 36
37 37 def relpath(repo, args):
38 38 cwd = repo.getcwd()
39 39 if cwd:
40 40 return [util.normpath(os.path.join(cwd, x)) for x in args]
41 41 return args
42 42
43 43 def matchpats(repo, pats=[], opts={}, head=''):
44 44 cwd = repo.getcwd()
45 45 if not pats and cwd:
46 46 opts['include'] = [os.path.join(cwd, i) for i in opts['include']]
47 47 opts['exclude'] = [os.path.join(cwd, x) for x in opts['exclude']]
48 48 cwd = ''
49 49 return util.cmdmatcher(repo.root, cwd, pats or ['.'], opts.get('include'),
50 50 opts.get('exclude'), head)
51 51
52 52 def makewalk(repo, pats, opts, node=None, head='', badmatch=None):
53 53 files, matchfn, anypats = matchpats(repo, pats, opts, head)
54 54 exact = dict(zip(files, files))
55 55 def walk():
56 56 for src, fn in repo.walk(node=node, files=files, match=matchfn,
57 57 badmatch=badmatch):
58 58 yield src, fn, util.pathto(repo.getcwd(), fn), fn in exact
59 59 return files, matchfn, walk()
60 60
61 61 def walk(repo, pats, opts, node=None, head='', badmatch=None):
62 62 files, matchfn, results = makewalk(repo, pats, opts, node, head, badmatch)
63 63 for r in results:
64 64 yield r
65 65
66 66 def walkchangerevs(ui, repo, pats, opts):
67 67 '''Iterate over files and the revs they changed in.
68 68
69 69 Callers most commonly need to iterate backwards over the history
70 70 it is interested in. Doing so has awful (quadratic-looking)
71 71 performance, so we use iterators in a "windowed" way.
72 72
73 73 We walk a window of revisions in the desired order. Within the
74 74 window, we first walk forwards to gather data, then in the desired
75 75 order (usually backwards) to display it.
76 76
77 77 This function returns an (iterator, getchange, matchfn) tuple. The
78 78 getchange function returns the changelog entry for a numeric
79 79 revision. The iterator yields 3-tuples. They will be of one of
80 80 the following forms:
81 81
82 82 "window", incrementing, lastrev: stepping through a window,
83 83 positive if walking forwards through revs, last rev in the
84 84 sequence iterated over - use to reset state for the current window
85 85
86 86 "add", rev, fns: out-of-order traversal of the given file names
87 87 fns, which changed during revision rev - use to gather data for
88 88 possible display
89 89
90 90 "iter", rev, None: in-order traversal of the revs earlier iterated
91 91 over with "add" - use to display data'''
92 92
93 93 def increasing_windows(start, end, windowsize=8, sizelimit=512):
94 94 if start < end:
95 95 while start < end:
96 96 yield start, min(windowsize, end-start)
97 97 start += windowsize
98 98 if windowsize < sizelimit:
99 99 windowsize *= 2
100 100 else:
101 101 while start > end:
102 102 yield start, min(windowsize, start-end-1)
103 103 start -= windowsize
104 104 if windowsize < sizelimit:
105 105 windowsize *= 2
106 106
107 107
108 108 files, matchfn, anypats = matchpats(repo, pats, opts)
109 109
110 110 if repo.changelog.count() == 0:
111 111 return [], False, matchfn
112 112
113 113 revs = map(int, revrange(ui, repo, opts['rev'] or ['tip:0']))
114 114 wanted = {}
115 115 slowpath = anypats
116 116 fncache = {}
117 117
118 118 chcache = {}
119 119 def getchange(rev):
120 120 ch = chcache.get(rev)
121 121 if ch is None:
122 122 chcache[rev] = ch = repo.changelog.read(repo.lookup(str(rev)))
123 123 return ch
124 124
125 125 if not slowpath and not files:
126 126 # No files, no patterns. Display all revs.
127 127 wanted = dict(zip(revs, revs))
128 128 if not slowpath:
129 129 # Only files, no patterns. Check the history of each file.
130 130 def filerevgen(filelog):
131 cl_count = repo.changelog.count()
131 132 for i, window in increasing_windows(filelog.count()-1, -1):
132 133 revs = []
133 134 for j in xrange(i - window, i + 1):
134 135 revs.append(filelog.linkrev(filelog.node(j)))
135 136 revs.reverse()
136 137 for rev in revs:
137 yield rev
138 # only yield rev for which we have the changelog, it can
139 # happen while doing "hg log" during a pull or commit
140 if rev < cl_count:
141 yield rev
138 142
139 143 minrev, maxrev = min(revs), max(revs)
140 144 for file_ in files:
141 145 filelog = repo.file(file_)
142 146 # A zero count may be a directory or deleted file, so
143 147 # try to find matching entries on the slow path.
144 148 if filelog.count() == 0:
145 149 slowpath = True
146 150 break
147 151 for rev in filerevgen(filelog):
148 152 if rev <= maxrev:
149 153 if rev < minrev:
150 154 break
151 155 fncache.setdefault(rev, [])
152 156 fncache[rev].append(file_)
153 157 wanted[rev] = 1
154 158 if slowpath:
155 159 # The slow path checks files modified in every changeset.
156 160 def changerevgen():
157 161 for i, window in increasing_windows(repo.changelog.count()-1, -1):
158 162 for j in xrange(i - window, i + 1):
159 163 yield j, getchange(j)[3]
160 164
161 165 for rev, changefiles in changerevgen():
162 166 matches = filter(matchfn, changefiles)
163 167 if matches:
164 168 fncache[rev] = matches
165 169 wanted[rev] = 1
166 170
167 171 def iterate():
168 172 for i, window in increasing_windows(0, len(revs)):
169 173 yield 'window', revs[0] < revs[-1], revs[-1]
170 174 nrevs = [rev for rev in revs[i:i+window]
171 175 if rev in wanted]
172 176 srevs = list(nrevs)
173 177 srevs.sort()
174 178 for rev in srevs:
175 179 fns = fncache.get(rev) or filter(matchfn, getchange(rev)[3])
176 180 yield 'add', rev, fns
177 181 for rev in nrevs:
178 182 yield 'iter', rev, None
179 183 return iterate(), getchange, matchfn
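The windowed traversal used by walkchangerevs can be exercised standalone (a Python 3 transcription of the increasing_windows generator above, with range semantics unchanged):

```python
def increasing_windows(start, end, windowsize=8, sizelimit=512):
    # Yield (start, length) pairs walking from start toward end,
    # doubling the window size each step up to sizelimit.  Forwards
    # when start < end, backwards when start > end.
    if start < end:
        while start < end:
            yield start, min(windowsize, end - start)
            start += windowsize
            if windowsize < sizelimit:
                windowsize *= 2
    else:
        while start > end:
            yield start, min(windowsize, start - end - 1)
            start -= windowsize
            if windowsize < sizelimit:
                windowsize *= 2

print(list(increasing_windows(0, 20)))   # [(0, 8), (8, 12)]
print(list(increasing_windows(10, -1)))  # [(10, 8), (2, 2)]
```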
180 184
181 185 revrangesep = ':'
182 186
183 187 def revfix(repo, val, defval):
184 188 '''turn user-level id of changeset into rev number.
185 189 user-level id can be tag, changeset, rev number, or negative rev
186 190 number relative to number of revs (-1 is tip, etc).'''
187 191 if not val:
188 192 return defval
189 193 try:
190 194 num = int(val)
191 195 if str(num) != val:
192 196 raise ValueError
193 197 if num < 0:
194 198 num += repo.changelog.count()
195 199 if num < 0:
196 200 num = 0
197 201 elif num >= repo.changelog.count():
198 202 raise ValueError
199 203 except ValueError:
200 204 try:
201 205 num = repo.changelog.rev(repo.lookup(val))
202 206 except KeyError:
203 207 raise util.Abort(_('invalid revision identifier %s'), val)
204 208 return num
205 209
206 210 def revpair(ui, repo, revs):
207 211 '''return pair of nodes, given list of revisions. second item can
208 212 be None, meaning use working dir.'''
209 213 if not revs:
210 214 return repo.dirstate.parents()[0], None
211 215 end = None
212 216 if len(revs) == 1:
213 217 start = revs[0]
214 218 if revrangesep in start:
215 219 start, end = start.split(revrangesep, 1)
216 220 start = revfix(repo, start, 0)
217 221 end = revfix(repo, end, repo.changelog.count() - 1)
218 222 else:
219 223 start = revfix(repo, start, None)
220 224 elif len(revs) == 2:
221 225 if revrangesep in revs[0] or revrangesep in revs[1]:
222 226 raise util.Abort(_('too many revisions specified'))
223 227 start = revfix(repo, revs[0], None)
224 228 end = revfix(repo, revs[1], None)
225 229 else:
226 230 raise util.Abort(_('too many revisions specified'))
227 231 if end is not None: end = repo.lookup(str(end))
228 232 return repo.lookup(str(start)), end
229 233
230 234 def revrange(ui, repo, revs):
231 235 """Yield revision as strings from a list of revision specifications."""
232 236 seen = {}
233 237 for spec in revs:
234 238 if revrangesep in spec:
235 239 start, end = spec.split(revrangesep, 1)
236 240 start = revfix(repo, start, 0)
237 241 end = revfix(repo, end, repo.changelog.count() - 1)
238 242 step = start > end and -1 or 1
239 243 for rev in xrange(start, end+step, step):
240 244 if rev in seen:
241 245 continue
242 246 seen[rev] = 1
243 247 yield str(rev)
244 248 else:
245 249 rev = revfix(repo, spec, None)
246 250 if rev in seen:
247 251 continue
248 252 seen[rev] = 1
249 253 yield str(rev)
250 254
251 255 def make_filename(repo, pat, node,
252 256 total=None, seqno=None, revwidth=None, pathname=None):
253 257 node_expander = {
254 258 'H': lambda: hex(node),
255 259 'R': lambda: str(repo.changelog.rev(node)),
256 260 'h': lambda: short(node),
257 261 }
258 262 expander = {
259 263 '%': lambda: '%',
260 264 'b': lambda: os.path.basename(repo.root),
261 265 }
262 266
263 267 try:
264 268 if node:
265 269 expander.update(node_expander)
266 270 if node and revwidth is not None:
267 271 expander['r'] = lambda: str(r.rev(node)).zfill(revwidth)
268 272 if total is not None:
269 273 expander['N'] = lambda: str(total)
270 274 if seqno is not None:
271 275 expander['n'] = lambda: str(seqno)
272 276 if total is not None and seqno is not None:
273 277 expander['n'] = lambda:str(seqno).zfill(len(str(total)))
274 278 if pathname is not None:
275 279 expander['s'] = lambda: os.path.basename(pathname)
276 280 expander['d'] = lambda: os.path.dirname(pathname) or '.'
277 281 expander['p'] = lambda: pathname
278 282
279 283 newname = []
280 284 patlen = len(pat)
281 285 i = 0
282 286 while i < patlen:
283 287 c = pat[i]
284 288 if c == '%':
285 289 i += 1
286 290 c = pat[i]
287 291 c = expander[c]()
288 292 newname.append(c)
289 293 i += 1
290 294 return ''.join(newname)
291 295 except KeyError, inst:
292 296 raise util.Abort(_("invalid format spec '%%%s' in output file name"),
293 297 inst.args[0])
294 298
295 299 def make_file(repo, pat, node=None,
296 300 total=None, seqno=None, revwidth=None, mode='wb', pathname=None):
297 301 if not pat or pat == '-':
298 302 return 'w' in mode and sys.stdout or sys.stdin
299 303 if hasattr(pat, 'write') and 'w' in mode:
300 304 return pat
301 305 if hasattr(pat, 'read') and 'r' in mode:
302 306 return pat
303 307 return open(make_filename(repo, pat, node, total, seqno, revwidth,
304 308 pathname),
305 309 mode)
306 310
307 311 def write_bundle(cg, filename=None, compress=True):
308 312 """Write a bundle file and return its filename.
309 313
310 314 Existing files will not be overwritten.
311 315 If no filename is specified, a temporary file is created.
312 316 bz2 compression can be turned off.
313 317 The bundle file will be deleted in case of errors.
314 318 """
315 319 class nocompress(object):
316 320 def compress(self, x):
317 321 return x
318 322 def flush(self):
319 323 return ""
320 324
321 325 fh = None
322 326 cleanup = None
323 327 try:
324 328 if filename:
325 329 if os.path.exists(filename):
326 330 raise util.Abort(_("file '%s' already exists"), filename)
327 331 fh = open(filename, "wb")
328 332 else:
329 333 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
330 334 fh = os.fdopen(fd, "wb")
331 335 cleanup = filename
332 336
333 337 if compress:
334 338 fh.write("HG10")
335 339 z = bz2.BZ2Compressor(9)
336 340 else:
337 341 fh.write("HG10UN")
338 342 z = nocompress()
339 343 # parse the changegroup data, otherwise we will block
340 344 # in case of sshrepo because we don't know the end of the stream
341 345
342 346 # an empty chunkiter is the end of the changegroup
343 347 empty = False
344 348 while not empty:
345 349 empty = True
346 350 for chunk in changegroup.chunkiter(cg):
347 351 empty = False
348 352 fh.write(z.compress(changegroup.genchunk(chunk)))
349 353 fh.write(z.compress(changegroup.closechunk()))
350 354 fh.write(z.flush())
351 355 cleanup = None
352 356 return filename
353 357 finally:
354 358 if fh is not None:
355 359 fh.close()
356 360 if cleanup is not None:
357 361 os.unlink(cleanup)
358 362
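write_bundle selects between a real bz2 compressor and a do-nothing object exposing the same compress/flush interface, so the write loop never branches on the compression setting. A hedged sketch of that null-object pattern (`NoCompress` and `pack` are illustrative names, not part of this module):

```python
import bz2

class NoCompress(object):
    """Null object with the same interface as bz2.BZ2Compressor."""
    def compress(self, data):
        return data
    def flush(self):
        return b""

def pack(chunks, compress=True):
    # The caller picks the compressor once; the loop below is identical
    # for both the compressed and uncompressed cases.
    z = bz2.BZ2Compressor(9) if compress else NoCompress()
    out = [z.compress(c) for c in chunks]
    out.append(z.flush())
    return b"".join(out)

data = [b"hello ", b"world"]
assert bz2.decompress(pack(data)) == b"hello world"
assert pack(data, compress=False) == b"hello world"
```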
359 363 def dodiff(fp, ui, repo, node1, node2, files=None, match=util.always,
360 364 changes=None, text=False, opts={}):
361 365 if not node1:
362 366 node1 = repo.dirstate.parents()[0]
363 367 # reading the data for node1 early allows it to play nicely
364 368 # with repo.changes and the revlog cache.
365 369 change = repo.changelog.read(node1)
366 370 mmap = repo.manifest.read(change[0])
367 371 date1 = util.datestr(change[2])
368 372
369 373 if not changes:
370 374 changes = repo.changes(node1, node2, files, match=match)
371 375 modified, added, removed, deleted, unknown = changes
372 376 if files:
373 377 modified, added, removed = map(lambda x: filterfiles(files, x),
374 378 (modified, added, removed))
375 379
376 380 if not modified and not added and not removed:
377 381 return
378 382
379 383 if node2:
380 384 change = repo.changelog.read(node2)
381 385 mmap2 = repo.manifest.read(change[0])
382 386 _date2 = util.datestr(change[2])
383 387 def date2(f):
384 388 return _date2
385 389 def read(f):
386 390 return repo.file(f).read(mmap2[f])
387 391 else:
388 392 tz = util.makedate()[1]
389 393 _date2 = util.datestr()
390 394 def date2(f):
391 395 try:
392 396 return util.datestr((os.lstat(repo.wjoin(f)).st_mtime, tz))
393 397 except OSError, err:
394 398 if err.errno != errno.ENOENT: raise
395 399 return _date2
396 400 def read(f):
397 401 return repo.wread(f)
398 402
399 403 if ui.quiet:
400 404 r = None
401 405 else:
402 406 hexfunc = ui.verbose and hex or short
403 407 r = [hexfunc(node) for node in [node1, node2] if node]
404 408
405 409 diffopts = ui.diffopts()
406 410 showfunc = opts.get('show_function') or diffopts['showfunc']
407 411 ignorews = opts.get('ignore_all_space') or diffopts['ignorews']
408 412 ignorewsamount = opts.get('ignore_space_change') or \
409 413 diffopts['ignorewsamount']
410 414 ignoreblanklines = opts.get('ignore_blank_lines') or \
411 415 diffopts['ignoreblanklines']
412 416 for f in modified:
413 417 to = None
414 418 if f in mmap:
415 419 to = repo.file(f).read(mmap[f])
416 420 tn = read(f)
417 421 fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
418 422 showfunc=showfunc, ignorews=ignorews,
419 423 ignorewsamount=ignorewsamount,
420 424 ignoreblanklines=ignoreblanklines))
421 425 for f in added:
422 426 to = None
423 427 tn = read(f)
424 428 fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
425 429 showfunc=showfunc, ignorews=ignorews,
426 430 ignorewsamount=ignorewsamount,
427 431 ignoreblanklines=ignoreblanklines))
428 432 for f in removed:
429 433 to = repo.file(f).read(mmap[f])
430 434 tn = None
431 435 fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
432 436 showfunc=showfunc, ignorews=ignorews,
433 437 ignorewsamount=ignorewsamount,
434 438 ignoreblanklines=ignoreblanklines))
435 439
436 440 def trimuser(ui, name, rev, revcache):
437 441 """trim the name of the user who committed a change"""
438 442 user = revcache.get(rev)
439 443 if user is None:
440 444 user = revcache[rev] = ui.shortuser(name)
441 445 return user
442 446
443 447 class changeset_printer(object):
444 448 '''show changeset information when templating not requested.'''
445 449
446 450 def __init__(self, ui, repo):
447 451 self.ui = ui
448 452 self.repo = repo
449 453
450 454 def show(self, rev=0, changenode=None, brinfo=None):
451 455 '''show a single changeset or file revision'''
452 456 log = self.repo.changelog
453 457 if changenode is None:
454 458 changenode = log.node(rev)
455 459 elif not rev:
456 460 rev = log.rev(changenode)
457 461
458 462 if self.ui.quiet:
459 463 self.ui.write("%d:%s\n" % (rev, short(changenode)))
460 464 return
461 465
462 466 changes = log.read(changenode)
463 467 date = util.datestr(changes[2])
464 468
465 469 parents = [(log.rev(p), self.ui.verbose and hex(p) or short(p))
466 470 for p in log.parents(changenode)
467 471 if self.ui.debugflag or p != nullid]
468 472 if (not self.ui.debugflag and len(parents) == 1 and
469 473 parents[0][0] == rev-1):
470 474 parents = []
471 475
472 476 if self.ui.verbose:
473 477 self.ui.write(_("changeset: %d:%s\n") % (rev, hex(changenode)))
474 478 else:
475 479 self.ui.write(_("changeset: %d:%s\n") % (rev, short(changenode)))
476 480
477 481 for tag in self.repo.nodetags(changenode):
478 482 self.ui.status(_("tag: %s\n") % tag)
479 483 for parent in parents:
480 484 self.ui.write(_("parent: %d:%s\n") % parent)
481 485
482 486 if brinfo and changenode in brinfo:
483 487 br = brinfo[changenode]
484 488 self.ui.write(_("branch: %s\n") % " ".join(br))
485 489
486 490 self.ui.debug(_("manifest: %d:%s\n") %
487 491 (self.repo.manifest.rev(changes[0]), hex(changes[0])))
488 492 self.ui.status(_("user: %s\n") % changes[1])
489 493 self.ui.status(_("date: %s\n") % date)
490 494
491 495 if self.ui.debugflag:
492 496 files = self.repo.changes(log.parents(changenode)[0], changenode)
493 497 for key, value in zip([_("files:"), _("files+:"), _("files-:")],
494 498 files):
495 499 if value:
496 500 self.ui.note("%-12s %s\n" % (key, " ".join(value)))
497 501 else:
498 502 self.ui.note(_("files: %s\n") % " ".join(changes[3]))
499 503
500 504 description = changes[4].strip()
501 505 if description:
502 506 if self.ui.verbose:
503 507 self.ui.status(_("description:\n"))
504 508 self.ui.status(description)
505 509 self.ui.status("\n\n")
506 510 else:
507 511 self.ui.status(_("summary: %s\n") %
508 512 description.splitlines()[0])
509 513 self.ui.status("\n")
510 514
511 515 def show_changeset(ui, repo, opts):
512 516 '''show one changeset. uses template or regular display. caller
513 517 can pass in 'style' and 'template' options in opts.'''
514 518
515 519 tmpl = opts.get('template')
516 520 if tmpl:
517 521 tmpl = templater.parsestring(tmpl, quoted=False)
518 522 else:
519 523 tmpl = ui.config('ui', 'logtemplate')
520 524 if tmpl: tmpl = templater.parsestring(tmpl)
521 525 mapfile = opts.get('style') or ui.config('ui', 'style')
522 526 if tmpl or mapfile:
523 527 if mapfile:
524 528 if not os.path.isfile(mapfile):
525 529 mapname = templater.templatepath('map-cmdline.' + mapfile)
526 530 if not mapname: mapname = templater.templatepath(mapfile)
527 531 if mapname: mapfile = mapname
528 532 try:
529 533 t = templater.changeset_templater(ui, repo, mapfile)
530 534 except SyntaxError, inst:
531 535 raise util.Abort(inst.args[0])
532 536 if tmpl: t.use_template(tmpl)
533 537 return t
534 538 return changeset_printer(ui, repo)
535 539
536 540 def show_version(ui):
537 541 """output version and copyright information"""
538 542 ui.write(_("Mercurial Distributed SCM (version %s)\n")
539 543 % version.get_version())
540 544 ui.status(_(
541 545 "\nCopyright (C) 2005 Matt Mackall <mpm@selenic.com>\n"
542 546 "This is free software; see the source for copying conditions. "
543 547 "There is NO\nwarranty; "
544 548 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
545 549 ))
546 550
547 551 def help_(ui, name=None, with_version=False):
548 552 """show help for a command, extension, or list of commands
549 553
550 554 With no arguments, print a list of commands and short help.
551 555
552 556 Given a command name, print help for that command.
553 557
554 558 Given an extension name, print help for that extension, and the
555 559 commands it provides."""
556 560 option_lists = []
557 561
558 562 def helpcmd(name):
559 563 if with_version:
560 564 show_version(ui)
561 565 ui.write('\n')
562 566 aliases, i = findcmd(name)
563 567 # synopsis
564 568 ui.write("%s\n\n" % i[2])
565 569
566 570 # description
567 571 doc = i[0].__doc__
568 572 if not doc:
569 573 doc = _("(No help text available)")
570 574 if ui.quiet:
571 575 doc = doc.splitlines(0)[0]
572 576 ui.write("%s\n" % doc.rstrip())
573 577
574 578 if not ui.quiet:
575 579 # aliases
576 580 if len(aliases) > 1:
577 581 ui.write(_("\naliases: %s\n") % ', '.join(aliases[1:]))
578 582
579 583 # options
580 584 if i[1]:
581 585 option_lists.append(("options", i[1]))
582 586
583 587 def helplist(select=None):
584 588 h = {}
585 589 cmds = {}
586 590 for c, e in table.items():
587 591 f = c.split("|", 1)[0]
588 592 if select and not select(f):
589 593 continue
590 594 if name == "shortlist" and not f.startswith("^"):
591 595 continue
592 596 f = f.lstrip("^")
593 597 if not ui.debugflag and f.startswith("debug"):
594 598 continue
595 599 doc = e[0].__doc__
596 600 if not doc:
597 601 doc = _("(No help text available)")
598 602 h[f] = doc.splitlines(0)[0].rstrip()
599 603 cmds[f] = c.lstrip("^")
600 604
601 605 fns = h.keys()
602 606 fns.sort()
603 607 m = max(map(len, fns))
604 608 for f in fns:
605 609 if ui.verbose:
606 610 commands = cmds[f].replace("|", ", ")
607 611 ui.write(" %s:\n %s\n"%(commands, h[f]))
608 612 else:
609 613 ui.write(' %-*s %s\n' % (m, f, h[f]))
610 614
611 615 def helpext(name):
612 616 try:
613 617 mod = findext(name)
614 618 except KeyError:
615 619 raise UnknownCommand(name)
616 620
617 621 doc = (mod.__doc__ or _('No help text available')).splitlines(0)
618 622 ui.write(_('%s extension - %s\n') % (name.split('.')[-1], doc[0]))
619 623 for d in doc[1:]:
620 624 ui.write(d, '\n')
621 625
622 626 ui.status('\n')
623 627 if ui.verbose:
624 628 ui.status(_('list of commands:\n\n'))
625 629 else:
626 630 ui.status(_('list of commands (use "hg help -v %s" '
627 631 'to show aliases and global options):\n\n') % name)
628 632
629 633 modcmds = dict.fromkeys([c.split('|', 1)[0] for c in mod.cmdtable])
630 634 helplist(modcmds.has_key)
631 635
632 636 if name and name != 'shortlist':
633 637 try:
634 638 helpcmd(name)
635 639 except UnknownCommand:
636 640 helpext(name)
637 641
638 642 else:
639 643 # program name
640 644 if ui.verbose or with_version:
641 645 show_version(ui)
642 646 else:
643 647 ui.status(_("Mercurial Distributed SCM\n"))
644 648 ui.status('\n')
645 649
646 650 # list of commands
647 651 if name == "shortlist":
648 652 ui.status(_('basic commands (use "hg help" '
649 653 'for the full list or option "-v" for details):\n\n'))
650 654 elif ui.verbose:
651 655 ui.status(_('list of commands:\n\n'))
652 656 else:
653 657 ui.status(_('list of commands (use "hg help -v" '
654 658 'to show aliases and global options):\n\n'))
655 659
656 660 helplist()
657 661
658 662 # global options
659 663 if ui.verbose:
660 664 option_lists.append(("global options", globalopts))
661 665
662 666 # list all option lists
663 667 opt_output = []
664 668 for title, options in option_lists:
665 669 opt_output.append(("\n%s:\n" % title, None))
666 670 for shortopt, longopt, default, desc in options:
667 671 opt_output.append(("%2s%s" % (shortopt and "-%s" % shortopt,
668 672 longopt and " --%s" % longopt),
669 673 "%s%s" % (desc,
670 674 default
671 675 and _(" (default: %s)") % default
672 676 or "")))
673 677
674 678 if opt_output:
675 679 opts_len = max([len(line[0]) for line in opt_output if line[1]])
676 680 for first, second in opt_output:
677 681 if second:
678 682 ui.write(" %-*s %s\n" % (opts_len, first, second))
679 683 else:
680 684 ui.write("%s\n" % first)
681 685
682 686 # Commands start here, listed alphabetically
683 687
684 688 def add(ui, repo, *pats, **opts):
685 689 """add the specified files on the next commit
686 690
687 691 Schedule files to be version controlled and added to the repository.
688 692
689 693 The files will be added to the repository at the next commit.
690 694
691 695 If no names are given, add all files in the repository.
692 696 """
693 697
694 698 names = []
695 699 for src, abs, rel, exact in walk(repo, pats, opts):
696 700 if exact:
697 701 if ui.verbose:
698 702 ui.status(_('adding %s\n') % rel)
699 703 names.append(abs)
700 704 elif repo.dirstate.state(abs) == '?':
701 705 ui.status(_('adding %s\n') % rel)
702 706 names.append(abs)
703 707 if not opts.get('dry_run'):
704 708 repo.add(names)
705 709
706 710 def addremove(ui, repo, *pats, **opts):
707 711 """add all new files, delete all missing files (DEPRECATED)
708 712
709 713 (DEPRECATED)
710 714 Add all new files and remove all missing files from the repository.
711 715
712 716 New files are ignored if they match any of the patterns in .hgignore. As
713 717 with add, these changes take effect at the next commit.
714 718
715 719 This command is now deprecated and will be removed in a future
716 720 release. Please use add and remove --after instead.
717 721 """
718 722 ui.warn(_('(the addremove command is deprecated; use add and remove '
719 723 '--after instead)\n'))
720 724 return addremove_lock(ui, repo, pats, opts)
721 725
722 726 def addremove_lock(ui, repo, pats, opts, wlock=None):
723 727 add, remove = [], []
724 728 for src, abs, rel, exact in walk(repo, pats, opts):
725 729 if src == 'f' and repo.dirstate.state(abs) == '?':
726 730 add.append(abs)
727 731 if ui.verbose or not exact:
728 732 ui.status(_('adding %s\n') % ((pats and rel) or abs))
729 733 if repo.dirstate.state(abs) != 'r' and not os.path.exists(rel):
730 734 remove.append(abs)
731 735 if ui.verbose or not exact:
732 736 ui.status(_('removing %s\n') % ((pats and rel) or abs))
733 737 if not opts.get('dry_run'):
734 738 repo.add(add, wlock=wlock)
735 739 repo.remove(remove, wlock=wlock)
736 740
737 741 def annotate(ui, repo, *pats, **opts):
738 742 """show changeset information per file line
739 743
740 744 List changes in files, showing the revision id responsible for each line
741 745
742 746 This command is useful for discovering who made a change or
743 747 when a change took place.
744 748
745 749 Without the -a option, annotate will avoid processing files it
746 750 detects as binary. With -a, annotate will generate an annotation
747 751 anyway, probably with undesirable results.
748 752 """
749 753 def getnode(rev):
750 754 return short(repo.changelog.node(rev))
751 755
752 756 ucache = {}
753 757 def getname(rev):
754 758 try:
755 759 return ucache[rev]
756 760 except KeyError:
757 761 u = trimuser(ui, repo.changectx(rev).user(), rev, ucache)
758 762 ucache[rev] = u
759 763 return u
760 764
761 765 dcache = {}
762 766 def getdate(rev):
763 767 datestr = dcache.get(rev)
764 768 if datestr is None:
765 769 datestr = dcache[rev] = util.datestr(repo.changectx(rev).date())
766 770 return datestr
767 771
768 772 if not pats:
769 773 raise util.Abort(_('at least one file name or pattern required'))
770 774
771 775 opmap = [['user', getname], ['number', str], ['changeset', getnode],
772 776 ['date', getdate]]
773 777 if not opts['user'] and not opts['changeset'] and not opts['date']:
774 778 opts['number'] = 1
775 779
776 780 ctx = repo.changectx(opts['rev'] or repo.dirstate.parents()[0])
777 781
778 782 for src, abs, rel, exact in walk(repo, pats, opts, node=ctx.node()):
779 783 fctx = ctx.filectx(abs)
780 784 if not opts['text'] and util.binary(fctx.data()):
781 785 ui.write(_("%s: binary file\n") % ((pats and rel) or abs))
782 786 continue
783 787
784 788 lines = fctx.annotate()
785 789 pieces = []
786 790
787 791 for o, f in opmap:
788 792 if opts[o]:
789 793 l = [f(n) for n, dummy in lines]
790 794 if l:
791 795 m = max(map(len, l))
792 796 pieces.append(["%*s" % (m, x) for x in l])
793 797
794 798 if pieces:
795 799 for p, l in zip(zip(*pieces), lines):
796 800 ui.write("%s: %s" % (" ".join(p), l[1]))
797 801
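The output loop in annotate above right-pads each requested metadata column to the width of its longest value (`"%*s"`), then zips the columns back into per-line prefixes. A self-contained sketch of that alignment step (`align_columns` is an illustrative name, not part of this module):

```python
# Sketch of annotate's column alignment: pad every column to its widest
# entry, then zip the columns into one aligned prefix per output line.
def align_columns(columns):
    padded = []
    for col in columns:
        width = max(len(x) for x in col)
        padded.append(["%*s" % (width, x) for x in col])
    return [" ".join(row) for row in zip(*padded)]

# Example: a revision-number column and a user column.
rows = align_columns([["12", "7"], ["bos", "mpm"]])
print(rows)  # ['12 bos', ' 7 mpm']
```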
798 802 def archive(ui, repo, dest, **opts):
799 803 '''create unversioned archive of a repository revision
800 804
801 805 By default, the revision used is the parent of the working
802 806 directory; use "-r" to specify a different revision.
803 807
804 808 To specify the type of archive to create, use "-t". Valid
805 809 types are:
806 810
807 811 "files" (default): a directory full of files
808 812 "tar": tar archive, uncompressed
809 813 "tbz2": tar archive, compressed using bzip2
810 814 "tgz": tar archive, compressed using gzip
811 815 "uzip": zip archive, uncompressed
812 816 "zip": zip archive, compressed using deflate
813 817
814 818 The exact name of the destination archive or directory is given
815 819 using a format string; see "hg help export" for details.
816 820
817 821 Each member added to an archive file has a directory prefix
818 822 prepended. Use "-p" to specify a format string for the prefix.
819 823 The default is the basename of the archive, with suffixes removed.
820 824 '''
821 825
822 826 if opts['rev']:
823 827 node = repo.lookup(opts['rev'])
824 828 else:
825 829 node, p2 = repo.dirstate.parents()
826 830 if p2 != nullid:
827 831 raise util.Abort(_('uncommitted merge - please provide a '
828 832 'specific revision'))
829 833
830 834 dest = make_filename(repo, dest, node)
831 835 if os.path.realpath(dest) == repo.root:
832 836 raise util.Abort(_('repository root cannot be destination'))
833 837 dummy, matchfn, dummy = matchpats(repo, [], opts)
834 838 kind = opts.get('type') or 'files'
835 839 prefix = opts['prefix']
836 840 if dest == '-':
837 841 if kind == 'files':
838 842 raise util.Abort(_('cannot archive plain files to stdout'))
839 843 dest = sys.stdout
840 844 if not prefix: prefix = os.path.basename(repo.root) + '-%h'
841 845 prefix = make_filename(repo, prefix, node)
842 846 archival.archive(repo, dest, node, kind, not opts['no_decode'],
843 847 matchfn, prefix)
844 848
845 849 def backout(ui, repo, rev, **opts):
846 850 '''reverse effect of earlier changeset
847 851
848 852 Commit the backed out changes as a new changeset. The new
849 853 changeset is a child of the backed out changeset.
850 854
851 855 If you back out a changeset other than the tip, a new head is
852 856 created. This head is the parent of the working directory. If
853 857 you back out an old changeset, your working directory will appear
854 858 old after the backout. You should merge the backout changeset
855 859 with another head.
856 860
857 861 The --merge option remembers the parent of the working directory
858 862 before starting the backout, then merges the new head with that
859 863 changeset afterwards. This saves you from doing the merge by
860 864 hand. The result of this merge is not committed, as for a normal
861 865 merge.'''
862 866
863 867 bail_if_changed(repo)
864 868 op1, op2 = repo.dirstate.parents()
865 869 if op2 != nullid:
866 870 raise util.Abort(_('outstanding uncommitted merge'))
867 871 node = repo.lookup(rev)
868 872 p1, p2 = repo.changelog.parents(node)
869 873 if p1 == nullid:
870 874 raise util.Abort(_('cannot back out a change with no parents'))
871 875 if p2 != nullid:
872 876 if not opts['parent']:
873 877 raise util.Abort(_('cannot back out a merge changeset without '
874 878 '--parent'))
875 879 p = repo.lookup(opts['parent'])
876 880 if p not in (p1, p2):
877 881 raise util.Abort(_('%s is not a parent of %s') %
878 882 (short(p), short(node)))
879 883 parent = p
880 884 else:
881 885 if opts['parent']:
882 886 raise util.Abort(_('cannot use --parent on non-merge changeset'))
883 887 parent = p1
884 888 repo.update(node, force=True, show_stats=False)
885 889 revert_opts = opts.copy()
886 890 revert_opts['rev'] = hex(parent)
887 891 revert(ui, repo, **revert_opts)
888 892 commit_opts = opts.copy()
889 893 commit_opts['addremove'] = False
890 894 if not commit_opts['message'] and not commit_opts['logfile']:
891 895 commit_opts['message'] = _("Backed out changeset %s") % (hex(node))
892 896 commit_opts['force_editor'] = True
893 897 commit(ui, repo, **commit_opts)
894 898 def nice(node):
895 899 return '%d:%s' % (repo.changelog.rev(node), short(node))
896 900 ui.status(_('changeset %s backs out changeset %s\n') %
897 901 (nice(repo.changelog.tip()), nice(node)))
898 902 if op1 != node:
899 903 if opts['merge']:
900 904 ui.status(_('merging with changeset %s\n') % nice(op1))
901 905 doupdate(ui, repo, hex(op1), **opts)
902 906 else:
903 907 ui.status(_('the backout changeset is a new head - '
904 908 'do not forget to merge\n'))
905 909 ui.status(_('(use "backout -m" if you want to auto-merge)\n'))
906 910
907 911 def bundle(ui, repo, fname, dest=None, **opts):
908 912 """create a changegroup file
909 913
910 914 Generate a compressed changegroup file collecting all changesets
911 915 not found in the other repository.
912 916
913 917 This file can then be transferred using conventional means and
914 918 applied to another repository with the unbundle command. This is
915 919 useful when native push and pull are not available or when
916 920 exporting an entire repository is undesirable. The standard file
917 921 extension is ".hg".
918 922
919 923 Unlike import/export, this exactly preserves all changeset
920 924 contents including permissions, rename data, and revision history.
921 925 """
922 926 dest = ui.expandpath(dest or 'default-push', dest or 'default')
923 927 other = hg.repository(ui, dest)
924 928 o = repo.findoutgoing(other, force=opts['force'])
925 929 cg = repo.changegroup(o, 'bundle')
926 930 write_bundle(cg, fname)
927 931
928 932 def cat(ui, repo, file1, *pats, **opts):
929 933 """output the latest or given revisions of files
930 934
931 935 Print the specified files as they were at the given revision.
932 936 If no revision is given then the tip is used.
933 937
934 938 Output may be to a file, in which case the name of the file is
935 939 given using a format string. The formatting rules are the same as
936 940 for the export command, with the following additions:
937 941
938 942 %s basename of file being printed
939 943 %d dirname of file being printed, or '.' if in repo root
940 944 %p root-relative path name of file being printed
941 945 """
942 946 ctx = repo.changectx(opts['rev'] or -1)
943 947 for src, abs, rel, exact in walk(repo, (file1,) + pats, opts, ctx.node()):
944 948 fp = make_file(repo, opts['output'], ctx.node(), pathname=abs)
945 949 fp.write(ctx.filectx(abs).data())
946 950
947 951 def clone(ui, source, dest=None, **opts):
948 952 """make a copy of an existing repository
949 953
950 954 Create a copy of an existing repository in a new directory.
951 955
952 956 If no destination directory name is specified, it defaults to the
953 957 basename of the source.
954 958
955 959 The location of the source is added to the new repository's
956 960 .hg/hgrc file, as the default to be used for future pulls.
957 961
958 962 For efficiency, hardlinks are used for cloning whenever the source
959 963 and destination are on the same filesystem. Some filesystems,
960 964 such as AFS, implement hardlinking incorrectly, but do not report
961 965 errors. In these cases, use the --pull option to avoid
962 966 hardlinking.
963 967
964 968 See pull for valid source format details.
965 969
966 970 It is possible to specify an ssh:// URL as the destination, but no
967 971 .hg/hgrc will be created on the remote side. Look at the help text
968 972 for the pull command for important details about ssh:// URLs.
969 973 """
970 974 ui.setconfig_remoteopts(**opts)
971 975 hg.clone(ui, ui.expandpath(source), dest,
972 976 pull=opts['pull'],
973 stream=opts['stream'],
977 stream=opts['uncompressed'],
974 978 rev=opts['rev'],
975 979 update=not opts['noupdate'])
976 980
977 981 def commit(ui, repo, *pats, **opts):
978 982 """commit the specified files or all outstanding changes
979 983
980 984 Commit changes to the given files into the repository.
981 985
982 986 If a list of files is omitted, all changes reported by "hg status"
983 987 will be committed.
984 988
985 989 If no commit message is specified, the editor configured in your hgrc
986 990 or in the EDITOR environment variable is started to enter a message.
987 991 """
988 992 message = opts['message']
989 993 logfile = opts['logfile']
990 994
991 995 if message and logfile:
992 996 raise util.Abort(_('options --message and --logfile are mutually '
993 997 'exclusive'))
994 998 if not message and logfile:
995 999 try:
996 1000 if logfile == '-':
997 1001 message = sys.stdin.read()
998 1002 else:
999 1003 message = open(logfile).read()
1000 1004 except IOError, inst:
1001 1005 raise util.Abort(_("can't read commit message '%s': %s") %
1002 1006 (logfile, inst.strerror))
1003 1007
1004 1008 if opts['addremove']:
1005 1009 addremove_lock(ui, repo, pats, opts)
1006 1010 fns, match, anypats = matchpats(repo, pats, opts)
1007 1011 if pats:
1008 1012 modified, added, removed, deleted, unknown = (
1009 1013 repo.changes(files=fns, match=match))
1010 1014 files = modified + added + removed
1011 1015 else:
1012 1016 files = []
1013 1017 try:
1014 1018 repo.commit(files, message, opts['user'], opts['date'], match,
1015 1019 force_editor=opts.get('force_editor'))
1016 1020 except ValueError, inst:
1017 1021 raise util.Abort(str(inst))
1018 1022
1019 1023 def docopy(ui, repo, pats, opts, wlock):
1020 1024 # called with the repo lock held
1021 1025 cwd = repo.getcwd()
1022 1026 errors = 0
1023 1027 copied = []
1024 1028 targets = {}
1025 1029
1026 1030 def okaytocopy(abs, rel, exact):
1027 1031 reasons = {'?': _('is not managed'),
1028 1032 'a': _('has been marked for add'),
1029 1033 'r': _('has been marked for remove')}
1030 1034 state = repo.dirstate.state(abs)
1031 1035 reason = reasons.get(state)
1032 1036 if reason:
1033 1037 if state == 'a':
1034 1038 origsrc = repo.dirstate.copied(abs)
1035 1039 if origsrc is not None:
1036 1040 return origsrc
1037 1041 if exact:
1038 1042 ui.warn(_('%s: not copying - file %s\n') % (rel, reason))
1039 1043 else:
1040 1044 return abs
1041 1045
1042 1046 def copy(origsrc, abssrc, relsrc, target, exact):
1043 1047 abstarget = util.canonpath(repo.root, cwd, target)
1044 1048 reltarget = util.pathto(cwd, abstarget)
1045 1049 prevsrc = targets.get(abstarget)
1046 1050 if prevsrc is not None:
1047 1051 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1048 1052 (reltarget, abssrc, prevsrc))
1049 1053 return
1050 1054 if (not opts['after'] and os.path.exists(reltarget) or
1051 1055 opts['after'] and repo.dirstate.state(abstarget) not in '?r'):
1052 1056 if not opts['force']:
1053 1057 ui.warn(_('%s: not overwriting - file exists\n') %
1054 1058 reltarget)
1055 1059 return
1056 1060 if not opts['after'] and not opts.get('dry_run'):
1057 1061 os.unlink(reltarget)
1058 1062 if opts['after']:
1059 1063 if not os.path.exists(reltarget):
1060 1064 return
1061 1065 else:
1062 1066 targetdir = os.path.dirname(reltarget) or '.'
1063 1067 if not os.path.isdir(targetdir) and not opts.get('dry_run'):
1064 1068 os.makedirs(targetdir)
1065 1069 try:
1066 1070 restore = repo.dirstate.state(abstarget) == 'r'
1067 1071 if restore and not opts.get('dry_run'):
1068 1072 repo.undelete([abstarget], wlock)
1069 1073 try:
1070 1074 if not opts.get('dry_run'):
1071 1075 shutil.copyfile(relsrc, reltarget)
1072 1076 shutil.copymode(relsrc, reltarget)
1073 1077 restore = False
1074 1078 finally:
1075 1079 if restore:
1076 1080 repo.remove([abstarget], wlock)
1077 1081 except shutil.Error, inst:
1078 1082 raise util.Abort(str(inst))
1079 1083 except IOError, inst:
1080 1084 if inst.errno == errno.ENOENT:
1081 1085 ui.warn(_('%s: deleted in working copy\n') % relsrc)
1082 1086 else:
1083 1087 ui.warn(_('%s: cannot copy - %s\n') %
1084 1088 (relsrc, inst.strerror))
1085 1089 errors += 1
1086 1090 return
1087 1091 if ui.verbose or not exact:
1088 1092 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1089 1093 targets[abstarget] = abssrc
1090 1094 if abstarget != origsrc and not opts.get('dry_run'):
1091 1095 repo.copy(origsrc, abstarget, wlock)
1092 1096 copied.append((abssrc, relsrc, exact))
1093 1097
1094 1098 def targetpathfn(pat, dest, srcs):
1095 1099 if os.path.isdir(pat):
1096 1100 abspfx = util.canonpath(repo.root, cwd, pat)
1097 1101 if destdirexists:
1098 1102 striplen = len(os.path.split(abspfx)[0])
1099 1103 else:
1100 1104 striplen = len(abspfx)
1101 1105 if striplen:
1102 1106 striplen += len(os.sep)
1103 1107 res = lambda p: os.path.join(dest, p[striplen:])
1104 1108 elif destdirexists:
1105 1109 res = lambda p: os.path.join(dest, os.path.basename(p))
1106 1110 else:
1107 1111 res = lambda p: dest
1108 1112 return res
1109 1113
1110 1114 def targetpathafterfn(pat, dest, srcs):
1111 1115 if util.patkind(pat, None)[0]:
1112 1116 # a mercurial pattern
1113 1117 res = lambda p: os.path.join(dest, os.path.basename(p))
1114 1118 else:
1115 1119 abspfx = util.canonpath(repo.root, cwd, pat)
1116 1120 if len(abspfx) < len(srcs[0][0]):
1117 1121 # A directory. Either the target path contains the last
1118 1122 # component of the source path or it does not.
1119 1123 def evalpath(striplen):
1120 1124 score = 0
1121 1125 for s in srcs:
1122 1126 t = os.path.join(dest, s[0][striplen:])
1123 1127 if os.path.exists(t):
1124 1128 score += 1
1125 1129 return score
1126 1130
1127 1131 striplen = len(abspfx)
1128 1132 if striplen:
1129 1133 striplen += len(os.sep)
1130 1134 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1131 1135 score = evalpath(striplen)
1132 1136 striplen1 = len(os.path.split(abspfx)[0])
1133 1137 if striplen1:
1134 1138 striplen1 += len(os.sep)
1135 1139 if evalpath(striplen1) > score:
1136 1140 striplen = striplen1
1137 1141 res = lambda p: os.path.join(dest, p[striplen:])
1138 1142 else:
1139 1143 # a file
1140 1144 if destdirexists:
1141 1145 res = lambda p: os.path.join(dest, os.path.basename(p))
1142 1146 else:
1143 1147 res = lambda p: dest
1144 1148 return res
1145 1149
1146 1150
1147 1151 pats = list(pats)
1148 1152 if not pats:
1149 1153 raise util.Abort(_('no source or destination specified'))
1150 1154 if len(pats) == 1:
1151 1155 raise util.Abort(_('no destination specified'))
1152 1156 dest = pats.pop()
1153 1157 destdirexists = os.path.isdir(dest)
1154 1158 if (len(pats) > 1 or util.patkind(pats[0], None)[0]) and not destdirexists:
1155 1159 raise util.Abort(_('with multiple sources, destination must be an '
1156 1160 'existing directory'))
1157 1161 if opts['after']:
1158 1162 tfn = targetpathafterfn
1159 1163 else:
1160 1164 tfn = targetpathfn
1161 1165 copylist = []
1162 1166 for pat in pats:
1163 1167 srcs = []
1164 1168 for tag, abssrc, relsrc, exact in walk(repo, [pat], opts):
1165 1169 origsrc = okaytocopy(abssrc, relsrc, exact)
1166 1170 if origsrc:
1167 1171 srcs.append((origsrc, abssrc, relsrc, exact))
1168 1172 if not srcs:
1169 1173 continue
1170 1174 copylist.append((tfn(pat, dest, srcs), srcs))
1171 1175 if not copylist:
1172 1176 raise util.Abort(_('no files to copy'))
1173 1177
1174 1178 for targetpath, srcs in copylist:
1175 1179 for origsrc, abssrc, relsrc, exact in srcs:
1176 1180 copy(origsrc, abssrc, relsrc, targetpath(abssrc), exact)
1177 1181
1178 1182 if errors:
1179 1183 ui.warn(_('(consider using --after)\n'))
1180 1184 return errors, copied
1181 1185
1182 1186 def copy(ui, repo, *pats, **opts):
1183 1187 """mark files as copied for the next commit
1184 1188
1185 1189 Mark dest as having copies of source files. If dest is a
1186 1190 directory, copies are put in that directory. If dest is a file,
1187 1191 there can only be one source.
1188 1192
1189 1193 By default, this command copies the contents of files as they
1190 1194 stand in the working directory. If invoked with --after, the
1191 1195 operation is recorded, but no copying is performed.
1192 1196
1193 1197 This command takes effect in the next commit.
1194 1198
1195 1199 NOTE: This command should be treated as experimental. While it
1196 1200 should properly record copied files, this information is not yet
1197 1201 fully used by merge, nor fully reported by log.
1198 1202 """
1199 1203 wlock = repo.wlock(0)
1200 1204 errs, copied = docopy(ui, repo, pats, opts, wlock)
1201 1205 return errs
1202 1206
1203 1207 def debugancestor(ui, index, rev1, rev2):
1204 1208 """find the ancestor revision of two revisions in a given index"""
1205 1209 r = revlog.revlog(util.opener(os.getcwd(), audit=False), index, "", 0)
1206 1210 a = r.ancestor(r.lookup(rev1), r.lookup(rev2))
1207 1211 ui.write("%d:%s\n" % (r.rev(a), hex(a)))
1208 1212
1209 1213 def debugcomplete(ui, cmd='', **opts):
1210 1214 """returns the completion list associated with the given command"""
1211 1215
1212 1216 if opts['options']:
1213 1217 options = []
1214 1218 otables = [globalopts]
1215 1219 if cmd:
1216 1220 aliases, entry = findcmd(cmd)
1217 1221 otables.append(entry[1])
1218 1222 for t in otables:
1219 1223 for o in t:
1220 1224 if o[0]:
1221 1225 options.append('-%s' % o[0])
1222 1226 options.append('--%s' % o[1])
1223 1227 ui.write("%s\n" % "\n".join(options))
1224 1228 return
1225 1229
1226 1230 clist = findpossible(cmd).keys()
1227 1231 clist.sort()
1228 1232 ui.write("%s\n" % "\n".join(clist))
1229 1233
1230 1234 def debugrebuildstate(ui, repo, rev=None):
1231 1235 """rebuild the dirstate as it would look for the given revision"""
1232 1236 if not rev:
1233 1237 rev = repo.changelog.tip()
1234 1238 else:
1235 1239 rev = repo.lookup(rev)
1236 1240 change = repo.changelog.read(rev)
1237 1241 n = change[0]
1238 1242 files = repo.manifest.readflags(n)
1239 1243 wlock = repo.wlock()
1240 1244 repo.dirstate.rebuild(rev, files.iteritems())
1241 1245
1242 1246 def debugcheckstate(ui, repo):
1243 1247 """validate the correctness of the current dirstate"""
1244 1248 parent1, parent2 = repo.dirstate.parents()
1245 1249 repo.dirstate.read()
1246 1250 dc = repo.dirstate.map
1247 1251 keys = dc.keys()
1248 1252 keys.sort()
1249 1253 m1n = repo.changelog.read(parent1)[0]
1250 1254 m2n = repo.changelog.read(parent2)[0]
1251 1255 m1 = repo.manifest.read(m1n)
1252 1256 m2 = repo.manifest.read(m2n)
1253 1257 errors = 0
1254 1258 for f in dc:
1255 1259 state = repo.dirstate.state(f)
1256 1260 if state in "nr" and f not in m1:
1257 1261 ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
1258 1262 errors += 1
1259 1263 if state in "a" and f in m1:
1260 1264 ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
1261 1265 errors += 1
1262 1266 if state in "m" and f not in m1 and f not in m2:
1263 1267 ui.warn(_("%s in state %s, but not in either manifest\n") %
1264 1268 (f, state))
1265 1269 errors += 1
1266 1270 for f in m1:
1267 1271 state = repo.dirstate.state(f)
1268 1272 if state not in "nrm":
1268 1272 ui.warn(_("%s in manifest1, but listed as state %s\n") % (f, state))
1270 1274 errors += 1
1271 1275 if errors:
1272 1276 error = _(".hg/dirstate inconsistent with current parent's manifest")
1273 1277 raise util.Abort(error)
1274 1278
1275 1279 def debugconfig(ui, repo, *values):
1276 1280 """show combined config settings from all hgrc files
1277 1281
1278 1282 With no args, print names and values of all config items.
1279 1283
1280 1284 With one arg of the form section.name, print just the value of
1281 1285 that config item.
1282 1286
1283 1287 With multiple args, print names and values of all config items
1284 1288 with matching section names."""
1285 1289
1286 1290 if values:
1287 1291 if len([v for v in values if '.' in v]) > 1:
1288 1292 raise util.Abort(_('only one config item permitted'))
1289 1293 for section, name, value in ui.walkconfig():
1290 1294 sectname = section + '.' + name
1291 1295 if values:
1292 1296 for v in values:
1293 1297 if v == section:
1294 1298 ui.write('%s=%s\n' % (sectname, value))
1295 1299 elif v == sectname:
1296 1300 ui.write(value, '\n')
1297 1301 else:
1298 1302 ui.write('%s=%s\n' % (sectname, value))
1299 1303
1300 1304 def debugsetparents(ui, repo, rev1, rev2=None):
1301 1305 """manually set the parents of the current working directory
1302 1306
1303 1307 This is useful for writing repository conversion tools, but should
1304 1308 be used with care.
1305 1309 """
1306 1310
1307 1311 if not rev2:
1308 1312 rev2 = hex(nullid)
1309 1313
1310 1314 repo.dirstate.setparents(repo.lookup(rev1), repo.lookup(rev2))
1311 1315
1312 1316 def debugstate(ui, repo):
1313 1317 """show the contents of the current dirstate"""
1314 1318 repo.dirstate.read()
1315 1319 dc = repo.dirstate.map
1316 1320 keys = dc.keys()
1317 1321 keys.sort()
1318 1322 for file_ in keys:
1319 1323 ui.write("%c %3o %10d %s %s\n"
1320 1324 % (dc[file_][0], dc[file_][1] & 0777, dc[file_][2],
1321 1325 time.strftime("%x %X",
1322 1326 time.localtime(dc[file_][3])), file_))
1323 1327 for f in repo.dirstate.copies:
1324 1328 ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copies[f], f))
1325 1329
1326 1330 def debugdata(ui, file_, rev):
1327 1331 """dump the contents of a data file revision"""
1328 1332 r = revlog.revlog(util.opener(os.getcwd(), audit=False),
1329 1333 file_[:-2] + ".i", file_, 0)
1330 1334 try:
1331 1335 ui.write(r.revision(r.lookup(rev)))
1332 1336 except KeyError:
1333 1337 raise util.Abort(_('invalid revision identifier %s'), rev)
1334 1338
1335 1339 def debugindex(ui, file_):
1336 1340 """dump the contents of an index file"""
1337 1341 r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
1338 1342 ui.write(" rev offset length base linkrev" +
1339 1343 " nodeid p1 p2\n")
1340 1344 for i in range(r.count()):
1341 1345 node = r.node(i)
1342 1346 pp = r.parents(node)
1343 1347 ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
1344 1348 i, r.start(i), r.length(i), r.base(i), r.linkrev(node),
1345 1349 short(node), short(pp[0]), short(pp[1])))
1346 1350
1347 1351 def debugindexdot(ui, file_):
1348 1352 """dump an index DAG as a .dot file"""
1349 1353 r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
1350 1354 ui.write("digraph G {\n")
1351 1355 for i in range(r.count()):
1352 1356 node = r.node(i)
1353 1357 pp = r.parents(node)
1354 1358 ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
1355 1359 if pp[1] != nullid:
1356 1360 ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
1357 1361 ui.write("}\n")
1358 1362
1359 1363 def debugrename(ui, repo, file, rev=None):
1360 1364 """dump rename information"""
1361 1365 r = repo.file(relpath(repo, [file])[0])
1362 1366 if rev:
1363 1367 try:
1364 1368 # assume all revision numbers are for changesets
1365 1369 n = repo.lookup(rev)
1366 1370 change = repo.changelog.read(n)
1367 1371 m = repo.manifest.read(change[0])
1368 1372 n = m[relpath(repo, [file])[0]]
1369 1373 except (hg.RepoError, KeyError):
1370 1374 n = r.lookup(rev)
1371 1375 else:
1372 1376 n = r.tip()
1373 1377 m = r.renamed(n)
1374 1378 if m:
1375 1379 ui.write(_("renamed from %s:%s\n") % (m[0], hex(m[1])))
1376 1380 else:
1377 1381 ui.write(_("not renamed\n"))
1378 1382
1379 1383 def debugwalk(ui, repo, *pats, **opts):
1380 1384 """show how files match on given patterns"""
1381 1385 items = list(walk(repo, pats, opts))
1382 1386 if not items:
1383 1387 return
1384 1388 fmt = '%%s %%-%ds %%-%ds %%s' % (
1385 1389 max([len(abs) for (src, abs, rel, exact) in items]),
1386 1390 max([len(rel) for (src, abs, rel, exact) in items]))
1387 1391 for src, abs, rel, exact in items:
1388 1392 line = fmt % (src, abs, rel, exact and 'exact' or '')
1389 1393 ui.write("%s\n" % line.rstrip())
1390 1394
1391 1395 def diff(ui, repo, *pats, **opts):
1392 1396 """diff repository (or selected files)
1393 1397
1394 1398 Show differences between revisions for the specified files.
1395 1399
1396 1400 Differences between files are shown using the unified diff format.
1397 1401
1398 1402 When two revision arguments are given, then changes are shown
1399 1403 between those revisions. If only one revision is specified then
1400 1404 that revision is compared to the working directory, and, when no
1401 1405 revisions are specified, the files in the working directory are
1402 1406 compared to the working directory's parent.
1403 1407
1404 1408 Without the -a option, diff will avoid generating diffs of files
1405 1409 it detects as binary. With -a, diff will generate a diff anyway,
1406 1410 probably with undesirable results.
1407 1411 """
1408 1412 node1, node2 = revpair(ui, repo, opts['rev'])
1409 1413
1410 1414 fns, matchfn, anypats = matchpats(repo, pats, opts)
1411 1415
1412 1416 dodiff(sys.stdout, ui, repo, node1, node2, fns, match=matchfn,
1413 1417 text=opts['text'], opts=opts)
1414 1418
1415 1419 def doexport(ui, repo, changeset, seqno, total, revwidth, opts):
1416 1420 node = repo.lookup(changeset)
1417 1421 parents = [p for p in repo.changelog.parents(node) if p != nullid]
1418 1422 if opts['switch_parent']:
1419 1423 parents.reverse()
1420 1424 prev = (parents and parents[0]) or nullid
1421 1425 change = repo.changelog.read(node)
1422 1426
1423 1427 fp = make_file(repo, opts['output'], node, total=total, seqno=seqno,
1424 1428 revwidth=revwidth)
1425 1429 if fp != sys.stdout:
1426 1430 ui.note("%s\n" % fp.name)
1427 1431
1428 1432 fp.write("# HG changeset patch\n")
1429 1433 fp.write("# User %s\n" % change[1])
1430 1434 fp.write("# Date %d %d\n" % change[2])
1431 1435 fp.write("# Node ID %s\n" % hex(node))
1432 1436 fp.write("# Parent %s\n" % hex(prev))
1433 1437 if len(parents) > 1:
1434 1438 fp.write("# Parent %s\n" % hex(parents[1]))
1435 1439 fp.write(change[4].rstrip())
1436 1440 fp.write("\n\n")
1437 1441
1438 1442 dodiff(fp, ui, repo, prev, node, text=opts['text'])
1439 1443 if fp != sys.stdout:
1440 1444 fp.close()
1441 1445
1442 1446 def export(ui, repo, *changesets, **opts):
1443 1447 """dump the header and diffs for one or more changesets
1444 1448
1445 1449 Print the changeset header and diffs for one or more revisions.
1446 1450
1447 1451 The information shown in the changeset header is: author,
1448 1452 changeset hash, parent and commit comment.
1449 1453
1450 1454 Output may be to a file, in which case the name of the file is
1451 1455 given using a format string. The formatting rules are as follows:
1452 1456
1453 1457 %% literal "%" character
1454 1458 %H changeset hash (40 hexadecimal digits)
1455 1459 %N number of patches being generated
1456 1460 %R changeset revision number
1457 1461 %b basename of the exporting repository
1458 1462 %h short-form changeset hash (12 hexadecimal digits)
1459 1463 %n zero-padded sequence number, starting at 1
1460 1464 %r zero-padded changeset revision number
1461 1465
1462 1466 Without the -a option, export will avoid generating diffs of files
1463 1467 it detects as binary. With -a, export will generate a diff anyway,
1464 1468 probably with undesirable results.
1465 1469
1466 1470 With the --switch-parent option, the diff will be against the second
1467 1471 parent. This can be useful for reviewing a merge.
1468 1472 """
1469 1473 if not changesets:
1470 1474 raise util.Abort(_("export requires at least one changeset"))
1471 1475 seqno = 0
1472 1476 revs = list(revrange(ui, repo, changesets))
1473 1477 total = len(revs)
1474 1478 revwidth = max(map(len, revs))
1475 1479 msg = len(revs) > 1 and _("Exporting patches:\n") or _("Exporting patch:\n")
1476 1480 ui.note(msg)
1477 1481 for cset in revs:
1478 1482 seqno += 1
1479 1483 doexport(ui, repo, cset, seqno, total, revwidth, opts)
1480 1484
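# The format specifiers documented above can be illustrated with a small
# standalone expander. This is a hypothetical sketch, not the actual
# make_file implementation; real export computes the zero-padding width
# (revwidth) from the widest revision number being exported.

```python
def expandname(pat, node, rev, seqno, total, width):
    # hypothetical helper: expand export-style %-specifiers in a pattern
    subs = {
        '%': '%',
        'H': node,                     # full 40-digit changeset hash
        'h': node[:12],                # short-form changeset hash
        'R': str(rev),                 # changeset revision number
        'r': str(rev).zfill(width),    # zero-padded revision number
        'N': str(total),               # number of patches being generated
        'n': str(seqno).zfill(width),  # zero-padded sequence number
    }
    out, i = [], 0
    while i < len(pat):
        # consume "%X" pairs when X is a known specifier, else copy through
        if pat[i] == '%' and i + 1 < len(pat) and pat[i + 1] in subs:
            out.append(subs[pat[i + 1]])
            i += 2
        else:
            out.append(pat[i])
            i += 1
    return ''.join(out)
```

# For example, expandname('%h-%r.patch', 'a' * 40, 3, 1, 2, 2) gives
# 'aaaaaaaaaaaa-03.patch'.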
1481 1485 def forget(ui, repo, *pats, **opts):
1482 1486 """don't add the specified files on the next commit (DEPRECATED)
1483 1487
1484 1488 (DEPRECATED)
1485 1489 Undo an 'hg add' scheduled for the next commit.
1486 1490
1487 1491 This command is now deprecated and will be removed in a future
1488 1492 release. Please use revert instead.
1489 1493 """
1490 1494 ui.warn(_("(the forget command is deprecated; use revert instead)\n"))
1491 1495 forget = []
1492 1496 for src, abs, rel, exact in walk(repo, pats, opts):
1493 1497 if repo.dirstate.state(abs) == 'a':
1494 1498 forget.append(abs)
1495 1499 if ui.verbose or not exact:
1496 1500 ui.status(_('forgetting %s\n') % ((pats and rel) or abs))
1497 1501 repo.forget(forget)
1498 1502
1499 1503 def grep(ui, repo, pattern, *pats, **opts):
1500 1504 """search for a pattern in specified files and revisions
1501 1505
1502 1506 Search revisions of files for a regular expression.
1503 1507
1504 1508 This command behaves differently from Unix grep. It only accepts
1505 1509 Python/Perl regexps. It searches repository history, not the
1506 1510 working directory. It always prints the revision number in which
1507 1511 a match appears.
1508 1512
1509 1513 By default, grep only prints output for the first revision of a
1510 1514 file in which it finds a match. To get it to print every revision
1511 1515 that contains a change in match status ("-" for a match that
1512 1516 becomes a non-match, or "+" for a non-match that becomes a match),
1513 1517 use the --all flag.
1514 1518 """
1515 1519 reflags = 0
1516 1520 if opts['ignore_case']:
1517 1521 reflags |= re.I
1518 1522 regexp = re.compile(pattern, reflags)
1519 1523 sep, eol = ':', '\n'
1520 1524 if opts['print0']:
1521 1525 sep = eol = '\0'
1522 1526
1523 1527 fcache = {}
1524 1528 def getfile(fn):
1525 1529 if fn not in fcache:
1526 1530 fcache[fn] = repo.file(fn)
1527 1531 return fcache[fn]
1528 1532
1529 1533 def matchlines(body):
1530 1534 begin = 0
1531 1535 linenum = 0
1532 1536 while True:
1533 1537 match = regexp.search(body, begin)
1534 1538 if not match:
1535 1539 break
1536 1540 mstart, mend = match.span()
1537 1541 linenum += body.count('\n', begin, mstart) + 1
1538 1542 lstart = body.rfind('\n', begin, mstart) + 1 or begin
1539 1543 lend = body.find('\n', mend)
1540 1544 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
1541 1545 begin = lend + 1
1542 1546
1543 1547 class linestate(object):
1544 1548 def __init__(self, line, linenum, colstart, colend):
1545 1549 self.line = line
1546 1550 self.linenum = linenum
1547 1551 self.colstart = colstart
1548 1552 self.colend = colend
1549 1553 def __eq__(self, other):
1550 1554 return self.line == other.line
1551 1555 def __hash__(self):
1552 1556 return hash(self.line)
1553 1557
1554 1558 matches = {}
1555 1559 def grepbody(fn, rev, body):
1556 1560 matches[rev].setdefault(fn, {})
1557 1561 m = matches[rev][fn]
1558 1562 for lnum, cstart, cend, line in matchlines(body):
1559 1563 s = linestate(line, lnum, cstart, cend)
1560 1564 m[s] = s
1561 1565
1562 1566 # prev maps file name to the last revision processed for it,
1563 1567 # used by display() when --all is given
1564 1568 prev = {}
1564 1568 ucache = {}
1565 1569 def display(fn, rev, states, prevstates):
1566 1570 diff = list(sets.Set(states).symmetric_difference(sets.Set(prevstates)))
1567 1571 diff.sort(lambda x, y: cmp(x.linenum, y.linenum))
1568 1572 counts = {'-': 0, '+': 0}
1569 1573 filerevmatches = {}
1570 1574 for l in diff:
1571 1575 if incrementing or not opts['all']:
1572 1576 change = ((l in prevstates) and '-') or '+'
1573 1577 r = rev
1574 1578 else:
1575 1579 change = ((l in states) and '-') or '+'
1576 1580 r = prev[fn]
1577 1581 cols = [fn, str(rev)]
1578 1582 if opts['line_number']:
1579 1583 cols.append(str(l.linenum))
1580 1584 if opts['all']:
1581 1585 cols.append(change)
1582 1586 if opts['user']:
1583 1587 cols.append(trimuser(ui, getchange(rev)[1], rev,
1584 1588 ucache))
1585 1589 if opts['files_with_matches']:
1586 1590 c = (fn, rev)
1587 1591 if c in filerevmatches:
1588 1592 continue
1589 1593 filerevmatches[c] = 1
1590 1594 else:
1591 1595 cols.append(l.line)
1592 1596 ui.write(sep.join(cols), eol)
1593 1597 counts[change] += 1
1594 1598 return counts['+'], counts['-']
1595 1599
1596 1600 fstate = {}
1597 1601 skip = {}
1598 1602 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1599 1603 count = 0
1600 1604 incrementing = False
1601 1605 for st, rev, fns in changeiter:
1602 1606 if st == 'window':
1603 1607 incrementing = rev
1604 1608 matches.clear()
1605 1609 elif st == 'add':
1606 1610 change = repo.changelog.read(repo.lookup(str(rev)))
1607 1611 mf = repo.manifest.read(change[0])
1608 1612 matches[rev] = {}
1609 1613 for fn in fns:
1610 1614 if fn in skip:
1611 1615 continue
1612 1616 fstate.setdefault(fn, {})
1613 1617 try:
1614 1618 grepbody(fn, rev, getfile(fn).read(mf[fn]))
1615 1619 except KeyError:
1616 1620 pass
1617 1621 elif st == 'iter':
1618 1622 states = matches[rev].items()
1619 1623 states.sort()
1620 1624 for fn, m in states:
1621 1625 if fn in skip:
1622 1626 continue
1623 1627 if incrementing or not opts['all'] or fstate[fn]:
1624 1628 pos, neg = display(fn, rev, m, fstate[fn])
1625 1629 count += pos + neg
1626 1630 if pos and not opts['all']:
1627 1631 skip[fn] = True
1628 1632 fstate[fn] = m
1629 1633 prev[fn] = rev
1630 1634
1631 1635 if not incrementing:
1632 1636 fstate = fstate.items()
1633 1637 fstate.sort()
1634 1638 for fn, state in fstate:
1635 1639 if fn in skip:
1636 1640 continue
1637 1641 display(fn, rev, {}, state)
1638 1642 return (count == 0 and 1) or 0
1639 1643
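# The --all behaviour described in the grep docstring boils down to a set
# difference between a file's match states in adjacent revisions. A minimal
# sketch with a hypothetical helper name (the in-tree code tracks linestate
# objects and column spans as well):

```python
def matchchanges(prevstates, states):
    # classify lines whose match status changed between two revisions:
    # "+" for a non-match that becomes a match, "-" for the reverse
    prevstates, states = set(prevstates), set(states)
    added = [('+', l) for l in states - prevstates]
    removed = [('-', l) for l in prevstates - states]
    # order by line text so output is deterministic
    return sorted(added + removed, key=lambda c: c[1])
```

# Lines present in both states produce no output, matching the
# "change in match status" rule described above.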
1640 1644 def heads(ui, repo, **opts):
1641 1645 """show current repository heads
1642 1646
1643 1647 Show all repository head changesets.
1644 1648
1645 1649 Repository "heads" are changesets that don't have child
1646 1650 changesets. They are where development generally takes place and
1647 1651 are the usual targets for update and merge operations.
1648 1652 """
1649 1653 if opts['rev']:
1650 1654 heads = repo.heads(repo.lookup(opts['rev']))
1651 1655 else:
1652 1656 heads = repo.heads()
1653 1657 br = None
1654 1658 if opts['branches']:
1655 1659 br = repo.branchlookup(heads)
1656 1660 displayer = show_changeset(ui, repo, opts)
1657 1661 for n in heads:
1658 1662 displayer.show(changenode=n, brinfo=br)
1659 1663
1660 1664 def identify(ui, repo):
1661 1665 """print information about the working copy
1662 1666
1663 1667 Print a short summary of the current state of the repo.
1664 1668
1665 1669 This summary identifies the repository state using one or two parent
1666 1670 hash identifiers, followed by a "+" if there are uncommitted changes
1667 1671 in the working directory, followed by a list of tags for this revision.
1668 1672 """
1669 1673 parents = [p for p in repo.dirstate.parents() if p != nullid]
1670 1674 if not parents:
1671 1675 ui.write(_("unknown\n"))
1672 1676 return
1673 1677
1674 1678 hexfunc = ui.verbose and hex or short
1675 1679 modified, added, removed, deleted, unknown = repo.changes()
1676 1680 output = ["%s%s" %
1677 1681 ('+'.join([hexfunc(parent) for parent in parents]),
1678 1682 (modified or added or removed or deleted) and "+" or "")]
1679 1683
1680 1684 if not ui.quiet:
1681 1685 # multiple tags for a single parent separated by '/'
1682 1686 parenttags = ['/'.join(tags)
1683 1687 for tags in map(repo.nodetags, parents) if tags]
1684 1688 # tags for multiple parents separated by ' + '
1685 1689 if parenttags:
1686 1690 output.append(' + '.join(parenttags))
1687 1691
1688 1692 ui.write("%s\n" % ' '.join(output))
1689 1693
1690 1694 def import_(ui, repo, patch1, *patches, **opts):
1691 1695 """import an ordered set of patches
1692 1696
1693 1697 Import a list of patches and commit them individually.
1694 1698
1695 1699 If there are outstanding changes in the working directory, import
1696 1700 will abort unless given the -f flag.
1697 1701
1698 1702 You can import a patch straight from a mail message. Even patches
1699 1703 as attachments work (the body part must be of type text/plain or
1700 1704 text/x-patch to be used). The From and Subject headers of the email
1701 1705 message are used as the default committer and commit message. All
1702 1706 text/plain body parts before the first diff are added to the commit
1703 1707 message.
1704 1708
1705 1709 If the imported patch was generated by hg export, its user and
1706 1710 description override the values from the message headers and body.
1707 1711 Values given on the command line with -m and -u override these.
1708 1712
1709 1713 To read a patch from standard input, use patch name "-".
1710 1714 """
1711 1715 patches = (patch1,) + patches
1712 1716
1713 1717 if not opts['force']:
1714 1718 bail_if_changed(repo)
1715 1719
1716 1720 d = opts["base"]
1717 1721 strip = opts["strip"]
1718 1722
1719 1723 mailre = re.compile(r'(?:From |[\w-]+:)')
1720 1724
1721 1725 # attempt to detect the start of a patch
1722 1726 # (this heuristic is borrowed from quilt)
1723 1727 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |' +
1724 1728 'retrieving revision [0-9]+(\.[0-9]+)*$|' +
1725 1729 '(---|\*\*\*)[ \t])', re.MULTILINE)
1726 1730
1727 1731 for patch in patches:
1728 1732 pf = os.path.join(d, patch)
1729 1733
1730 1734 message = None
1731 1735 user = None
1732 1736 date = None
1733 1737 hgpatch = False
1734 1738
1735 1739 p = email.Parser.Parser()
1736 1740 if pf == '-':
1737 1741 msg = p.parse(sys.stdin)
1738 1742 ui.status(_("applying patch from stdin\n"))
1739 1743 else:
1740 1744 msg = p.parse(file(pf))
1741 1745 ui.status(_("applying %s\n") % patch)
1742 1746
1743 1747 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
1744 1748 tmpfp = os.fdopen(fd, 'w')
1745 1749 try:
1746 1750 message = msg['Subject']
1747 1751 if message:
1748 1752 message = message.replace('\n\t', ' ')
1749 1753 ui.debug('Subject: %s\n' % message)
1750 1754 user = msg['From']
1751 1755 if user:
1752 1756 ui.debug('From: %s\n' % user)
1753 1757 diffs_seen = 0
1754 1758 ok_types = ('text/plain', 'text/x-patch')
1755 1759 for part in msg.walk():
1756 1760 content_type = part.get_content_type()
1757 1761 ui.debug('Content-Type: %s\n' % content_type)
1758 1762 if content_type not in ok_types:
1759 1763 continue
1760 1764 payload = part.get_payload(decode=True)
1761 1765 m = diffre.search(payload)
1762 1766 if m:
1763 1767 ui.debug(_('found patch at byte %d\n') % m.start(0))
1764 1768 diffs_seen += 1
1765 1769 hgpatch = False
1766 1770 fp = cStringIO.StringIO()
1767 1771 if message:
1768 1772 fp.write(message)
1769 1773 fp.write('\n')
1770 1774 for line in payload[:m.start(0)].splitlines():
1771 1775 if line.startswith('# HG changeset patch'):
1772 1776 ui.debug(_('patch generated by hg export\n'))
1773 1777 hgpatch = True
1774 1778 # drop earlier commit message content
1775 1779 fp.seek(0)
1776 1780 fp.truncate()
1777 1781 elif hgpatch:
1778 1782 if line.startswith('# User '):
1779 1783 user = line[7:]
1780 1784 ui.debug('From: %s\n' % user)
1781 1785 elif line.startswith("# Date "):
1782 1786 date = line[7:]
1783 1787 if not line.startswith('# '):
1784 1788 fp.write(line)
1785 1789 fp.write('\n')
1786 1790 message = fp.getvalue()
1787 1791 if tmpfp:
1788 1792 tmpfp.write(payload)
1789 1793 if not payload.endswith('\n'):
1790 1794 tmpfp.write('\n')
1791 1795 elif not diffs_seen and message and content_type == 'text/plain':
1792 1796 message += '\n' + payload
1793 1797
1794 1798 if opts['message']:
1795 1799 # pickup the cmdline msg
1796 1800 message = opts['message']
1797 1801 elif message:
1798 1802 # pickup the patch msg
1799 1803 message = message.strip()
1800 1804 else:
1801 1805 # launch the editor
1802 1806 message = None
1803 1807 ui.debug(_('message:\n%s\n') % message)
1804 1808
1805 1809 tmpfp.close()
1806 1810 if not diffs_seen:
1807 1811 raise util.Abort(_('no diffs found'))
1808 1812
1809 1813 files = util.patch(strip, tmpname, ui)
1810 1814 if len(files) > 0:
1811 1815 addremove_lock(ui, repo, files, {})
1812 1816 repo.commit(files, message, user, date)
1813 1817 finally:
1814 1818 os.unlink(tmpname)
1815 1819
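# The "# HG changeset patch" header scan performed in the loop above can be
# sketched as a standalone function. This is a simplification with a
# hypothetical name; the real code also discards any earlier mail-derived
# message when it sees the marker line, and streams the diff to a temp file.

```python
def parsehgheader(lines):
    # pull user/date out of "# User"/"# Date" header lines; everything
    # that is not a "# ..." header line becomes part of the commit message
    user = date = None
    message = []
    for line in lines:
        if line.startswith('# User '):
            user = line[7:]
        elif line.startswith('# Date '):
            date = line[7:]
        elif not line.startswith('# '):
            message.append(line)
    return user, date, '\n'.join(message)
```

# Header lines such as "# Node ID ..." are simply skipped, since only the
# message lines survive into the commit text.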
1816 1820 def incoming(ui, repo, source="default", **opts):
1817 1821 """show new changesets found in source
1818 1822
1819 1823 Show new changesets found in the specified path/URL or the default
1820 1824 pull location. These are the changesets that would be pulled if a pull
1821 1825 was requested.
1822 1826
1823 1827 For remote repositories, using --bundle avoids downloading the
1824 1828 changesets twice if incoming is followed by a pull.
1825 1829
1826 1830 See pull for valid source format details.
1827 1831 """
1828 1832 source = ui.expandpath(source)
1829 1833 ui.setconfig_remoteopts(**opts)
1830 1834
1831 1835 other = hg.repository(ui, source)
1832 1836 incoming = repo.findincoming(other, force=opts["force"])
1833 1837 if not incoming:
1834 1838 ui.status(_("no changes found\n"))
1835 1839 return
1836 1840
1837 1841 cleanup = None
1838 1842 try:
1839 1843 fname = opts["bundle"]
1840 1844 if fname or not other.local():
1841 1845 # create a bundle (uncompressed if other repo is not local)
1842 1846 cg = other.changegroup(incoming, "incoming")
1843 1847 fname = cleanup = write_bundle(cg, fname, compress=other.local())
1844 1848 # keep written bundle?
1845 1849 if opts["bundle"]:
1846 1850 cleanup = None
1847 1851 if not other.local():
1848 1852 # use the created uncompressed bundlerepo
1849 1853 other = bundlerepo.bundlerepository(ui, repo.root, fname)
1850 1854
1851 1855 revs = None
1852 1856 if opts['rev']:
1853 1857 revs = [other.lookup(rev) for rev in opts['rev']]
1854 1858 o = other.changelog.nodesbetween(incoming, revs)[0]
1855 1859 if opts['newest_first']:
1856 1860 o.reverse()
1857 1861 displayer = show_changeset(ui, other, opts)
1858 1862 for n in o:
1859 1863 parents = [p for p in other.changelog.parents(n) if p != nullid]
1860 1864 if opts['no_merges'] and len(parents) == 2:
1861 1865 continue
1862 1866 displayer.show(changenode=n)
1863 1867 if opts['patch']:
1864 1868 prev = (parents and parents[0]) or nullid
1865 1869 dodiff(ui, ui, other, prev, n)
1866 1870 ui.write("\n")
1867 1871 finally:
1868 1872 if hasattr(other, 'close'):
1869 1873 other.close()
1870 1874 if cleanup:
1871 1875 os.unlink(cleanup)
1872 1876
1873 1877 def init(ui, dest=".", **opts):
1874 1878 """create a new repository in the given directory
1875 1879
1876 1880 Initialize a new repository in the given directory. If the given
1877 1881 directory does not exist, it is created.
1878 1882
1879 1883 If no directory is given, the current directory is used.
1880 1884
1881 1885 It is possible to specify an ssh:// URL as the destination.
1882 1886 Look at the help text for the pull command for important details
1883 1887 about ssh:// URLs.
1884 1888 """
1885 1889 ui.setconfig_remoteopts(**opts)
1886 1890 hg.repository(ui, dest, create=1)
1887 1891
1888 1892 def locate(ui, repo, *pats, **opts):
1889 1893 """locate files matching specific patterns
1890 1894
1891 1895 Print all files under Mercurial control whose names match the
1892 1896 given patterns.
1893 1897
1894 1898 This command searches the current directory and its
1895 1899 subdirectories. To search an entire repository, move to the root
1896 1900 of the repository.
1897 1901
1898 1902 If no patterns are given to match, this command prints all file
1899 1903 names.
1900 1904
1901 1905 If you want to feed the output of this command into the "xargs"
1902 1906 command, use the "-0" option to both this command and "xargs".
1903 1907 This will avoid the problem of "xargs" treating single filenames
1904 1908 that contain white space as multiple filenames.
1905 1909 """
1906 1910 end = opts['print0'] and '\0' or '\n'
1907 1911 rev = opts['rev']
1908 1912 if rev:
1909 1913 node = repo.lookup(rev)
1910 1914 else:
1911 1915 node = None
1912 1916
1913 1917 for src, abs, rel, exact in walk(repo, pats, opts, node=node,
1914 1918 head='(?:.*/|)'):
1915 1919 if not node and repo.dirstate.state(abs) == '?':
1916 1920 continue
1917 1921 if opts['fullpath']:
1918 1922 ui.write(os.path.join(repo.root, abs), end)
1919 1923 else:
1920 1924 ui.write(((pats and rel) or abs), end)
1921 1925
1922 1926 def log(ui, repo, *pats, **opts):
1923 1927 """show revision history of entire repository or files
1924 1928
1925 1929 Print the revision history of the specified files or the entire project.
1926 1930
1927 1931 By default this command outputs: changeset id and hash, tags,
1928 1932 non-trivial parents, user, date and time, and a summary for each
1929 1933 commit. When the -v/--verbose switch is used, the list of changed
1930 1934 files and full commit message are shown.
1931 1935 """
1932 1936 class dui(object):
1933 1937 # Implement and delegate some ui protocol. Save hunks of
1934 1938 # output for later display in the desired order.
1935 1939 def __init__(self, ui):
1936 1940 self.ui = ui
1937 1941 self.hunk = {}
1938 1942 self.header = {}
1939 1943 def bump(self, rev):
1940 1944 self.rev = rev
1941 1945 self.hunk[rev] = []
1942 1946 self.header[rev] = []
1943 1947 def note(self, *args):
1944 1948 if self.verbose:
1945 1949 self.write(*args)
1946 1950 def status(self, *args):
1947 1951 if not self.quiet:
1948 1952 self.write(*args)
1949 1953 def write(self, *args):
1950 1954 self.hunk[self.rev].append(args)
1951 1955 def write_header(self, *args):
1952 1956 self.header[self.rev].append(args)
1953 1957 def debug(self, *args):
1954 1958 if self.debugflag:
1955 1959 self.write(*args)
1956 1960 def __getattr__(self, key):
1957 1961 return getattr(self.ui, key)
1958 1962
1959 1963 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1960 1964
1961 1965 if opts['limit']:
1962 1966 try:
1963 1967 limit = int(opts['limit'])
1964 1968 except ValueError:
1965 1969 raise util.Abort(_('limit must be a positive integer'))
1966 1970 if limit <= 0: raise util.Abort(_('limit must be positive'))
1967 1971 else:
1968 1972 limit = sys.maxint
1969 1973 count = 0
1970 1974
1971 1975 displayer = show_changeset(ui, repo, opts)
1972 1976 for st, rev, fns in changeiter:
1973 1977 if st == 'window':
1974 1978 du = dui(ui)
1975 1979 displayer.ui = du
1976 1980 elif st == 'add':
1977 1981 du.bump(rev)
1978 1982 changenode = repo.changelog.node(rev)
1979 1983 parents = [p for p in repo.changelog.parents(changenode)
1980 1984 if p != nullid]
1981 1985 if opts['no_merges'] and len(parents) == 2:
1982 1986 continue
1983 1987 if opts['only_merges'] and len(parents) != 2:
1984 1988 continue
1985 1989
1986 1990 if opts['keyword']:
1987 1991 changes = getchange(rev)
1988 1992 miss = 0
1989 1993 for k in [kw.lower() for kw in opts['keyword']]:
1990 1994 if not (k in changes[1].lower() or
1991 1995 k in changes[4].lower() or
1992 1996 k in " ".join(changes[3][:20]).lower()):
1993 1997 miss = 1
1994 1998 break
1995 1999 if miss:
1996 2000 continue
1997 2001
1998 2002 br = None
1999 2003 if opts['branches']:
2000 2004 br = repo.branchlookup([repo.changelog.node(rev)])
2001 2005
2002 2006 displayer.show(rev, brinfo=br)
2003 2007 if opts['patch']:
2004 2008 prev = (parents and parents[0]) or nullid
2005 2009 dodiff(du, du, repo, prev, changenode, match=matchfn)
2006 2010 du.write("\n\n")
2007 2011 elif st == 'iter':
2008 2012 if count == limit: break
2009 2013 if du.header[rev]:
2010 2014 for args in du.header[rev]:
2011 2015 ui.write_header(*args)
2012 2016 if du.hunk[rev]:
2013 2017 count += 1
2014 2018 for args in du.hunk[rev]:
2015 2019 ui.write(*args)
2016 2020
2017 2021 def manifest(ui, repo, rev=None):
2018 2022 """output the latest or given revision of the project manifest
2019 2023
2020 2024 Print a list of version-controlled files for the given revision.
2021 2025
2022 2026 The manifest is the list of files under version control. If no revision
2023 2027 is given then the tip is used.
2024 2028 """
2025 2029 if rev:
2026 2030 try:
2027 2031 # assume all revision numbers are for changesets
2028 2032 n = repo.lookup(rev)
2029 2033 change = repo.changelog.read(n)
2030 2034 n = change[0]
2031 2035 except hg.RepoError:
2032 2036 n = repo.manifest.lookup(rev)
2033 2037 else:
2034 2038 n = repo.manifest.tip()
2035 2039 m = repo.manifest.read(n)
2036 2040 mf = repo.manifest.readflags(n)
2037 2041 files = m.keys()
2038 2042 files.sort()
2039 2043
2040 2044 for f in files:
2041 2045 ui.write("%40s %3s %s\n" % (hex(m[f]), mf[f] and "755" or "644", f))
2042 2046
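The write call in `manifest` above packs each entry into fixed-width columns. A small sketch of that layout, using an invented node id and filename purely for illustration:

```python
import hashlib

# Hypothetical manifest entry: a 40-char hex node id, a three-character
# permission column ("755" for executable files, "644" otherwise), and
# the filename -- the same "%40s %3s %s" layout manifest() prints.
node = hashlib.sha1(b"file contents").hexdigest()
is_exec = False
line = "%40s %3s %s" % (node, is_exec and "755" or "644", "hello.py")
print(line)
```

Since a SHA-1 hex digest is exactly 40 characters, the `%40s` field never actually pads; it only documents the intended column width.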
def merge(ui, repo, node=None, **opts):
    """merge working directory with another revision

    Merge the contents of the current working directory and the
    requested revision. Files that changed between either parent are
    marked as changed for the next commit and a commit must be
    performed before any further updates are allowed.
    """
    return doupdate(ui, repo, node=node, merge=True, **opts)

def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for valid destination format details.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    ui.setconfig_remoteopts(**opts)
    revs = None
    if opts['rev']:
        revs = [repo.lookup(rev) for rev in opts['rev']]

    other = hg.repository(ui, dest)
    o = repo.findoutgoing(other, force=opts['force'])
    if not o:
        ui.status(_("no changes found\n"))
        return
    o = repo.changelog.nodesbetween(o, revs)[0]
    if opts['newest_first']:
        o.reverse()
    displayer = show_changeset(ui, repo, opts)
    for n in o:
        parents = [p for p in repo.changelog.parents(n) if p != nullid]
        if opts['no_merges'] and len(parents) == 2:
            continue
        displayer.show(changenode=n)
        if opts['patch']:
            prev = (parents and parents[0]) or nullid
            dodiff(ui, ui, repo, prev, n)
            ui.write("\n")

def parents(ui, repo, rev=None, branches=None, **opts):
    """show the parents of the working dir or revision

    Print the working directory's parent revisions.
    """
    if rev:
        p = repo.changelog.parents(repo.lookup(rev))
    else:
        p = repo.dirstate.parents()

    br = None
    if branches is not None:
        br = repo.branchlookup(p)
    displayer = show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(changenode=n, brinfo=br)

def paths(ui, repo, search=None):
    """show definition of symbolic path names

    Show definition of symbolic path name NAME. If no name is given, show
    definition of available names.

    Path names are defined in the [paths] section of /etc/mercurial/hgrc
    and $HOME/.hgrc. If run inside a repository, .hg/hgrc is used, too.
    """
    if search:
        for name, path in ui.configitems("paths"):
            if name == search:
                ui.write("%s\n" % path)
                return
        ui.warn(_("not found!\n"))
        return 1
    else:
        for name, path in ui.configitems("paths"):
            ui.write("%s = %s\n" % (name, path))

def postincoming(ui, repo, modheads, optupdate):
    if modheads == 0:
        return
    if optupdate:
        if modheads == 1:
            return doupdate(ui, repo)
        else:
            ui.status(_("not updating, since new heads added\n"))
    if modheads > 1:
        ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to the local repository. By default, this
    does not update the copy of the project in the working directory.

    Valid URLs are of the form:

      local/filesystem/path
      http://[user@]host[:port]/[path]
      https://[user@]host[:port]/[path]
      ssh://[user@]host[:port]/[path]

    Some notes about using SSH with Mercurial:
    - SSH requires an accessible shell account on the destination machine
      and a copy of hg in the remote path or specified with remotecmd.
    - path is relative to the remote user's home directory by default.
      Use an extra slash at the start of a path to specify an absolute path:
        ssh://example.com//tmp/repository
    - Mercurial doesn't use its own compression via SSH; the right thing
      to do is to configure it in your ~/.ssh/ssh_config, e.g.:
        Host *.mylocalnetwork.example.com
          Compression off
        Host *
          Compression on
      Alternatively specify "ssh -C" as your ssh command in your hgrc or
      with the --ssh command line option.
    """
    source = ui.expandpath(source)
    ui.setconfig_remoteopts(**opts)

    other = hg.repository(ui, source)
    ui.status(_('pulling from %s\n') % (source))
    revs = None
    if opts['rev'] and not other.local():
        raise util.Abort(_("pull -r doesn't work for remote repositories yet"))
    elif opts['rev']:
        revs = [other.lookup(rev) for rev in opts['rev']]
    modheads = repo.pull(other, heads=revs, force=opts['force'])
    return postincoming(ui, repo, modheads, opts['update'])

def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changes from the local repository to the given destination.

    This is the symmetrical operation for pull. It helps to move
    changes from the current repository to a different one. If the
    destination is local this is identical to a pull in that directory
    from the current one.

    By default, push will refuse to run if it detects the result would
    increase the number of remote heads. This generally indicates that
    the client has forgotten to sync and merge before pushing.

    Valid URLs are of the form:

      local/filesystem/path
      ssh://[user@]host[:port]/[path]

    Look at the help text for the pull command for important details
    about ssh:// URLs.

    Pushing to http:// and https:// URLs is possible, too, if this
    feature is enabled on the remote Mercurial server.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    ui.setconfig_remoteopts(**opts)

    other = hg.repository(ui, dest)
    ui.status('pushing to %s\n' % (dest))
    revs = None
    if opts['rev']:
        revs = [repo.lookup(rev) for rev in opts['rev']]
    r = repo.push(other, opts['force'], revs=revs)
    return r == 0

def rawcommit(ui, repo, *flist, **rc):
    """raw commit interface (DEPRECATED)

    (DEPRECATED)
    Low-level commit, for use in helper scripts.

    This command is not intended to be used by normal users, as it is
    primarily useful for importing from other SCMs.

    This command is now deprecated and will be removed in a future
    release, please use debugsetparents and commit instead.
    """

    ui.warn(_("(the rawcommit command is deprecated)\n"))

    message = rc['message']
    if not message and rc['logfile']:
        try:
            message = open(rc['logfile']).read()
        except IOError:
            pass
    if not message and not rc['logfile']:
        raise util.Abort(_("missing commit message"))

    files = relpath(repo, list(flist))
    if rc['files']:
        files += open(rc['files']).read().splitlines()

    rc['parent'] = map(repo.lookup, rc['parent'])

    try:
        repo.rawcommit(files, message, rc['user'], rc['date'], *rc['parent'])
    except ValueError, inst:
        raise util.Abort(str(inst))

def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an interrupted
    operation. It should only be necessary when Mercurial suggests it.
    """
    if repo.recover():
        return repo.verify()
    return 1

def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the repository.

    This command schedules the files to be removed at the next commit.
    This only removes files from the current branch, not from the
    entire project history. If the files still exist in the working
    directory, they will be deleted from it. If invoked with --after,
    files that have been manually deleted are marked as removed.

    Modified files and added files are not removed by default. To
    remove them, use the -f/--force option.
    """
    names = []
    if not opts['after'] and not pats:
        raise util.Abort(_('no files specified'))
    files, matchfn, anypats = matchpats(repo, pats, opts)
    exact = dict.fromkeys(files)
    mardu = map(dict.fromkeys, repo.changes(files=files, match=matchfn))
    modified, added, removed, deleted, unknown = mardu
    remove, forget = [], []
    for src, abs, rel, exact in walk(repo, pats, opts):
        reason = None
        if abs not in deleted and opts['after']:
            reason = _('is still present')
        elif abs in modified and not opts['force']:
            reason = _('is modified (use -f to force removal)')
        elif abs in added:
            if opts['force']:
                forget.append(abs)
                continue
            reason = _('has been marked for add (use -f to force removal)')
        elif abs in unknown:
            reason = _('is not managed')
        elif abs in removed:
            continue
        if reason:
            if exact:
                ui.warn(_('not removing %s: file %s\n') % (rel, reason))
        else:
            if ui.verbose or not exact:
                ui.status(_('removing %s\n') % rel)
            remove.append(abs)
    repo.forget(forget)
    repo.remove(remove, unlink=not opts['after'])

def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If
    dest is a directory, copies are put in that directory. If dest is
    a file, there can only be one source.

    By default, this command copies the contents of files as they
    stand in the working directory. If invoked with --after, the
    operation is recorded, but no copying is performed.

    This command takes effect in the next commit.

    NOTE: This command should be treated as experimental. While it
    should properly record renamed files, this information is not yet
    fully used by merge, nor fully reported by log.
    """
    wlock = repo.wlock(0)
    errs, copied = docopy(ui, repo, pats, opts, wlock)
    names = []
    for abs, rel, exact in copied:
        if ui.verbose or not exact:
            ui.status(_('removing %s\n') % rel)
        names.append(abs)
    if not opts.get('dry_run'):
        repo.remove(names, True, wlock)
    return errs

def revert(ui, repo, *pats, **opts):
    """revert files or dirs to their states as of some revision

    With no revision specified, revert the named files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of the affected files to an unmodified
    state. If the working directory has two parents, you must
    explicitly specify the revision to revert to.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup.

    Using the -r option, revert the given files or directories to
    their contents as of a specific revision. This can be helpful to
    "roll back" some or all of a change that should not have been
    committed.

    Revert modifies the working directory. It does not commit any
    changes, or change the parent of the working directory. If you
    revert to a revision other than the parent of the working
    directory, the reverted files will thus appear modified
    afterwards.

    If a file has been deleted, it is recreated. If the executable
    mode of a file was changed, it is reset.

    If names are given, all files matching the names are reverted.

    If no arguments are given, all files in the repository are reverted.
    """
    parent, p2 = repo.dirstate.parents()
    if opts['rev']:
        node = repo.lookup(opts['rev'])
    elif p2 != nullid:
        raise util.Abort(_('working dir has two parents; '
                           'you must specify the revision to revert to'))
    else:
        node = parent
    mf = repo.manifest.read(repo.changelog.read(node)[0])
    if node == parent:
        pmf = mf
    else:
        pmf = None

    wlock = repo.wlock()

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not other.

    names = {}
    target_only = {}

    # walk dirstate.

    for src, abs, rel, exact in walk(repo, pats, opts, badmatch=mf.has_key):
        names[abs] = (rel, exact)
        if src == 'b':
            target_only[abs] = True

    # walk target manifest.

    for src, abs, rel, exact in walk(repo, pats, opts, node=node,
                                     badmatch=names.has_key):
        if abs in names: continue
        names[abs] = (rel, exact)
        target_only[abs] = True

    changes = repo.changes(match=names.has_key, wlock=wlock)
    modified, added, removed, deleted, unknown = map(dict.fromkeys, changes)

    revert = ([], _('reverting %s\n'))
    add = ([], _('adding %s\n'))
    remove = ([], _('removing %s\n'))
    forget = ([], _('forgetting %s\n'))
    undelete = ([], _('undeleting %s\n'))
    update = {}

    disptable = (
        # dispatch table:
        #   file state
        #   action if in target manifest
        #   action if not in target manifest
        #   make backup if in target manifest
        #   make backup if not in target manifest
        (modified, revert, remove, True, True),
        (added, revert, forget, True, False),
        (removed, undelete, None, False, False),
        (deleted, revert, remove, False, False),
        (unknown, add, None, True, False),
        (target_only, add, None, False, False),
        )

    entries = names.items()
    entries.sort()

    r = 0
    for abs, (rel, exact) in entries:
        mfentry = mf.get(abs)
        def handle(xlist, dobackup):
            xlist[0].append(abs)
            update[abs] = 1
            if dobackup and not opts['no_backup'] and os.path.exists(rel):
                bakname = "%s.orig" % rel
                ui.note(_('saving current version of %s as %s\n') %
                        (rel, bakname))
                if not opts.get('dry_run'):
                    shutil.copyfile(rel, bakname)
                    shutil.copymode(rel, bakname)
            if ui.verbose or not exact:
                ui.status(xlist[1] % rel)
        for table, hitlist, misslist, backuphit, backupmiss in disptable:
            if abs not in table: continue
            # file has changed in dirstate
            if mfentry:
                handle(hitlist, backuphit)
            elif misslist is not None:
                handle(misslist, backupmiss)
            else:
                if exact: ui.warn(_('file not managed: %s\n' % rel))
            break
        else:
            # file has not changed in dirstate
            if node == parent:
                if exact: ui.warn(_('no changes needed to %s\n' % rel))
                continue
            if pmf is None:
                # only need parent manifest in this unlikely case,
                # so do not read by default
                pmf = repo.manifest.read(repo.changelog.read(parent)[0])
            if abs in pmf:
                if mfentry:
                    # if version of file is same in parent and target
                    # manifests, do nothing
                    if pmf[abs] != mfentry:
                        handle(revert, False)
                else:
                    handle(remove, False)

    if not opts.get('dry_run'):
        repo.dirstate.forget(forget[0])
        r = repo.update(node, False, True, update.has_key, False, wlock=wlock,
                        show_stats=False)
        repo.dirstate.update(add[0], 'a')
        repo.dirstate.update(undelete[0], 'n')
        repo.dirstate.update(remove[0], 'r')
    return r

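The dispatch-table loop in revert() above leans on Python's for/else: the else branch runs only when no row of the table matched, i.e. the file's state is unchanged. A stripped-down sketch of that idiom (the file names and action labels here are invented for illustration, not taken from Mercurial):

```python
# Invented miniature of revert's dispatch table: each row pairs a set of
# file states with (action if in target manifest, action if not).
modified, added, removed = {"a.txt"}, {"new.txt"}, {"gone.txt"}
disptable = (
    (modified, "revert", "remove"),
    (added, "revert", "forget"),
    (removed, "undelete", None),
)

def dispatch(abs, in_target):
    for states, hit, miss in disptable:
        if abs not in states:
            continue
        action = hit if in_target else miss
        break
    else:
        # like revert(): the for/else fires when no table matched,
        # meaning the file has not changed in the dirstate
        action = "unchanged"
    return action
```

The break after the first matching row is what makes the else branch equivalent to "no row matched", mirroring the real loop's structure.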
def rollback(ui, repo):
    """roll back the last transaction in this repository

    Roll back the last transaction in this repository, restoring the
    project to its state prior to the transaction.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository. For example, the following commands are transactional,
    and their effects can be rolled back:

      commit
      import
      pull
      push (with this repository as destination)
      unbundle

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.
    """
    repo.rollback()

def root(ui, repo):
    """print the root (top) of the current working dir

    Print the root directory of the current repository.
    """
    ui.write(repo.root + "\n")

def serve(ui, repo, **opts):
    """export the repository via HTTP

    Start a local HTTP repository browser and pull server.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the "-A" and "-E" options to log to files.
    """

    if opts["stdio"]:
        if repo is None:
            raise hg.RepoError(_('no repo found'))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    optlist = ("name templates style address port ipv6"
               " accesslog errorlog webdir_conf")
    for o in optlist.split():
        if opts[o]:
            ui.setconfig("web", o, opts[o])

    if repo is None and not ui.config("web", "webdir_conf"):
        raise hg.RepoError(_('no repo found'))

    if opts['daemon'] and not opts['daemon_pipefds']:
        rfd, wfd = os.pipe()
        args = sys.argv[:]
        args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
        pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
                         args[0], args)
        os.close(wfd)
        os.read(rfd, 1)
        os._exit(0)

    try:
        httpd = hgweb.server.create_server(ui, repo)
    except socket.error, inst:
        raise util.Abort(_('cannot start server: ') + inst.args[1])

    if ui.verbose:
        addr, port = httpd.socket.getsockname()
        if addr == '0.0.0.0':
            addr = socket.gethostname()
        else:
            try:
                addr = socket.gethostbyaddr(addr)[0]
            except socket.error:
                pass
        if port != 80:
            ui.status(_('listening at http://%s:%d/\n') % (addr, port))
        else:
            ui.status(_('listening at http://%s/\n') % addr)

    if opts['pid_file']:
        fp = open(opts['pid_file'], 'w')
        fp.write(str(os.getpid()) + '\n')
        fp.close()

    if opts['daemon_pipefds']:
        rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
        os.close(rfd)
        os.write(wfd, 'y')
        os.close(wfd)
        sys.stdout.flush()
        sys.stderr.flush()
        fd = os.open(util.nulldev, os.O_RDWR)
        if fd != 0: os.dup2(fd, 0)
        if fd != 1: os.dup2(fd, 1)
        if fd != 2: os.dup2(fd, 2)
        if fd not in (0, 1, 2): os.close(fd)

    httpd.serve_forever()

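The daemonization in serve() above synchronizes parent and child through a pipe: the parent blocks in os.read until the spawned child, once it is ready to serve, writes a single 'y' byte back. A self-contained sketch of that handshake (both ends run in one process here, purely for illustration):

```python
import os

# Parent creates the pipe before spawning; the child inherits both fds.
rfd, wfd = os.pipe()
os.write(wfd, b'y')        # child side: signal "server is up"
os.close(wfd)
ready = os.read(rfd, 1)    # parent side: blocks until the byte arrives
os.close(rfd)
```

Blocking on the read is what lets the foreground process exit only after the daemon has actually started, rather than racing it.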
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show changed files in the repository. If names are
    given, only files that match are shown.

    The codes used to show the status of files are:
     M = modified
     A = added
     R = removed
     ! = deleted, but still tracked
     ? = not tracked
     I = ignored (not shown by default)
    """

    show_ignored = opts['ignored'] and True or False
    files, matchfn, anypats = matchpats(repo, pats, opts)
    cwd = (pats and repo.getcwd()) or ''
    modified, added, removed, deleted, unknown, ignored = [
        [util.pathto(cwd, x) for x in n]
        for n in repo.changes(files=files, match=matchfn,
                              show_ignored=show_ignored)]

    changetypes = [('modified', 'M', modified),
                   ('added', 'A', added),
                   ('removed', 'R', removed),
                   ('deleted', '!', deleted),
                   ('unknown', '?', unknown),
                   ('ignored', 'I', ignored)]

    end = opts['print0'] and '\0' or '\n'

    for opt, char, changes in ([ct for ct in changetypes if opts[ct[0]]]
                               or changetypes):
        if opts['no_status']:
            format = "%%s%s" % end
        else:
            format = "%s %%s%s" % (char, end)

        for f in changes:
            ui.write(format % f)

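status() builds its per-line format with %-escaping: doubling the percent sign keeps a literal %s placeholder for the filename while the terminator (and status character) are substituted immediately. A quick sketch of the two cases:

```python
end = "\n"                              # or "\0" when --print0 is given
no_status = "%%s%s" % end               # "%%s" survives as a literal "%s"
with_status = "%s %%s%s" % ("M", end)   # prefix the status character now
line1 = no_status % "file.txt"
line2 = with_status % "file.txt"
```

The filename is only interpolated in the second pass, so one prebuilt format string serves every file of a given status type.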
def tag(ui, repo, name, rev_=None, **opts):
    """add a tag for the current tip or a given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc.

    If no revision is given, the tip is used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed
    similarly to other project files and can be hand-edited if
    necessary. The file '.hg/localtags' is used for local tags (not
    shared among repositories).
    """
    if name == "tip":
        raise util.Abort(_("the name 'tip' is reserved"))
    if rev_ is not None:
        ui.warn(_("use of 'hg tag NAME [REV]' is deprecated, "
                  "please use 'hg tag [-r REV] NAME' instead\n"))
        if opts['rev']:
            raise util.Abort(_("use only one form to specify the revision"))
    if opts['rev']:
        rev_ = opts['rev']
    if rev_:
        r = hex(repo.lookup(rev_))
    else:
        r = hex(repo.changelog.tip())

    repo.tag(name, r, opts['local'], opts['message'], opts['user'],
             opts['date'])

def tags(ui, repo):
    """list repository tags

    List the repository tags.

    This lists both regular and local tags.
    """

    l = repo.tagslist()
    l.reverse()
    for t, n in l:
        try:
            r = "%5d:%s" % (repo.changelog.rev(n), hex(n))
        except KeyError:
            r = "    ?:?"
        if ui.quiet:
            ui.write("%s\n" % t)
        else:
            ui.write("%-30s %s\n" % (t, r))

def tip(ui, repo, **opts):
    """show the tip revision

    Show the tip revision.
    """
    n = repo.changelog.tip()
    br = None
    if opts['branches']:
        br = repo.branchlookup([n])
    show_changeset(ui, repo, opts).show(changenode=n, brinfo=br)
    if opts['patch']:
        dodiff(ui, ui, repo, repo.changelog.parents(n)[0], n)

def unbundle(ui, repo, fname, **opts):
    """apply a changegroup file

    Apply a compressed changegroup file generated by the bundle
    command.
    """
    f = urllib.urlopen(fname)

    header = f.read(6)
    if not header.startswith("HG"):
        raise util.Abort(_("%s: not a Mercurial bundle file") % fname)
    elif not header.startswith("HG10"):
        raise util.Abort(_("%s: unknown bundle version") % fname)
    elif header == "HG10BZ":
        def generator(f):
            zd = bz2.BZ2Decompressor()
            zd.decompress("BZ")
            for chunk in f:
                yield zd.decompress(chunk)
    elif header == "HG10UN":
        def generator(f):
            for chunk in f:
                yield chunk
    else:
        raise util.Abort(_("%s: unknown bundle compression type")
                         % fname)
    gen = generator(util.filechunkiter(f, 4096))
    modheads = repo.addchangegroup(util.chunkbuffer(gen), 'unbundle')
    return postincoming(ui, repo, modheads, opts['update'])

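The HG10BZ branch of unbundle() above feeds the two bytes "BZ" to the decompressor before the real payload, because reading the six-byte bundle header consumed the bzip2 magic from the stream. A Python 3 sketch of the same re-seeding trick, with a payload fabricated via the bz2 module:

```python
import bz2

def decompress_hg10bz(chunks):
    # Re-inject the stripped "BZ" magic before the payload chunks,
    # exactly as unbundle()'s generator does for HG10BZ bundles.
    zd = bz2.BZ2Decompressor()
    out = [zd.decompress(b"BZ")]
    for chunk in chunks:
        out.append(zd.decompress(chunk))
    return b"".join(out)

data = b"changegroup data"
payload = bz2.compress(data)[2:]  # drop the leading "BZ" magic bytes
chunks = [payload[i:i + 7] for i in range(0, len(payload), 7)]
result = decompress_hg10bz(chunks)
```

BZ2Decompressor accepts arbitrary partial input, which is what makes streaming the remaining chunks through it safe.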
def undo(ui, repo):
    """undo the last commit or pull (DEPRECATED)

    (DEPRECATED)
    This command is now deprecated and will be removed in a future
    release. Please use the rollback command instead. For usage
    instructions, see the rollback command.
    """
    ui.warn(_('(the undo command is deprecated; use rollback instead)\n'))
    repo.rollback()

def update(ui, repo, node=None, merge=False, clean=False, force=None,
           branch=None, **opts):
    """update or merge working directory

    Update the working directory to the specified revision.

    If there are no outstanding changes in the working directory and
    there is a linear relationship between the current version and the
    requested version, the result is the requested version.

    To merge the working directory with another revision, use the
    merge command.

    By default, update will refuse to run if doing so would require
    merging or discarding local changes.
    """
    if merge:
        ui.warn(_('(the -m/--merge option is deprecated; '
                  'use the merge command instead)\n'))
    return doupdate(ui, repo, node, merge, clean, force, branch, **opts)

def doupdate(ui, repo, node=None, merge=False, clean=False, force=None,
             branch=None, **opts):
    if branch:
        br = repo.branchlookup(branch=branch)
        found = []
        for x in br:
            if branch in br[x]:
                found.append(x)
        if len(found) > 1:
            ui.warn(_("Found multiple heads for %s\n") % branch)
            for x in found:
                show_changeset(ui, repo, opts).show(changenode=x, brinfo=br)
            return 1
        if len(found) == 1:
            node = found[0]
            ui.warn(_("Using head %s for branch %s\n") % (short(node), branch))
        else:
            ui.warn(_("branch %s not found\n") % (branch))
            return 1
    else:
        node = node and repo.lookup(node) or repo.changelog.tip()
    return repo.update(node, allow=merge, force=clean, forcemerge=force)

def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.
    """
    return repo.verify()

2801 2805 # Command options and aliases are listed here, alphabetically
2802 2806
2803 2807 table = {
2804 2808 "^add":
2805 2809 (add,
2806 2810 [('I', 'include', [], _('include names matching the given patterns')),
2807 2811 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2808 2812 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2809 2813 _('hg add [OPTION]... [FILE]...')),
2810 2814 "debugaddremove|addremove":
2811 2815 (addremove,
2812 2816 [('I', 'include', [], _('include names matching the given patterns')),
2813 2817 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2814 2818 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2815 2819 _('hg addremove [OPTION]... [FILE]...')),
2816 2820 "^annotate":
2817 2821 (annotate,
2818 2822 [('r', 'rev', '', _('annotate the specified revision')),
2819 2823 ('a', 'text', None, _('treat all files as text')),
2820 2824 ('u', 'user', None, _('list the author')),
2821 2825 ('d', 'date', None, _('list the date')),
2822 2826 ('n', 'number', None, _('list the revision number (default)')),
2823 2827 ('c', 'changeset', None, _('list the changeset')),
2824 2828 ('I', 'include', [], _('include names matching the given patterns')),
2825 2829 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2826 2830 _('hg annotate [-r REV] [-a] [-u] [-d] [-n] [-c] FILE...')),
2827 2831 "archive":
2828 2832 (archive,
2829 2833 [('', 'no-decode', None, _('do not pass files through decoders')),
2830 2834 ('p', 'prefix', '', _('directory prefix for files in archive')),
2831 2835 ('r', 'rev', '', _('revision to distribute')),
2832 2836 ('t', 'type', '', _('type of distribution to create')),
2833 2837 ('I', 'include', [], _('include names matching the given patterns')),
2834 2838 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2835 2839 _('hg archive [OPTION]... DEST')),
2836 2840 "backout":
2837 2841 (backout,
2838 2842 [('', 'merge', None,
2839 2843 _('merge with old dirstate parent after backout')),
2840 2844 ('m', 'message', '', _('use <text> as commit message')),
2841 2845 ('l', 'logfile', '', _('read commit message from <file>')),
2842 2846 ('d', 'date', '', _('record datecode as commit date')),
2843 2847 ('', 'parent', '', _('parent to choose when backing out merge')),
2844 2848 ('u', 'user', '', _('record user as committer')),
2845 2849 ('I', 'include', [], _('include names matching the given patterns')),
2846 2850 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2847 2851 _('hg backout [OPTION]... REV')),
2848 2852 "bundle":
2849 2853 (bundle,
2850 2854 [('f', 'force', None,
2851 2855 _('run even when remote repository is unrelated'))],
2852 2856 _('hg bundle FILE DEST')),
2853 2857 "cat":
2854 2858 (cat,
2855 2859 [('o', 'output', '', _('print output to file with formatted name')),
2856 2860 ('r', 'rev', '', _('print the given revision')),
2857 2861 ('I', 'include', [], _('include names matching the given patterns')),
2858 2862 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2859 2863 _('hg cat [OPTION]... FILE...')),
2860 2864 "^clone":
2861 2865 (clone,
2862 2866 [('U', 'noupdate', None, _('do not update the new working directory')),
2863 2867 ('r', 'rev', [],
2864 2868 _('a changeset you would like to have after cloning')),
2865 2869 ('', 'pull', None, _('use pull protocol to copy metadata')),
2866 ('', 'stream', None, _('use streaming protocol (fast over LAN)')),
2870 ('', 'uncompressed', None,
2871 _('use uncompressed transfer (fast over LAN)')),
2867 2872 ('e', 'ssh', '', _('specify ssh command to use')),
2868 2873 ('', 'remotecmd', '',
2869 2874 _('specify hg command to run on the remote side'))],
2870 2875 _('hg clone [OPTION]... SOURCE [DEST]')),
2871 2876 "^commit|ci":
2872 2877 (commit,
2873 2878 [('A', 'addremove', None,
2874 2879 _('mark new/missing files as added/removed before committing')),
2875 2880 ('m', 'message', '', _('use <text> as commit message')),
2876 2881 ('l', 'logfile', '', _('read the commit message from <file>')),
2877 2882 ('d', 'date', '', _('record datecode as commit date')),
2878 2883 ('u', 'user', '', _('record user as committer')),
2879 2884 ('I', 'include', [], _('include names matching the given patterns')),
2880 2885 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2881 2886 _('hg commit [OPTION]... [FILE]...')),
2882 2887 "copy|cp":
2883 2888 (copy,
2884 2889 [('A', 'after', None, _('record a copy that has already occurred')),
2885 2890 ('f', 'force', None,
2886 2891 _('forcibly copy over an existing managed file')),
2887 2892 ('I', 'include', [], _('include names matching the given patterns')),
2888 2893 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2889 2894 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2890 2895 _('hg copy [OPTION]... [SOURCE]... DEST')),
2891 2896 "debugancestor": (debugancestor, [], _('debugancestor INDEX REV1 REV2')),
2892 2897 "debugcomplete":
2893 2898 (debugcomplete,
2894 2899 [('o', 'options', None, _('show the command options'))],
2895 2900 _('debugcomplete [-o] CMD')),
2896 2901 "debugrebuildstate":
2897 2902 (debugrebuildstate,
2898 2903 [('r', 'rev', '', _('revision to rebuild to'))],
2899 2904 _('debugrebuildstate [-r REV] [REV]')),
2900 2905 "debugcheckstate": (debugcheckstate, [], _('debugcheckstate')),
2901 2906 "debugconfig": (debugconfig, [], _('debugconfig [NAME]...')),
2902 2907 "debugsetparents": (debugsetparents, [], _('debugsetparents REV1 [REV2]')),
2903 2908 "debugstate": (debugstate, [], _('debugstate')),
2904 2909 "debugdata": (debugdata, [], _('debugdata FILE REV')),
2905 2910 "debugindex": (debugindex, [], _('debugindex FILE')),
2906 2911 "debugindexdot": (debugindexdot, [], _('debugindexdot FILE')),
2907 2912 "debugrename": (debugrename, [], _('debugrename FILE [REV]')),
2908 2913 "debugwalk":
2909 2914 (debugwalk,
2910 2915 [('I', 'include', [], _('include names matching the given patterns')),
2911 2916 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2912 2917 _('debugwalk [OPTION]... [FILE]...')),
2913 2918 "^diff":
2914 2919 (diff,
2915 2920 [('r', 'rev', [], _('revision')),
2916 2921 ('a', 'text', None, _('treat all files as text')),
2917 2922 ('p', 'show-function', None,
2918 2923 _('show which function each change is in')),
2919 2924 ('w', 'ignore-all-space', None,
2920 2925 _('ignore white space when comparing lines')),
2921 2926 ('b', 'ignore-space-change', None,
2922 2927 _('ignore changes in the amount of white space')),
2923 2928 ('B', 'ignore-blank-lines', None,
2924 2929 _('ignore changes whose lines are all blank')),
2925 2930 ('I', 'include', [], _('include names matching the given patterns')),
2926 2931 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2927 2932 _('hg diff [-a] [-I] [-X] [-r REV1 [-r REV2]] [FILE]...')),
2928 2933 "^export":
2929 2934 (export,
2930 2935 [('o', 'output', '', _('print output to file with formatted name')),
2931 2936 ('a', 'text', None, _('treat all files as text')),
2932 2937 ('', 'switch-parent', None, _('diff against the second parent'))],
2933 2938 _('hg export [-a] [-o OUTFILESPEC] REV...')),
2934 2939 "debugforget|forget":
2935 2940 (forget,
2936 2941 [('I', 'include', [], _('include names matching the given patterns')),
2937 2942 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2938 2943 _('hg forget [OPTION]... FILE...')),
2939 2944 "grep":
2940 2945 (grep,
2941 2946 [('0', 'print0', None, _('end fields with NUL')),
2942 2947 ('', 'all', None, _('print all revisions that match')),
2943 2948 ('i', 'ignore-case', None, _('ignore case when matching')),
2944 2949 ('l', 'files-with-matches', None,
2945 2950 _('print only filenames and revs that match')),
2946 2951 ('n', 'line-number', None, _('print matching line numbers')),
2947 2952 ('r', 'rev', [], _('search in given revision range')),
2948 2953 ('u', 'user', None, _('print user who committed change')),
2949 2954 ('I', 'include', [], _('include names matching the given patterns')),
2950 2955 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2951 2956 _('hg grep [OPTION]... PATTERN [FILE]...')),
2952 2957 "heads":
2953 2958 (heads,
2954 2959 [('b', 'branches', None, _('show branches')),
2955 2960 ('', 'style', '', _('display using template map file')),
2956 2961 ('r', 'rev', '', _('show only heads which are descendants of rev')),
2957 2962 ('', 'template', '', _('display with template'))],
2958 2963 _('hg heads [-b] [-r <rev>]')),
2959 2964 "help": (help_, [], _('hg help [COMMAND]')),
2960 2965 "identify|id": (identify, [], _('hg identify')),
2961 2966 "import|patch":
2962 2967 (import_,
2963 2968 [('p', 'strip', 1,
2964 2969 _('directory strip option for patch. This has the same\n'
2965 2970 'meaning as the corresponding patch option')),
2966 2971 ('m', 'message', '', _('use <text> as commit message')),
2967 2972 ('b', 'base', '', _('base path')),
2968 2973 ('f', 'force', None,
2969 2974 _('skip check for outstanding uncommitted changes'))],
2970 2975 _('hg import [-p NUM] [-b BASE] [-m MESSAGE] [-f] PATCH...')),
2971 2976 "incoming|in": (incoming,
2972 2977 [('M', 'no-merges', None, _('do not show merges')),
2973 2978 ('f', 'force', None,
2974 2979 _('run even when remote repository is unrelated')),
2975 2980 ('', 'style', '', _('display using template map file')),
2976 2981 ('n', 'newest-first', None, _('show newest record first')),
2977 2982 ('', 'bundle', '', _('file to store the bundles into')),
2978 2983 ('p', 'patch', None, _('show patch')),
2979 2984 ('r', 'rev', [], _('a specific revision you would like to pull')),
2980 2985 ('', 'template', '', _('display with template')),
2981 2986 ('e', 'ssh', '', _('specify ssh command to use')),
2982 2987 ('', 'remotecmd', '',
2983 2988 _('specify hg command to run on the remote side'))],
2984 2989 _('hg incoming [-p] [-n] [-M] [-r REV]...'
2985 2990 ' [--bundle FILENAME] [SOURCE]')),
2986 2991 "^init":
2987 2992 (init,
2988 2993 [('e', 'ssh', '', _('specify ssh command to use')),
2989 2994 ('', 'remotecmd', '',
2990 2995 _('specify hg command to run on the remote side'))],
2991 2996 _('hg init [-e FILE] [--remotecmd FILE] [DEST]')),
2992 2997 "locate":
2993 2998 (locate,
2994 2999 [('r', 'rev', '', _('search the repository as it stood at rev')),
2995 3000 ('0', 'print0', None,
2996 3001 _('end filenames with NUL, for use with xargs')),
2997 3002 ('f', 'fullpath', None,
2998 3003 _('print complete paths from the filesystem root')),
2999 3004 ('I', 'include', [], _('include names matching the given patterns')),
3000 3005 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3001 3006 _('hg locate [OPTION]... [PATTERN]...')),
3002 3007 "^log|history":
3003 3008 (log,
3004 3009 [('b', 'branches', None, _('show branches')),
3005 3010 ('k', 'keyword', [], _('search for a keyword')),
3006 3011 ('l', 'limit', '', _('limit number of changes displayed')),
3007 3012 ('r', 'rev', [], _('show the specified revision or range')),
3008 3013 ('M', 'no-merges', None, _('do not show merges')),
3009 3014 ('', 'style', '', _('display using template map file')),
3010 3015 ('m', 'only-merges', None, _('show only merges')),
3011 3016 ('p', 'patch', None, _('show patch')),
3012 3017 ('', 'template', '', _('display with template')),
3013 3018 ('I', 'include', [], _('include names matching the given patterns')),
3014 3019 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3015 3020 _('hg log [OPTION]... [FILE]')),
3016 3021 "manifest": (manifest, [], _('hg manifest [REV]')),
3017 3022 "merge":
3018 3023 (merge,
3019 3024 [('b', 'branch', '', _('merge with head of a specific branch')),
3020 3025 ('f', 'force', None, _('force a merge with outstanding changes'))],
3021 3026 _('hg merge [-b TAG] [-f] [REV]')),
3022 3027 "outgoing|out": (outgoing,
3023 3028 [('M', 'no-merges', None, _('do not show merges')),
3024 3029 ('f', 'force', None,
3025 3030 _('run even when remote repository is unrelated')),
3026 3031 ('p', 'patch', None, _('show patch')),
3027 3032 ('', 'style', '', _('display using template map file')),
3028 3033 ('r', 'rev', [], _('a specific revision you would like to push')),
3029 3034 ('n', 'newest-first', None, _('show newest record first')),
3030 3035 ('', 'template', '', _('display with template')),
3031 3036 ('e', 'ssh', '', _('specify ssh command to use')),
3032 3037 ('', 'remotecmd', '',
3033 3038 _('specify hg command to run on the remote side'))],
3034 3039 _('hg outgoing [-M] [-p] [-n] [-r REV]... [DEST]')),
3035 3040 "^parents":
3036 3041 (parents,
3037 3042 [('b', 'branches', None, _('show branches')),
3038 3043 ('', 'style', '', _('display using template map file')),
3039 3044 ('', 'template', '', _('display with template'))],
3040 3045 _('hg parents [-b] [REV]')),
3041 3046 "paths": (paths, [], _('hg paths [NAME]')),
3042 3047 "^pull":
3043 3048 (pull,
3044 3049 [('u', 'update', None,
3045 3050 _('update the working directory to tip after pull')),
3046 3051 ('e', 'ssh', '', _('specify ssh command to use')),
3047 3052 ('f', 'force', None,
3048 3053 _('run even when remote repository is unrelated')),
3049 3054 ('r', 'rev', [], _('a specific revision you would like to pull')),
3050 3055 ('', 'remotecmd', '',
3051 3056 _('specify hg command to run on the remote side'))],
3052 3057 _('hg pull [-u] [-r REV]... [-e FILE] [--remotecmd FILE] [SOURCE]')),
3053 3058 "^push":
3054 3059 (push,
3055 3060 [('f', 'force', None, _('force push')),
3056 3061 ('e', 'ssh', '', _('specify ssh command to use')),
3057 3062 ('r', 'rev', [], _('a specific revision you would like to push')),
3058 3063 ('', 'remotecmd', '',
3059 3064 _('specify hg command to run on the remote side'))],
3060 3065 _('hg push [-f] [-r REV]... [-e FILE] [--remotecmd FILE] [DEST]')),
3061 3066 "debugrawcommit|rawcommit":
3062 3067 (rawcommit,
3063 3068 [('p', 'parent', [], _('parent')),
3064 3069 ('d', 'date', '', _('date code')),
3065 3070 ('u', 'user', '', _('user')),
3066 3071 ('F', 'files', '', _('file list')),
3067 3072 ('m', 'message', '', _('commit message')),
3068 3073 ('l', 'logfile', '', _('commit message file'))],
3069 3074 _('hg debugrawcommit [OPTION]... [FILE]...')),
3070 3075 "recover": (recover, [], _('hg recover')),
3071 3076 "^remove|rm":
3072 3077 (remove,
3073 3078 [('A', 'after', None, _('record remove that has already occurred')),
3074 3079 ('f', 'force', None, _('remove file even if modified')),
3075 3080 ('I', 'include', [], _('include names matching the given patterns')),
3076 3081 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3077 3082 _('hg remove [OPTION]... FILE...')),
3078 3083 "rename|mv":
3079 3084 (rename,
3080 3085 [('A', 'after', None, _('record a rename that has already occurred')),
3081 3086 ('f', 'force', None,
3082 3087 _('forcibly copy over an existing managed file')),
3083 3088 ('I', 'include', [], _('include names matching the given patterns')),
3084 3089 ('X', 'exclude', [], _('exclude names matching the given patterns')),
3085 3090 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3086 3091 _('hg rename [OPTION]... SOURCE... DEST')),
3087 3092 "^revert":
3088 3093 (revert,
3089 3094 [('r', 'rev', '', _('revision to revert to')),
3090 3095 ('', 'no-backup', None, _('do not save backup copies of files')),
3091 3096 ('I', 'include', [], _('include names matching given patterns')),
3092 3097 ('X', 'exclude', [], _('exclude names matching given patterns')),
3093 3098 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3094 3099 _('hg revert [-r REV] [NAME]...')),
3095 3100 "rollback": (rollback, [], _('hg rollback')),
3096 3101 "root": (root, [], _('hg root')),
3097 3102 "^serve":
3098 3103 (serve,
3099 3104 [('A', 'accesslog', '', _('name of access log file to write to')),
3100 3105 ('d', 'daemon', None, _('run server in background')),
3101 3106 ('', 'daemon-pipefds', '', _('used internally by daemon mode')),
3102 3107 ('E', 'errorlog', '', _('name of error log file to write to')),
3103 3108 ('p', 'port', 0, _('port to use (default: 8000)')),
3104 3109 ('a', 'address', '', _('address to use')),
3105 3110 ('n', 'name', '',
3106 3111 _('name to show in web pages (default: working dir)')),
3107 3112 ('', 'webdir-conf', '', _('name of the webdir config file'
3108 3113 ' (serve more than one repo)')),
3109 3114 ('', 'pid-file', '', _('name of file to write process ID to')),
3110 3115 ('', 'stdio', None, _('for remote clients')),
3111 3116 ('t', 'templates', '', _('web templates to use')),
3112 3117 ('', 'style', '', _('template style to use')),
3113 3118 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4'))],
3114 3119 _('hg serve [OPTION]...')),
3115 3120 "^status|st":
3116 3121 (status,
3117 3122 [('m', 'modified', None, _('show only modified files')),
3118 3123 ('a', 'added', None, _('show only added files')),
3119 3124 ('r', 'removed', None, _('show only removed files')),
3120 3125 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
3121 3126 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
3122 3127 ('i', 'ignored', None, _('show ignored files')),
3123 3128 ('n', 'no-status', None, _('hide status prefix')),
3124 3129 ('0', 'print0', None,
3125 3130 _('end filenames with NUL, for use with xargs')),
3126 3131 ('I', 'include', [], _('include names matching the given patterns')),
3127 3132 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3128 3133 _('hg status [OPTION]... [FILE]...')),
3129 3134 "tag":
3130 3135 (tag,
3131 3136 [('l', 'local', None, _('make the tag local')),
3132 3137 ('m', 'message', '', _('message for tag commit log entry')),
3133 3138 ('d', 'date', '', _('record datecode as commit date')),
3134 3139 ('u', 'user', '', _('record user as committer')),
3135 3140 ('r', 'rev', '', _('revision to tag'))],
3136 3141 _('hg tag [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME')),
3137 3142 "tags": (tags, [], _('hg tags')),
3138 3143 "tip":
3139 3144 (tip,
3140 3145 [('b', 'branches', None, _('show branches')),
3141 3146 ('', 'style', '', _('display using template map file')),
3142 3147 ('p', 'patch', None, _('show patch')),
3143 3148 ('', 'template', '', _('display with template'))],
3144 3149 _('hg tip [-b] [-p]')),
3145 3150 "unbundle":
3146 3151 (unbundle,
3147 3152 [('u', 'update', None,
3148 3153 _('update the working directory to tip after unbundle'))],
3149 3154 _('hg unbundle [-u] FILE')),
3150 3155 "debugundo|undo": (undo, [], _('hg undo')),
3151 3156 "^update|up|checkout|co":
3152 3157 (update,
3153 3158 [('b', 'branch', '', _('checkout the head of a specific branch')),
3154 3159 ('m', 'merge', None, _('allow merging of branches (DEPRECATED)')),
3155 3160 ('C', 'clean', None, _('overwrite locally modified files')),
3156 3161 ('f', 'force', None, _('force a merge with outstanding changes'))],
3157 3162 _('hg update [-b TAG] [-m] [-C] [-f] [REV]')),
3158 3163 "verify": (verify, [], _('hg verify')),
3159 3164 "version": (show_version, [], _('hg version')),
3160 3165 }
3161 3166
3162 3167 globalopts = [
3163 3168 ('R', 'repository', '',
3164 3169 _('repository root directory or symbolic path name')),
3165 3170 ('', 'cwd', '', _('change working directory')),
3166 3171 ('y', 'noninteractive', None,
3167 3172 _('do not prompt, assume \'yes\' for any required answers')),
3168 3173 ('q', 'quiet', None, _('suppress output')),
3169 3174 ('v', 'verbose', None, _('enable additional output')),
3170 3175 ('', 'config', [], _('set/override config option')),
3171 3176 ('', 'debug', None, _('enable debugging output')),
3172 3177 ('', 'debugger', None, _('start debugger')),
3173 3178 ('', 'lsprof', None, _('print improved command execution profile')),
3174 3179 ('', 'traceback', None, _('print traceback on exception')),
3175 3180 ('', 'time', None, _('time how long the command takes')),
3176 3181 ('', 'profile', None, _('print command execution profile')),
3177 3182 ('', 'version', None, _('output version information and exit')),
3178 3183 ('h', 'help', None, _('display help and exit')),
3179 3184 ]
3180 3185
3181 3186 norepo = ("clone init version help debugancestor debugcomplete debugdata"
3182 3187 " debugindex debugindexdot")
3183 3188 optionalrepo = ("paths serve debugconfig")
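The `norepo` and `optionalrepo` strings above are space-separated lists that `dispatch()` later checks with `cmd not in norepo.split()`. A minimal standalone sketch of that check (note the implicit string concatenation supplies the separating space):

```python
# Sketch of the no-repo membership test used by dispatch(): commands in
# norepo run without a repository object; commands in optionalrepo get
# one only if it can be found.
norepo = ("clone init version help debugancestor debugcomplete debugdata"
          " debugindex debugindexdot")
optionalrepo = "paths serve debugconfig"

assert "init" in norepo.split()        # runs without a repo
assert "commit" not in norepo.split()  # requires a repo
assert "serve" in optionalrepo.split() # repo is optional
```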
3184 3189
3185 3190 def findpossible(cmd):
3186 3191 """
3187 3192 Return cmd -> (aliases, command table entry)
3188 3193 for each matching command.
3189 3194 Return debug commands (or their aliases) only if no normal command matches.
3190 3195 """
3191 3196 choice = {}
3192 3197 debugchoice = {}
3193 3198 for e in table.keys():
3194 3199 aliases = e.lstrip("^").split("|")
3195 3200 found = None
3196 3201 if cmd in aliases:
3197 3202 found = cmd
3198 3203 else:
3199 3204 for a in aliases:
3200 3205 if a.startswith(cmd):
3201 3206 found = a
3202 3207 break
3203 3208 if found is not None:
3204 3209 if aliases[0].startswith("debug"):
3205 3210 debugchoice[found] = (aliases, table[e])
3206 3211 else:
3207 3212 choice[found] = (aliases, table[e])
3208 3213
3209 3214 if not choice and debugchoice:
3210 3215 choice = debugchoice
3211 3216
3212 3217 return choice
3213 3218
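The matching rule in `findpossible`/`findcmd` can be condensed into a standalone sketch: an exact alias match wins, otherwise the first alias starting with the typed prefix matches, and "debug" commands are only offered when no normal command matches. The mini command table below is hypothetical, not Mercurial's real table:

```python
def prefixmatch(cmd, table):
    # choice holds normal-command matches, debugchoice holds matches
    # whose primary alias starts with "debug"; debug commands are only
    # returned when nothing else matched.
    choice, debugchoice = {}, {}
    for entry in table:
        aliases = entry.lstrip("^").split("|")
        found = None
        if cmd in aliases:
            found = cmd            # exact alias match
        else:
            for a in aliases:
                if a.startswith(cmd):
                    found = a      # first prefix match per entry
                    break
        if found is not None:
            if aliases[0].startswith("debug"):
                debugchoice[found] = aliases
            else:
                choice[found] = aliases
    return choice or debugchoice

table = ["^commit|ci", "copy|cp", "debugcomplete"]      # hypothetical
assert sorted(prefixmatch("co", table)) == ["commit", "copy"]  # ambiguous
assert list(prefixmatch("ci", table)) == ["ci"]                # exact alias
assert list(prefixmatch("debugc", table)) == ["debugcomplete"] # debug fallback
```

`findcmd` then turns an ambiguous result (more than one key) into `AmbiguousCommand` and an empty one into `UnknownCommand`.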
3214 3219 def findcmd(cmd):
3215 3220 """Return (aliases, command table entry) for command string."""
3216 3221 choice = findpossible(cmd)
3217 3222
3218 3223 if choice.has_key(cmd):
3219 3224 return choice[cmd]
3220 3225
3221 3226 if len(choice) > 1:
3222 3227 clist = choice.keys()
3223 3228 clist.sort()
3224 3229 raise AmbiguousCommand(cmd, clist)
3225 3230
3226 3231 if choice:
3227 3232 return choice.values()[0]
3228 3233
3229 3234 raise UnknownCommand(cmd)
3230 3235
3231 3236 def catchterm(*args):
3232 3237 raise util.SignalInterrupt
3233 3238
3234 3239 def run():
3235 3240 sys.exit(dispatch(sys.argv[1:]))
3236 3241
3237 3242 class ParseError(Exception):
3238 3243 """Exception raised on errors in parsing the command line."""
3239 3244
3240 3245 def parse(ui, args):
3241 3246 options = {}
3242 3247 cmdoptions = {}
3243 3248
3244 3249 try:
3245 3250 args = fancyopts.fancyopts(args, globalopts, options)
3246 3251 except fancyopts.getopt.GetoptError, inst:
3247 3252 raise ParseError(None, inst)
3248 3253
3249 3254 if args:
3250 3255 cmd, args = args[0], args[1:]
3251 3256 aliases, i = findcmd(cmd)
3252 3257 cmd = aliases[0]
3253 3258 defaults = ui.config("defaults", cmd)
3254 3259 if defaults:
3255 3260 args = defaults.split() + args
3256 3261 c = list(i[1])
3257 3262 else:
3258 3263 cmd = None
3259 3264 c = []
3260 3265
3261 3266 # combine global options into local
3262 3267 for o in globalopts:
3263 3268 c.append((o[0], o[1], options[o[1]], o[3]))
3264 3269
3265 3270 try:
3266 3271 args = fancyopts.fancyopts(args, c, cmdoptions)
3267 3272 except fancyopts.getopt.GetoptError, inst:
3268 3273 raise ParseError(cmd, inst)
3269 3274
3270 3275 # separate global options back out
3271 3276 for o in globalopts:
3272 3277 n = o[1]
3273 3278 options[n] = cmdoptions[n]
3274 3279 del cmdoptions[n]
3275 3280
3276 3281 return (cmd, cmd and i[0] or None, args, options, cmdoptions)
3277 3282
3278 3283 external = {}
3279 3284
3280 3285 def findext(name):
3281 3286 '''return module with given extension name'''
3282 3287 try:
3283 3288 return sys.modules[external[name]]
3284 3289 except KeyError:
3285 3290 dotname = '.' + name
3286 3291 for k, v in external.iteritems():
3287 3292 if k.endswith('.' + name) or v == name:
3288 3293 return sys.modules[v]
3289 3294 raise KeyError(name)
3290 3295
3291 3296 def dispatch(args):
3292 3297 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
3293 3298 num = getattr(signal, name, None)
3294 3299 if num: signal.signal(num, catchterm)
3295 3300
3296 3301 try:
3297 3302 u = ui.ui(traceback='--traceback' in sys.argv[1:])
3298 3303 except util.Abort, inst:
3299 3304 sys.stderr.write(_("abort: %s\n") % inst)
3300 3305 return -1
3301 3306
3302 3307 for ext_name, load_from_name in u.extensions():
3303 3308 try:
3304 3309 if load_from_name:
3305 3310 # the module will be loaded in sys.modules
3306 3311 # choose a unique name so that it doesn't
3307 3312 # conflict with other modules
3308 3313 module_name = "hgext_%s" % ext_name.replace('.', '_')
3309 3314 mod = imp.load_source(module_name, load_from_name)
3310 3315 else:
3311 3316 def importh(name):
3312 3317 mod = __import__(name)
3313 3318 components = name.split('.')
3314 3319 for comp in components[1:]:
3315 3320 mod = getattr(mod, comp)
3316 3321 return mod
3317 3322 try:
3318 3323 mod = importh("hgext.%s" % ext_name)
3319 3324 except ImportError:
3320 3325 mod = importh(ext_name)
3321 3326 external[ext_name] = mod.__name__
3322 3327 except (util.SignalInterrupt, KeyboardInterrupt):
3323 3328 raise
3324 3329 except Exception, inst:
3325 u.warn(_("*** failed to import extension %s: %s\n") % (x[0], inst))
3330 u.warn(_("*** failed to import extension %s: %s\n") % (ext_name, inst))
3326 3331 if u.print_exc():
3327 3332 return 1
3328 3333
3329 3334 for name in external.itervalues():
3330 3335 mod = sys.modules[name]
3331 3336 uisetup = getattr(mod, 'uisetup', None)
3332 3337 if uisetup:
3333 3338 uisetup(u)
3334 3339 cmdtable = getattr(mod, 'cmdtable', {})
3335 3340 for t in cmdtable:
3336 3341 if t in table:
3337 3342 u.warn(_("module %s overrides %s\n") % (name, t))
3338 3343 table.update(cmdtable)
3339 3344
3340 3345 try:
3341 3346 cmd, func, args, options, cmdoptions = parse(u, args)
3342 3347 if options["time"]:
3343 3348 def get_times():
3344 3349 t = os.times()
3345 3350 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
3346 3351 t = (t[0], t[1], t[2], t[3], time.clock())
3347 3352 return t
3348 3353 s = get_times()
3349 3354 def print_time():
3350 3355 t = get_times()
3351 3356 u.warn(_("Time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
3352 3357 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
3353 3358 atexit.register(print_time)
3354 3359
3355 3360 u.updateopts(options["verbose"], options["debug"], options["quiet"],
3356 3361 not options["noninteractive"], options["traceback"],
3357 3362 options["config"])
3358 3363
3359 3364 # enter the debugger before command execution
3360 3365 if options['debugger']:
3361 3366 pdb.set_trace()
3362 3367
3363 3368 try:
3364 3369 if options['cwd']:
3365 3370 try:
3366 3371 os.chdir(options['cwd'])
3367 3372 except OSError, inst:
3368 3373 raise util.Abort('%s: %s' %
3369 3374 (options['cwd'], inst.strerror))
3370 3375
3371 3376 path = u.expandpath(options["repository"]) or ""
3372 3377 repo = path and hg.repository(u, path=path) or None
3373 3378
3374 3379 if options['help']:
3375 3380 return help_(u, cmd, options['version'])
3376 3381 elif options['version']:
3377 3382 return show_version(u)
3378 3383 elif not cmd:
3379 3384 return help_(u, 'shortlist')
3380 3385
3381 3386 if cmd not in norepo.split():
3382 3387 try:
3383 3388 if not repo:
3384 3389 repo = hg.repository(u, path=path)
3385 3390 u = repo.ui
3386 3391 for name in external.itervalues():
3387 3392 mod = sys.modules[name]
3388 3393 if hasattr(mod, 'reposetup'):
3389 3394 mod.reposetup(u, repo)
3390 3395 except hg.RepoError:
3391 3396 if cmd not in optionalrepo.split():
3392 3397 raise
3393 3398 d = lambda: func(u, repo, *args, **cmdoptions)
3394 3399 else:
3395 3400 d = lambda: func(u, *args, **cmdoptions)
3396 3401
3397 3402 try:
3398 3403 if options['profile']:
3399 3404 import hotshot, hotshot.stats
3400 3405 prof = hotshot.Profile("hg.prof")
3401 3406 try:
3402 3407 try:
3403 3408 return prof.runcall(d)
3404 3409 except:
3405 3410 try:
3406 3411 u.warn(_('exception raised - generating '
3407 3412 'profile anyway\n'))
3408 3413 except:
3409 3414 pass
3410 3415 raise
3411 3416 finally:
3412 3417 prof.close()
3413 3418 stats = hotshot.stats.load("hg.prof")
3414 3419 stats.strip_dirs()
3415 3420 stats.sort_stats('time', 'calls')
3416 3421 stats.print_stats(40)
3417 3422 elif options['lsprof']:
3418 3423 try:
3419 3424 from mercurial import lsprof
3420 3425 except ImportError:
3421 3426 raise util.Abort(_(
3422 3427 'lsprof not available - install from '
3423 3428 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
3424 3429 p = lsprof.Profiler()
3425 3430 p.enable(subcalls=True)
3426 3431 try:
3427 3432 return d()
3428 3433 finally:
3429 3434 p.disable()
3430 3435 stats = lsprof.Stats(p.getstats())
3431 3436 stats.sort()
3432 3437 stats.pprint(top=10, file=sys.stderr, climit=5)
3433 3438 else:
3434 3439 return d()
3435 3440 finally:
3436 3441 u.flush()
3437 3442 except:
3438 3443 # enter the debugger when we hit an exception
3439 3444 if options['debugger']:
3440 3445 pdb.post_mortem(sys.exc_info()[2])
3441 3446 u.print_exc()
3442 3447 raise
3443 3448 except ParseError, inst:
3444 3449 if inst.args[0]:
3445 3450 u.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
3446 3451 help_(u, inst.args[0])
3447 3452 else:
3448 3453 u.warn(_("hg: %s\n") % inst.args[1])
3449 3454 help_(u, 'shortlist')
3450 3455 except AmbiguousCommand, inst:
3451 3456 u.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
3452 3457 (inst.args[0], " ".join(inst.args[1])))
3453 3458 except UnknownCommand, inst:
3454 3459 u.warn(_("hg: unknown command '%s'\n") % inst.args[0])
3455 3460 help_(u, 'shortlist')
3456 3461 except hg.RepoError, inst:
3457 3462 u.warn(_("abort: %s!\n") % inst)
3458 3463 except lock.LockHeld, inst:
3459 3464 if inst.errno == errno.ETIMEDOUT:
3460 3465 reason = _('timed out waiting for lock held by %s') % inst.locker
3461 3466 else:
3462 3467 reason = _('lock held by %s') % inst.locker
3463 3468 u.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
3464 3469 except lock.LockUnavailable, inst:
3465 3470 u.warn(_("abort: could not lock %s: %s\n") %
3466 3471 (inst.desc or inst.filename, inst.strerror))
3467 3472 except revlog.RevlogError, inst:
3468 3473 u.warn(_("abort: "), inst, "!\n")
3469 3474 except util.SignalInterrupt:
3470 3475 u.warn(_("killed!\n"))
3471 3476 except KeyboardInterrupt:
3472 3477 try:
3473 3478 u.warn(_("interrupted!\n"))
3474 3479 except IOError, inst:
3475 3480 if inst.errno == errno.EPIPE:
3476 3481 if u.debugflag:
3477 3482 u.warn(_("\nbroken pipe\n"))
3478 3483 else:
3479 3484 raise
3480 3485 except IOError, inst:
3481 3486 if hasattr(inst, "code"):
3482 3487 u.warn(_("abort: %s\n") % inst)
3483 3488 elif hasattr(inst, "reason"):
3484 3489 u.warn(_("abort: error: %s\n") % inst.reason[1])
3485 3490 elif hasattr(inst, "args") and inst[0] == errno.EPIPE:
3486 3491 if u.debugflag:
3487 3492 u.warn(_("broken pipe\n"))
3488 3493 elif getattr(inst, "strerror", None):
3489 3494 if getattr(inst, "filename", None):
3490 3495 u.warn(_("abort: %s - %s\n") % (inst.strerror, inst.filename))
3491 3496 else:
3492 3497 u.warn(_("abort: %s\n") % inst.strerror)
3493 3498 else:
3494 3499 raise
3495 3500 except OSError, inst:
3496 3501 if hasattr(inst, "filename"):
3497 3502 u.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
3498 3503 else:
3499 3504 u.warn(_("abort: %s\n") % inst.strerror)
3500 3505 except util.Abort, inst:
3501 3506 u.warn(_('abort: '), inst.args[0] % inst.args[1:], '\n')
3502 3507 except TypeError, inst:
3503 3508 # was this an argument error?
3504 3509 tb = traceback.extract_tb(sys.exc_info()[2])
3505 3510 if len(tb) > 2: # no
3506 3511 raise
3507 3512 u.debug(inst, "\n")
3508 3513 u.warn(_("%s: invalid arguments\n") % cmd)
3509 3514 help_(u, cmd)
3510 3515 except SystemExit, inst:
3511 3516 # Commands shouldn't sys.exit directly, but give a return code.
3512 3517 # Just in case, catch this and pass the exit code to the caller.
3513 3518 return inst.code
3514 3519 except:
3515 3520 u.warn(_("** unknown exception encountered, details follow\n"))
3516 u.warn(_("** report bug details to mercurial@selenic.com\n"))
3521 u.warn(_("** report bug details to "
3522 "http://www.selenic.com/mercurial/bts\n"))
3523 u.warn(_("** or mercurial@selenic.com\n"))
3517 3524 u.warn(_("** Mercurial Distributed SCM (version %s)\n")
3518 3525 % version.get_version())
3519 3526 raise
3520 3527
3521 3528 return -1
@@ -1,124 +1,126 b''
1 1 # context.py - changeset and file context objects for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 class changectx(object):
9 9 """A changecontext object makes access to data related to a particular
10 10 changeset convenient."""
11 11 def __init__(self, repo, changeid):
12 12 """changeid is a revision number, node, or tag"""
13 13 self._repo = repo
14 14 self._id = changeid
15 15
16 16 self._node = self._repo.lookup(self._id)
17 17 self._rev = self._repo.changelog.rev(self._node)
18 18
19 19 def changeset(self):
20 20 try:
21 21 return self._changeset
22 22 except AttributeError:
23 23 self._changeset = self._repo.changelog.read(self.node())
24 24 return self._changeset
25 25
26 26 def manifest(self):
27 27 try:
28 28 return self._manifest
29 29 except AttributeError:
30 30 self._manifest = self._repo.manifest.read(self.changeset()[0])
31 31 return self._manifest
32 32
33 33 def rev(self): return self._rev
34 34 def node(self): return self._node
35 35 def user(self): return self.changeset()[1]
36 36 def date(self): return self.changeset()[2]
37 37 def changedfiles(self): return self.changeset()[3]
38 38 def description(self): return self.changeset()[4]
39 39
40 40 def parents(self):
41 41 """return contexts for each parent changeset"""
42 p = self.repo.changelog.parents(self._node)
42 p = self._repo.changelog.parents(self._node)
43 43 return [ changectx(self._repo, x) for x in p ]
44 44
45 45 def children(self):
46 46 """return contexts for each child changeset"""
47 c = self.repo.changelog.children(self._node)
47 c = self._repo.changelog.children(self._node)
48 48 return [ changectx(self._repo, x) for x in c ]
49 49
50 50 def filenode(self, path):
51 51 node, flag = self._repo.manifest.find(self.changeset()[0], path)
52 52 return node
53 53
54 def filectx(self, path):
54 def filectx(self, path, fileid=None):
55 55 """get a file context from this changeset"""
56 return filectx(self._repo, path, fileid=self.filenode(path))
56 if fileid is None:
57 fileid = self.filenode(path)
58 return filectx(self._repo, path, fileid=fileid)
57 59
58 60 def filectxs(self):
59 61 """generate a file context for each file in this changeset's
60 62 manifest"""
61 63 mf = self.manifest()
62 64 m = mf.keys()
63 65 m.sort()
64 66 for f in m:
65 67 yield self.filectx(f, fileid=mf[f])
66 68
67 69 class filectx(object):
68 70     """A filectx object provides convenient access to data for a
69 71     particular file revision."""
70 72 def __init__(self, repo, path, changeid=None, fileid=None):
71 73 """changeid can be a changeset revision, node, or tag.
72 74 fileid can be a file revision or node."""
73 75 self._repo = repo
74 76 self._path = path
75 77 self._id = changeid
76 78 self._fileid = fileid
77 79
78 80 if self._id:
79 81 # if given a changeset id, go ahead and look up the file
80 self._changeset = changectx(repo, self._id)
82 self._changeset = self._repo.changelog.read(self._id)
81 83 node, flag = self._repo.manifest.find(self._changeset[0], path)
82 self._node = node
83 self._filelog = self.repo.file(self._path)
84 self._filelog = self._repo.file(self._path)
85 self._filenode = node
84 86 elif self._fileid:
85 87 # else be lazy
86 88 self._filelog = self._repo.file(self._path)
87 89 self._filenode = self._filelog.lookup(self._fileid)
88 90 self._filerev = self._filelog.rev(self._filenode)
89 91
90 92 def changeset(self):
91 93 try:
92 94 return self._changeset
93 95 except AttributeError:
94 96 self._changeset = self._repo.changelog.read(self.node())
95 97 return self._changeset
96 98
97 99 def filerev(self): return self._filerev
98 100 def filenode(self): return self._filenode
99 101 def filelog(self): return self._filelog
100 102
101 103 def rev(self): return self.changeset().rev()
102 104 def node(self): return self.changeset().node()
103 105 def user(self): return self.changeset().user()
104 106 def date(self): return self.changeset().date()
105 107 def files(self): return self.changeset().files()
106 108 def description(self): return self.changeset().description()
107 109 def manifest(self): return self.changeset().manifest()
108 110
109 111 def data(self): return self._filelog.read(self._filenode)
110 112 def metadata(self): return self._filelog.readmeta(self._filenode)
111 113 def renamed(self): return self._filelog.renamed(self._filenode)
112 114
113 115 def parents(self):
114 116 # need to fix for renames
115 117 p = self._filelog.parents(self._filenode)
116 118 return [ filectx(self._repo, self._path, fileid=x) for x in p ]
117 119
118 120 def children(self):
119 121 # hard for renames
120 122 c = self._filelog.children(self._filenode)
121 123 return [ filectx(self._repo, self._path, fileid=x) for x in c ]
122 124
123 125 def annotate(self):
124 126 return self._filelog.annotate(self._filenode)
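The changectx and filectx accessors above (changeset(), manifest()) use a try/except AttributeError idiom: the expensive revlog read happens on first access, and the result is cached as an instance attribute for later calls. A minimal self-contained sketch of that idiom; the class and attribute names here are illustrative, not part of Mercurial's API:

```python
class LazyRecord(object):
    """Compute an expensive field once, then serve it from a cache."""
    def __init__(self, loader):
        self._loader = loader  # callable performing the expensive read

    def record(self):
        try:
            # fast path: value was cached by a previous call
            return self._record
        except AttributeError:
            # slow path: first access computes and caches the value
            self._record = self._loader()
            return self._record

calls = []
r = LazyRecord(lambda: calls.append(1) or ("node", "user"))
r.record()
r.record()  # second call hits the cache; the loader does not run again
```

The benefit over computing in __init__ is that a context object stays cheap to create when the caller never touches the cached field.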
@@ -1,208 +1,209 b''
1 1 # hg.py - repository classes for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import *
9 9 from repo import *
10 10 from demandload import *
11 11 from i18n import gettext as _
12 12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
13 13 demandload(globals(), "errno lock os shutil util")
14 14
15 15 def bundle(ui, path):
16 16 if path.startswith('bundle://'):
17 17 path = path[9:]
18 18 else:
19 19 path = path[7:]
20 20 s = path.split("+", 1)
21 21 if len(s) == 1:
22 22 repopath, bundlename = "", s[0]
23 23 else:
24 24 repopath, bundlename = s
25 25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
26 26
27 27 def hg(ui, path):
28 28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
29 29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
30 30
31 31 def local_(ui, path, create=0):
32 32 if path.startswith('file:'):
33 33 path = path[5:]
34 34 return localrepo.localrepository(ui, path, create)
35 35
36 36 def ssh_(ui, path, create=0):
37 37 return sshrepo.sshrepository(ui, path, create)
38 38
39 39 def old_http(ui, path):
40 40 ui.warn(_("old-http:// syntax is deprecated, "
41 41 "please use static-http:// instead\n"))
42 42 return statichttprepo.statichttprepository(
43 43 ui, path.replace("old-http://", "http://"))
44 44
45 45 def static_http(ui, path):
46 46 return statichttprepo.statichttprepository(
47 47 ui, path.replace("static-http://", "http://"))
48 48
49 49 schemes = {
50 50 'bundle': bundle,
51 51 'file': local_,
52 52 'hg': hg,
53 53 'http': lambda ui, path: httprepo.httprepository(ui, path),
54 54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
55 55 'old-http': old_http,
56 56 'ssh': ssh_,
57 57 'static-http': static_http,
58 58 }
59 59
60 60 def repository(ui, path=None, create=0):
61 61 scheme = None
62 62 if path:
63 63 c = path.find(':')
64 64 if c > 0:
65 65 scheme = schemes.get(path[:c])
66 66 else:
67 67 path = ''
68 68 ctor = scheme or schemes['file']
69 69 if create:
70 70 try:
71 71 return ctor(ui, path, create)
72 72 except TypeError:
73 73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
74 74 scheme)
75 75 return ctor(ui, path)
76 76
77 77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
78 78 stream=False):
79 79 """Make a copy of an existing repository.
80 80
81 81 Create a copy of an existing repository in a new directory. The
82 82 source and destination are URLs, as passed to the repository
83 83 function. Returns a pair of repository objects, the source and
84 84 newly created destination.
85 85
86 86 The location of the source is added to the new repository's
87 87 .hg/hgrc file, as the default to be used for future pulls and
88 88 pushes.
89 89
90 90 If an exception is raised, the partly cloned/updated destination
91 91 repository will be deleted.
92 92
93 93 Keyword arguments:
94 94
95 95 dest: URL of destination repository to create (defaults to base
96 96 name of source repository)
97 97
98 98 pull: always pull from source repository, even in local case
99 99
100 stream: stream from repository (fast over LAN, slow over WAN)
100 stream: stream raw data uncompressed from repository (fast over
101 LAN, slow over WAN)
101 102
102 103 rev: revision to clone up to (implies pull=True)
103 104
104 105 update: update working directory after clone completes, if
105 106 destination is local repository
106 107 """
107 108 if dest is None:
108 109 dest = os.path.basename(os.path.normpath(source))
109 110
110 111 if os.path.exists(dest):
111 112         raise util.Abort(_("destination '%s' already exists") % dest)
112 113
113 114 class DirCleanup(object):
114 115 def __init__(self, dir_):
115 116 self.rmtree = shutil.rmtree
116 117 self.dir_ = dir_
117 118 def close(self):
118 119 self.dir_ = None
119 120 def __del__(self):
120 121 if self.dir_:
121 122 self.rmtree(self.dir_, True)
122 123
123 124 src_repo = repository(ui, source)
124 125
125 126 dest_repo = None
126 127 try:
127 128 dest_repo = repository(ui, dest)
128 129         raise util.Abort(_("destination '%s' already exists.") % dest)
129 130 except RepoError:
130 131 dest_repo = repository(ui, dest, create=True)
131 132
132 133 dest_path = None
133 134 dir_cleanup = None
134 135 if dest_repo.local():
135 136 dest_path = os.path.realpath(dest)
136 137 dir_cleanup = DirCleanup(dest_path)
137 138
138 139 abspath = source
139 140 copy = False
140 141 if src_repo.local() and dest_repo.local():
141 142 abspath = os.path.abspath(source)
142 143 copy = not pull and not rev
143 144
144 145 src_lock, dest_lock = None, None
145 146 if copy:
146 147 try:
147 148 # we use a lock here because if we race with commit, we
148 149 # can end up with extra data in the cloned revlogs that's
149 150 # not pointed to by changesets, thus causing verify to
150 151 # fail
151 152 src_lock = src_repo.lock()
152 153 except lock.LockException:
153 154 copy = False
154 155
155 156 if copy:
156 157 # we lock here to avoid premature writing to the target
157 158 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158 159
159 # we need to remove the (empty) data dir in dest so copyfiles
160 # can do its work
161 os.rmdir(os.path.join(dest_path, ".hg", "data"))
160 # we need to remove the (empty) data dir in dest so copyfiles
161 # can do its work
162 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 163 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 164 for f in files.split():
164 165 src = os.path.join(source, ".hg", f)
165 166 dst = os.path.join(dest_path, ".hg", f)
166 167 try:
167 168 util.copyfiles(src, dst)
168 169 except OSError, inst:
169 170 if inst.errno != errno.ENOENT:
170 171 raise
171 172
172 # we need to re-init the repo after manually copying the data
173 # into it
173 # we need to re-init the repo after manually copying the data
174 # into it
174 175 dest_repo = repository(ui, dest)
175 176
176 177 else:
177 178 revs = None
178 179 if rev:
179 180 if not src_repo.local():
180 181 raise util.Abort(_("clone by revision not supported yet "
181 182 "for remote repositories"))
182 183 revs = [src_repo.lookup(r) for r in rev]
183 184
184 185 if dest_repo.local():
185 186 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 187 elif src_repo.local():
187 188 src_repo.push(dest_repo, revs=revs)
188 189 else:
189 190 raise util.Abort(_("clone from remote to remote not supported"))
190 191
191 192 if src_lock:
192 193 src_lock.release()
193 194
194 195 if dest_repo.local():
195 196 fp = dest_repo.opener("hgrc", "w", text=True)
196 197 fp.write("[paths]\n")
197 198 fp.write("default = %s\n" % abspath)
198 199 fp.close()
199 200
200 201 if dest_lock:
201 202 dest_lock.release()
202 203
203 204 if update:
204 205 dest_repo.update(dest_repo.changelog.tip())
205 206 if dir_cleanup:
206 207 dir_cleanup.close()
207 208
208 209 return src_repo, dest_repo
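repository() above dispatches on the URL scheme through the schemes dict, mapping a protocol prefix to a constructor function and falling back to the local-file handler when the prefix is missing or unknown. A stripped-down sketch of the same dispatch-table pattern; the handler names and return values are made up for illustration:

```python
def open_file(path):
    # stand-in for localrepo.localrepository
    return ("file", path)

def open_http(path):
    # stand-in for httprepo.httprepository
    return ("http", path)

schemes = {
    "file": open_file,
    "http": open_http,
}

def repository(path):
    # look for a "scheme:" prefix; fall back to the local-file handler
    ctor = None
    c = path.find(":")
    if c > 0:
        ctor = schemes.get(path[:c])
    return (ctor or schemes["file"])(path)
```

Keeping the mapping in a dict lets new protocols be registered without touching the dispatch logic itself.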
@@ -1,957 +1,960 b''
1 1 # hgweb/hgweb_mod.py - Web interface for a repository.
2 2 #
3 3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 5 #
6 6 # This software may be used and distributed according to the terms
7 7 # of the GNU General Public License, incorporated herein by reference.
8 8
9 9 import os
10 10 import os.path
11 11 import mimetypes
12 12 from mercurial.demandload import demandload
13 13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
14 14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
15 15 demandload(globals(), "mercurial:templater")
16 16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
17 17 from mercurial.node import *
18 18 from mercurial.i18n import gettext as _
19 19
20 20 def _up(p):
21 21 if p[0] != "/":
22 22 p = "/" + p
23 23 if p[-1] == "/":
24 24 p = p[:-1]
25 25 up = os.path.dirname(p)
26 26 if up == "/":
27 27 return "/"
28 28 return up + "/"
29 29
30 30 class hgweb(object):
31 31 def __init__(self, repo, name=None):
32 32 if type(repo) == type(""):
33 33 self.repo = hg.repository(ui.ui(), repo)
34 34 else:
35 35 self.repo = repo
36 36
37 37 self.mtime = -1
38 38 self.reponame = name
39 39 self.archives = 'zip', 'gz', 'bz2'
40 40 self.templatepath = self.repo.ui.config("web", "templates",
41 41 templater.templatepath())
42 42
43 43 def refresh(self):
44 44 mtime = get_mtime(self.repo.root)
45 45 if mtime != self.mtime:
46 46 self.mtime = mtime
47 47 self.repo = hg.repository(self.repo.ui, self.repo.root)
48 48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
49 49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
50 50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
51 51
52 52 def archivelist(self, nodeid):
53 53 allowed = self.repo.ui.configlist("web", "allow_archive")
54 54 for i in self.archives:
55 55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
56 56 yield {"type" : i, "node" : nodeid, "url": ""}
57 57
58 58 def listfiles(self, files, mf):
59 59 for f in files[:self.maxfiles]:
60 60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
61 61 if len(files) > self.maxfiles:
62 62 yield self.t("fileellipses")
63 63
64 64 def listfilediffs(self, files, changeset):
65 65 for f in files[:self.maxfiles]:
66 66 yield self.t("filedifflink", node=hex(changeset), file=f)
67 67 if len(files) > self.maxfiles:
68 68 yield self.t("fileellipses")
69 69
70 70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
71 71 if not rev:
72 72 rev = lambda x: ""
73 73 siblings = [s for s in siblings if s != nullid]
74 74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
75 75 return
76 76 for s in siblings:
77 77 yield dict(node=hex(s), rev=rev(s), **args)
78 78
79 79 def renamelink(self, fl, node):
80 80 r = fl.renamed(node)
81 81 if r:
82 82 return [dict(file=r[0], node=hex(r[1]))]
83 83 return []
84 84
85 85 def showtag(self, t1, node=nullid, **args):
86 86 for t in self.repo.nodetags(node):
87 87 yield self.t(t1, tag=t, **args)
88 88
89 89 def diff(self, node1, node2, files):
90 90 def filterfiles(filters, files):
91 91 l = [x for x in files if x in filters]
92 92
93 93 for t in filters:
94 94 if t and t[-1] != os.sep:
95 95 t += os.sep
96 96 l += [x for x in files if x.startswith(t)]
97 97 return l
98 98
99 99 parity = [0]
100 100 def diffblock(diff, f, fn):
101 101 yield self.t("diffblock",
102 102 lines=prettyprintlines(diff),
103 103 parity=parity[0],
104 104 file=f,
105 105 filenode=hex(fn or nullid))
106 106 parity[0] = 1 - parity[0]
107 107
108 108 def prettyprintlines(diff):
109 109 for l in diff.splitlines(1):
110 110 if l.startswith('+'):
111 111 yield self.t("difflineplus", line=l)
112 112 elif l.startswith('-'):
113 113 yield self.t("difflineminus", line=l)
114 114 elif l.startswith('@'):
115 115 yield self.t("difflineat", line=l)
116 116 else:
117 117 yield self.t("diffline", line=l)
118 118
119 119 r = self.repo
120 120 cl = r.changelog
121 121 mf = r.manifest
122 122 change1 = cl.read(node1)
123 123 change2 = cl.read(node2)
124 124 mmap1 = mf.read(change1[0])
125 125 mmap2 = mf.read(change2[0])
126 126 date1 = util.datestr(change1[2])
127 127 date2 = util.datestr(change2[2])
128 128
129 129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
130 130 if files:
131 131 modified, added, removed = map(lambda x: filterfiles(files, x),
132 132 (modified, added, removed))
133 133
134 134 diffopts = self.repo.ui.diffopts()
135 135 showfunc = diffopts['showfunc']
136 136 ignorews = diffopts['ignorews']
137 137 ignorewsamount = diffopts['ignorewsamount']
138 138 ignoreblanklines = diffopts['ignoreblanklines']
139 139 for f in modified:
140 140 to = r.file(f).read(mmap1[f])
141 141 tn = r.file(f).read(mmap2[f])
142 142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
143 143 showfunc=showfunc, ignorews=ignorews,
144 144 ignorewsamount=ignorewsamount,
145 145 ignoreblanklines=ignoreblanklines), f, tn)
146 146 for f in added:
147 147 to = None
148 148 tn = r.file(f).read(mmap2[f])
149 149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
150 150 showfunc=showfunc, ignorews=ignorews,
151 151 ignorewsamount=ignorewsamount,
152 152 ignoreblanklines=ignoreblanklines), f, tn)
153 153 for f in removed:
154 154 to = r.file(f).read(mmap1[f])
155 155 tn = None
156 156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
157 157 showfunc=showfunc, ignorews=ignorews,
158 158 ignorewsamount=ignorewsamount,
159 159 ignoreblanklines=ignoreblanklines), f, tn)
160 160
161 161 def changelog(self, pos):
162 162 def changenav(**map):
163 163 def seq(factor, maxchanges=None):
164 164 if maxchanges:
165 165 yield maxchanges
166 166 if maxchanges >= 20 and maxchanges <= 40:
167 167 yield 50
168 168 else:
169 169 yield 1 * factor
170 170 yield 3 * factor
171 171 for f in seq(factor * 10):
172 172 yield f
173 173
174 174 l = []
175 175 last = 0
176 176 for f in seq(1, self.maxchanges):
177 177 if f < self.maxchanges or f <= last:
178 178 continue
179 179 if f > count:
180 180 break
181 181 last = f
182 182 r = "%d" % f
183 183 if pos + f < count:
184 184 l.append(("+" + r, pos + f))
185 185 if pos - f >= 0:
186 186 l.insert(0, ("-" + r, pos - f))
187 187
188 188 yield {"rev": 0, "label": "(0)"}
189 189
190 190 for label, rev in l:
191 191 yield {"label": label, "rev": rev}
192 192
193 193 yield {"label": "tip", "rev": "tip"}
194 194
195 195 def changelist(**map):
196 196 parity = (start - end) & 1
197 197 cl = self.repo.changelog
198 198 l = [] # build a list in forward order for efficiency
199 199 for i in range(start, end):
200 200 n = cl.node(i)
201 201 changes = cl.read(n)
202 202 hn = hex(n)
203 203
204 204 l.insert(0, {"parity": parity,
205 205 "author": changes[1],
206 206 "parent": self.siblings(cl.parents(n), cl.rev,
207 207 cl.rev(n) - 1),
208 208 "child": self.siblings(cl.children(n), cl.rev,
209 209 cl.rev(n) + 1),
210 210 "changelogtag": self.showtag("changelogtag",n),
211 211 "manifest": hex(changes[0]),
212 212 "desc": changes[4],
213 213 "date": changes[2],
214 214 "files": self.listfilediffs(changes[3], n),
215 215 "rev": i,
216 216 "node": hn})
217 217 parity = 1 - parity
218 218
219 219 for e in l:
220 220 yield e
221 221
222 222 cl = self.repo.changelog
223 223 mf = cl.read(cl.tip())[0]
224 224 count = cl.count()
225 225 start = max(0, pos - self.maxchanges + 1)
226 226 end = min(count, start + self.maxchanges)
227 227 pos = end - 1
228 228
229 229 yield self.t('changelog',
230 230 changenav=changenav,
231 231 manifest=hex(mf),
232 232 rev=pos, changesets=count, entries=changelist,
233 233 archives=self.archivelist("tip"))
234 234
235 235 def search(self, query):
236 236
237 237 def changelist(**map):
238 238 cl = self.repo.changelog
239 239 count = 0
240 240 qw = query.lower().split()
241 241
242 242 def revgen():
243 243 for i in range(cl.count() - 1, 0, -100):
244 244 l = []
245 245 for j in range(max(0, i - 100), i):
246 246 n = cl.node(j)
247 247 changes = cl.read(n)
248 248 l.append((n, j, changes))
249 249 l.reverse()
250 250 for e in l:
251 251 yield e
252 252
253 253 for n, i, changes in revgen():
254 254 miss = 0
255 255 for q in qw:
256 256 if not (q in changes[1].lower() or
257 257 q in changes[4].lower() or
258 258 q in " ".join(changes[3][:20]).lower()):
259 259 miss = 1
260 260 break
261 261 if miss:
262 262 continue
263 263
264 264 count += 1
265 265 hn = hex(n)
266 266
267 267 yield self.t('searchentry',
268 268 parity=count & 1,
269 269 author=changes[1],
270 270 parent=self.siblings(cl.parents(n), cl.rev),
271 271 child=self.siblings(cl.children(n), cl.rev),
272 272 changelogtag=self.showtag("changelogtag",n),
273 273 manifest=hex(changes[0]),
274 274 desc=changes[4],
275 275 date=changes[2],
276 276 files=self.listfilediffs(changes[3], n),
277 277 rev=i,
278 278 node=hn)
279 279
280 280 if count >= self.maxchanges:
281 281 break
282 282
283 283 cl = self.repo.changelog
284 284 mf = cl.read(cl.tip())[0]
285 285
286 286 yield self.t('search',
287 287 query=query,
288 288 manifest=hex(mf),
289 289 entries=changelist)
290 290
291 291 def changeset(self, nodeid):
292 292 cl = self.repo.changelog
293 293 n = self.repo.lookup(nodeid)
294 294 nodeid = hex(n)
295 295 changes = cl.read(n)
296 296 p1 = cl.parents(n)[0]
297 297
298 298 files = []
299 299 mf = self.repo.manifest.read(changes[0])
300 300 for f in changes[3]:
301 301 files.append(self.t("filenodelink",
302 302 filenode=hex(mf.get(f, nullid)), file=f))
303 303
304 304 def diff(**map):
305 305 yield self.diff(p1, n, None)
306 306
307 307 yield self.t('changeset',
308 308 diff=diff,
309 309 rev=cl.rev(n),
310 310 node=nodeid,
311 311 parent=self.siblings(cl.parents(n), cl.rev),
312 312 child=self.siblings(cl.children(n), cl.rev),
313 313 changesettag=self.showtag("changesettag",n),
314 314 manifest=hex(changes[0]),
315 315 author=changes[1],
316 316 desc=changes[4],
317 317 date=changes[2],
318 318 files=files,
319 319 archives=self.archivelist(nodeid))
320 320
321 321 def filelog(self, f, filenode):
322 322 cl = self.repo.changelog
323 323 fl = self.repo.file(f)
324 324 filenode = hex(fl.lookup(filenode))
325 325 count = fl.count()
326 326
327 327 def entries(**map):
328 328 l = []
329 329 parity = (count - 1) & 1
330 330
331 331 for i in range(count):
332 332 n = fl.node(i)
333 333 lr = fl.linkrev(n)
334 334 cn = cl.node(lr)
335 335 cs = cl.read(cl.node(lr))
336 336
337 337 l.insert(0, {"parity": parity,
338 338 "filenode": hex(n),
339 339 "filerev": i,
340 340 "file": f,
341 341 "node": hex(cn),
342 342 "author": cs[1],
343 343 "date": cs[2],
344 344 "rename": self.renamelink(fl, n),
345 345 "parent": self.siblings(fl.parents(n),
346 346 fl.rev, file=f),
347 347 "child": self.siblings(fl.children(n),
348 348 fl.rev, file=f),
349 349 "desc": cs[4]})
350 350 parity = 1 - parity
351 351
352 352 for e in l:
353 353 yield e
354 354
355 355 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
356 356
357 357 def filerevision(self, f, node):
358 358 fl = self.repo.file(f)
359 359 n = fl.lookup(node)
360 360 node = hex(n)
361 361 text = fl.read(n)
362 362 changerev = fl.linkrev(n)
363 363 cl = self.repo.changelog
364 364 cn = cl.node(changerev)
365 365 cs = cl.read(cn)
366 366 mfn = cs[0]
367 367
368 368 mt = mimetypes.guess_type(f)[0]
369 369 rawtext = text
370 370 if util.binary(text):
371 371 mt = mt or 'application/octet-stream'
372 372 text = "(binary:%s)" % mt
373 373 mt = mt or 'text/plain'
374 374
375 375 def lines():
376 376 for l, t in enumerate(text.splitlines(1)):
377 377 yield {"line": t,
378 378 "linenumber": "% 6d" % (l + 1),
379 379 "parity": l & 1}
380 380
381 381 yield self.t("filerevision",
382 382 file=f,
383 383 filenode=node,
384 384 path=_up(f),
385 385 text=lines(),
386 386 raw=rawtext,
387 387 mimetype=mt,
388 388 rev=changerev,
389 389 node=hex(cn),
390 390 manifest=hex(mfn),
391 391 author=cs[1],
392 392 date=cs[2],
393 393 parent=self.siblings(fl.parents(n), fl.rev, file=f),
394 394 child=self.siblings(fl.children(n), fl.rev, file=f),
395 395 rename=self.renamelink(fl, n),
396 396 permissions=self.repo.manifest.readflags(mfn)[f])
397 397
398 398 def fileannotate(self, f, node):
399 399 bcache = {}
400 400 ncache = {}
401 401 fl = self.repo.file(f)
402 402 n = fl.lookup(node)
403 403 node = hex(n)
404 404 changerev = fl.linkrev(n)
405 405
406 406 cl = self.repo.changelog
407 407 cn = cl.node(changerev)
408 408 cs = cl.read(cn)
409 409 mfn = cs[0]
410 410
411 411 def annotate(**map):
412 412 parity = 1
413 413 last = None
414 414 for r, l in fl.annotate(n):
415 415 try:
416 416 cnode = ncache[r]
417 417 except KeyError:
418 418 cnode = ncache[r] = self.repo.changelog.node(r)
419 419
420 420 try:
421 421 name = bcache[r]
422 422 except KeyError:
423 423 cl = self.repo.changelog.read(cnode)
424 424 bcache[r] = name = self.repo.ui.shortuser(cl[1])
425 425
426 426 if last != cnode:
427 427 parity = 1 - parity
428 428 last = cnode
429 429
430 430 yield {"parity": parity,
431 431 "node": hex(cnode),
432 432 "rev": r,
433 433 "author": name,
434 434 "file": f,
435 435 "line": l}
436 436
437 437 yield self.t("fileannotate",
438 438 file=f,
439 439 filenode=node,
440 440 annotate=annotate,
441 441 path=_up(f),
442 442 rev=changerev,
443 443 node=hex(cn),
444 444 manifest=hex(mfn),
445 445 author=cs[1],
446 446 date=cs[2],
447 447 rename=self.renamelink(fl, n),
448 448 parent=self.siblings(fl.parents(n), fl.rev, file=f),
449 449 child=self.siblings(fl.children(n), fl.rev, file=f),
450 450 permissions=self.repo.manifest.readflags(mfn)[f])
451 451
452 452 def manifest(self, mnode, path):
453 453 man = self.repo.manifest
454 454 mn = man.lookup(mnode)
455 455 mnode = hex(mn)
456 456 mf = man.read(mn)
457 457 rev = man.rev(mn)
458 458 changerev = man.linkrev(mn)
459 459 node = self.repo.changelog.node(changerev)
460 460 mff = man.readflags(mn)
461 461
462 462 files = {}
463 463
464 464 p = path[1:]
465 465 if p and p[-1] != "/":
466 466 p += "/"
467 467 l = len(p)
468 468
469 469 for f,n in mf.items():
470 470 if f[:l] != p:
471 471 continue
472 472 remain = f[l:]
473 473 if "/" in remain:
474 474 short = remain[:remain.index("/") + 1] # bleah
475 475 files[short] = (f, None)
476 476 else:
477 477 short = os.path.basename(remain)
478 478 files[short] = (f, n)
479 479
480 480 def filelist(**map):
481 481 parity = 0
482 482 fl = files.keys()
483 483 fl.sort()
484 484 for f in fl:
485 485 full, fnode = files[f]
486 486 if not fnode:
487 487 continue
488 488
489 489 yield {"file": full,
490 490 "manifest": mnode,
491 491 "filenode": hex(fnode),
492 492 "parity": parity,
493 493 "basename": f,
494 494 "permissions": mff[full]}
495 495 parity = 1 - parity
496 496
497 497 def dirlist(**map):
498 498 parity = 0
499 499 fl = files.keys()
500 500 fl.sort()
501 501 for f in fl:
502 502 full, fnode = files[f]
503 503 if fnode:
504 504 continue
505 505
506 506 yield {"parity": parity,
507 507 "path": os.path.join(path, f),
508 508 "manifest": mnode,
509 509 "basename": f[:-1]}
510 510 parity = 1 - parity
511 511
512 512 yield self.t("manifest",
513 513 manifest=mnode,
514 514 rev=rev,
515 515 node=hex(node),
516 516 path=path,
517 517 up=_up(path),
518 518 fentries=filelist,
519 519 dentries=dirlist,
520 520 archives=self.archivelist(hex(node)))
521 521
522 522 def tags(self):
523 523 cl = self.repo.changelog
524 524 mf = cl.read(cl.tip())[0]
525 525
526 526 i = self.repo.tagslist()
527 527 i.reverse()
528 528
529 529 def entries(notip=False, **map):
530 530 parity = 0
531 531 for k,n in i:
532 532 if notip and k == "tip": continue
533 533 yield {"parity": parity,
534 534 "tag": k,
535 535 "tagmanifest": hex(cl.read(n)[0]),
536 536 "date": cl.read(n)[2],
537 537 "node": hex(n)}
538 538 parity = 1 - parity
539 539
540 540 yield self.t("tags",
541 541 manifest=hex(mf),
542 542 entries=lambda **x: entries(False, **x),
543 543 entriesnotip=lambda **x: entries(True, **x))
544 544
545 545 def summary(self):
546 546 cl = self.repo.changelog
547 547 mf = cl.read(cl.tip())[0]
548 548
549 549 i = self.repo.tagslist()
550 550 i.reverse()
551 551
552 552 def tagentries(**map):
553 553 parity = 0
554 554 count = 0
555 555 for k,n in i:
556 556 if k == "tip": # skip tip
557 557                 continue
558 558
559 559 count += 1
560 560 if count > 10: # limit to 10 tags
561 561                 break
562 562
563 563 c = cl.read(n)
564 564 m = c[0]
565 565 t = c[2]
566 566
567 567 yield self.t("tagentry",
568 568 parity = parity,
569 569 tag = k,
570 570 node = hex(n),
571 571 date = t,
572 572 tagmanifest = hex(m))
573 573 parity = 1 - parity
574 574
575 575 def changelist(**map):
576 576 parity = 0
577 577 cl = self.repo.changelog
578 578 l = [] # build a list in forward order for efficiency
579 579 for i in range(start, end):
580 580 n = cl.node(i)
581 581 changes = cl.read(n)
582 582 hn = hex(n)
583 583 t = changes[2]
584 584
585 585 l.insert(0, self.t(
586 586 'shortlogentry',
587 587 parity = parity,
588 588 author = changes[1],
589 589 manifest = hex(changes[0]),
590 590 desc = changes[4],
591 591 date = t,
592 592 rev = i,
593 593 node = hn))
594 594 parity = 1 - parity
595 595
596 596 yield l
597 597
598 598 cl = self.repo.changelog
599 599 mf = cl.read(cl.tip())[0]
600 600 count = cl.count()
601 601 start = max(0, count - self.maxchanges)
602 602 end = min(count, start + self.maxchanges)
603 603
604 604 yield self.t("summary",
605 605 desc = self.repo.ui.config("web", "description", "unknown"),
606 606 owner = (self.repo.ui.config("ui", "username") or # preferred
607 607 self.repo.ui.config("web", "contact") or # deprecated
608 608 self.repo.ui.config("web", "author", "unknown")), # also
609 609 lastchange = (0, 0), # FIXME
610 610 manifest = hex(mf),
611 611 tags = tagentries,
612 612 shortlog = changelist)
613 613
614 614 def filediff(self, file, changeset):
615 615 cl = self.repo.changelog
616 616 n = self.repo.lookup(changeset)
617 617 changeset = hex(n)
618 618 p1 = cl.parents(n)[0]
619 619 cs = cl.read(n)
620 620 mf = self.repo.manifest.read(cs[0])
621 621
622 622 def diff(**map):
623 623 yield self.diff(p1, n, [file])
624 624
625 625 yield self.t("filediff",
626 626 file=file,
627 627 filenode=hex(mf.get(file, nullid)),
628 628 node=changeset,
629 629 rev=self.repo.changelog.rev(n),
630 630 parent=self.siblings(cl.parents(n), cl.rev),
631 631 child=self.siblings(cl.children(n), cl.rev),
632 632 diff=diff)
633 633
634 634 archive_specs = {
635 635 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
636 636 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
637 637 'zip': ('application/zip', 'zip', '.zip', None),
638 638 }
639 639
640 640 def archive(self, req, cnode, type_):
641 641 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
642 642 name = "%s-%s" % (reponame, short(cnode))
643 643 mimetype, artype, extension, encoding = self.archive_specs[type_]
644 644 headers = [('Content-type', mimetype),
645 645 ('Content-disposition', 'attachment; filename=%s%s' %
646 646 (name, extension))]
647 647 if encoding:
648 648 headers.append(('Content-encoding', encoding))
649 649 req.header(headers)
650 650 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
651 651
652 652 # add tags to things
653 653 # tags -> list of changesets corresponding to tags
654 654 # find tag, changeset, file
655 655
656 656 def cleanpath(self, path):
657 657 p = util.normpath(path)
658 658 if p[:2] == "..":
659 659 raise Exception("suspicious path")
660 660 return p
661 661
662 662 def run(self):
663 663 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
664 664 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
665 665 import mercurial.hgweb.wsgicgi as wsgicgi
666 666 from request import wsgiapplication
667 667 def make_web_app():
668 668 return self
669 669 wsgicgi.launch(wsgiapplication(make_web_app))
670 670
671 671 def run_wsgi(self, req):
672 672 def header(**map):
673 673 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
674 674 msg = mimetools.Message(header_file, 0)
675 675 req.header(msg.items())
676 676 yield header_file.read()
677 677
678 678 def rawfileheader(**map):
679 679 req.header([('Content-type', map['mimetype']),
680 680 ('Content-disposition', 'filename=%s' % map['file']),
681 681 ('Content-length', str(len(map['raw'])))])
682 682 yield ''
683 683
684 684 def footer(**map):
685 685 yield self.t("footer",
686 686 motd=self.repo.ui.config("web", "motd", ""),
687 687 **map)
688 688
689 689 def expand_form(form):
690 690 shortcuts = {
691 691 'cl': [('cmd', ['changelog']), ('rev', None)],
692 692 'cs': [('cmd', ['changeset']), ('node', None)],
693 693 'f': [('cmd', ['file']), ('filenode', None)],
694 694 'fl': [('cmd', ['filelog']), ('filenode', None)],
695 695 'fd': [('cmd', ['filediff']), ('node', None)],
696 696 'fa': [('cmd', ['annotate']), ('filenode', None)],
697 697 'mf': [('cmd', ['manifest']), ('manifest', None)],
698 698 'ca': [('cmd', ['archive']), ('node', None)],
699 699 'tags': [('cmd', ['tags'])],
700 700 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
701 701 'static': [('cmd', ['static']), ('file', None)]
702 702 }
703 703
704 704 for k in shortcuts.iterkeys():
705 705 if form.has_key(k):
706 706 for name, value in shortcuts[k]:
707 707 if value is None:
708 708 value = form[k]
709 709 form[name] = value
710 710 del form[k]
711 711
712 712 self.refresh()
713 713
714 714 expand_form(req.form)
715 715
716 716 m = os.path.join(self.templatepath, "map")
717 717 style = self.repo.ui.config("web", "style", "")
718 718 if req.form.has_key('style'):
719 719 style = req.form['style'][0]
720 720 if style:
721 721 b = os.path.basename("map-" + style)
722 722 p = os.path.join(self.templatepath, b)
723 723 if os.path.isfile(p):
724 724 m = p
725 725
726 726 port = req.env["SERVER_PORT"]
727 727 port = port != "80" and (":" + port) or ""
728 728 uri = req.env["REQUEST_URI"]
729 729 if "?" in uri:
730 730 uri = uri.split("?")[0]
731 731 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
732 732 if not self.reponame:
733 733 self.reponame = (self.repo.ui.config("web", "name")
734 734 or uri.strip('/') or self.repo.root)
735 735
736 736 self.t = templater.templater(m, templater.common_filters,
737 737 defaults={"url": url,
738 738 "repo": self.reponame,
739 739 "header": header,
740 740 "footer": footer,
741 741 "rawfileheader": rawfileheader,
742 742 })
743 743
744 744 if not req.form.has_key('cmd'):
745 745 req.form['cmd'] = [self.t.cache['default'],]
746 746
747 747 cmd = req.form['cmd'][0]
748 748
749 749 method = getattr(self, 'do_' + cmd, None)
750 750 if method:
751 751 method(req)
752 752 else:
753 753 req.write(self.t("error"))
754 754
755 755 def do_changelog(self, req):
756 756 hi = self.repo.changelog.count() - 1
757 757 if req.form.has_key('rev'):
758 758 hi = req.form['rev'][0]
759 759 try:
760 760 hi = self.repo.changelog.rev(self.repo.lookup(hi))
761 761 except hg.RepoError:
762 762 req.write(self.search(hi)) # XXX redirect to 404 page?
763 763 return
764 764
765 765 req.write(self.changelog(hi))
766 766
767 767 def do_changeset(self, req):
768 768 req.write(self.changeset(req.form['node'][0]))
769 769
770 770 def do_manifest(self, req):
771 771 req.write(self.manifest(req.form['manifest'][0],
772 772 self.cleanpath(req.form['path'][0])))
773 773
774 774 def do_tags(self, req):
775 775 req.write(self.tags())
776 776
777 777 def do_summary(self, req):
778 778 req.write(self.summary())
779 779
780 780 def do_filediff(self, req):
781 781 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
782 782 req.form['node'][0]))
783 783
784 784 def do_file(self, req):
785 785 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
786 786 req.form['filenode'][0]))
787 787
788 788 def do_annotate(self, req):
789 789 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
790 790 req.form['filenode'][0]))
791 791
792 792 def do_filelog(self, req):
793 793 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
794 794 req.form['filenode'][0]))
795 795
796 796 def do_heads(self, req):
797 797 resp = " ".join(map(hex, self.repo.heads())) + "\n"
798 798 req.httphdr("application/mercurial-0.1", length=len(resp))
799 799 req.write(resp)
800 800
801 801 def do_branches(self, req):
802 802 nodes = []
803 803 if req.form.has_key('nodes'):
804 804 nodes = map(bin, req.form['nodes'][0].split(" "))
805 805 resp = cStringIO.StringIO()
806 806 for b in self.repo.branches(nodes):
807 807 resp.write(" ".join(map(hex, b)) + "\n")
808 808 resp = resp.getvalue()
809 809 req.httphdr("application/mercurial-0.1", length=len(resp))
810 810 req.write(resp)
811 811
812 812 def do_between(self, req):
813 813 nodes = []
814 814 if req.form.has_key('pairs'):
815 815 pairs = [map(bin, p.split("-"))
816 816 for p in req.form['pairs'][0].split(" ")]
817 817 resp = cStringIO.StringIO()
818 818 for b in self.repo.between(pairs):
819 819 resp.write(" ".join(map(hex, b)) + "\n")
820 820 resp = resp.getvalue()
821 821 req.httphdr("application/mercurial-0.1", length=len(resp))
822 822 req.write(resp)
823 823
824 824 def do_changegroup(self, req):
825 825 req.httphdr("application/mercurial-0.1")
826 826 nodes = []
827 827 if not self.allowpull:
828 828 return
829 829
830 830 if req.form.has_key('roots'):
831 831 nodes = map(bin, req.form['roots'][0].split(" "))
832 832
833 833 z = zlib.compressobj()
834 834 f = self.repo.changegroup(nodes, 'serve')
835 835 while 1:
836 836 chunk = f.read(4096)
837 837 if not chunk:
838 838 break
839 839 req.write(z.compress(chunk))
840 840
841 841 req.write(z.flush())
842 842
843 843 def do_archive(self, req):
844 844 changeset = self.repo.lookup(req.form['node'][0])
845 845 type_ = req.form['type'][0]
846 846 allowed = self.repo.ui.configlist("web", "allow_archive")
847 847 if (type_ in self.archives and (type_ in allowed or
848 848 self.repo.ui.configbool("web", "allow" + type_, False))):
849 849 self.archive(req, changeset, type_)
850 850 return
851 851
852 852 req.write(self.t("error"))
853 853
854 854 def do_static(self, req):
855 855 fname = req.form['file'][0]
856 856 static = self.repo.ui.config("web", "static",
857 857 os.path.join(self.templatepath,
858 858 "static"))
859 859 req.write(staticfile(static, fname, req)
860 860 or self.t("error", error="%r not found" % fname))
861 861
862 862 def do_capabilities(self, req):
863 resp = 'unbundle stream=%d' % (self.repo.revlogversion,)
863 caps = ['unbundle']
864 if self.repo.ui.configbool('server', 'uncompressed'):
865 caps.append('stream=%d' % self.repo.revlogversion)
866 resp = ' '.join(caps)
864 867 req.httphdr("application/mercurial-0.1", length=len(resp))
865 868 req.write(resp)
866 869
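The merged `do_capabilities` above now advertises `stream` only when the server is configured to serve uncompressed clones. A minimal sketch of that conditional capability string (not Mercurial's API; the function name and parameters here are illustrative):

```python
# Build the wire-protocol capability string: "unbundle" is always
# offered; "stream=<revlog version>" only when uncompressed serving
# is allowed (mirrors the server.uncompressed config check above).
def build_caps(revlogversion, allow_uncompressed):
    caps = ['unbundle']
    if allow_uncompressed:
        caps.append('stream=%d' % revlogversion)
    return ' '.join(caps)
```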
867 870 def check_perm(self, req, op, default):
868 871 '''check permission for operation based on user auth.
869 872 return true if op allowed, else false.
870 873 default is policy to use if no config given.'''
871 874
872 875 user = req.env.get('REMOTE_USER')
873 876
874 877 deny = self.repo.ui.configlist('web', 'deny_' + op)
875 878 if deny and (not user or deny == ['*'] or user in deny):
876 879 return False
877 880
878 881 allow = self.repo.ui.configlist('web', 'allow_' + op)
879 882 return (allow and (allow == ['*'] or user in allow)) or default
880 883
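The `check_perm` logic above resolves in a fixed order: the deny list wins outright, then the allow list must match, then the default policy applies. A standalone sketch of the same rule (illustrative helper, not part of the hgweb class):

```python
# Deny beats allow beats default. '*' matches any user; an anonymous
# user (user is None) is rejected whenever a deny list exists at all.
def allowed(user, deny, allow, default):
    if deny and (not user or deny == ['*'] or user in deny):
        return False
    return bool(allow and (allow == ['*'] or user in allow)) or default
```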
881 884 def do_unbundle(self, req):
882 885 def bail(response, headers={}):
883 886 length = int(req.env['CONTENT_LENGTH'])
884 887 for s in util.filechunkiter(req, limit=length):
885 888 # drain incoming bundle, else client will not see
886 889 # response when run outside cgi script
887 890 pass
888 891 req.httphdr("application/mercurial-0.1", headers=headers)
889 892 req.write('0\n')
890 893 req.write(response)
891 894
892 895 # require ssl by default, auth info cannot be sniffed and
893 896 # replayed
894 897 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
895 898 if ssl_req and not req.env.get('HTTPS'):
896 899 bail(_('ssl required\n'))
897 900 return
898 901
899 902 # do not allow push unless explicitly allowed
900 903 if not self.check_perm(req, 'push', False):
901 904 bail(_('push not authorized\n'),
902 905 headers={'status': '401 Unauthorized'})
903 906 return
904 907
905 908 req.httphdr("application/mercurial-0.1")
906 909
907 910 their_heads = req.form['heads'][0].split(' ')
908 911
909 912 def check_heads():
910 913 heads = map(hex, self.repo.heads())
911 914 return their_heads == [hex('force')] or their_heads == heads
912 915
913 916 # fail early if possible
914 917 if not check_heads():
915 918 bail(_('unsynced changes\n'))
916 919 return
917 920
918 921 # do not lock repo until all changegroup data is
919 922 # streamed. save to temporary file.
920 923
921 924 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
922 925 fp = os.fdopen(fd, 'wb+')
923 926 try:
924 927 length = int(req.env['CONTENT_LENGTH'])
925 928 for s in util.filechunkiter(req, limit=length):
926 929 fp.write(s)
927 930
928 931 lock = self.repo.lock()
929 932 try:
930 933 if not check_heads():
931 934 req.write('0\n')
932 935 req.write(_('unsynced changes\n'))
933 936 return
934 937
935 938 fp.seek(0)
936 939
937 940 # send addchangegroup output to client
938 941
939 942 old_stdout = sys.stdout
940 943 sys.stdout = cStringIO.StringIO()
941 944
942 945 try:
943 946 ret = self.repo.addchangegroup(fp, 'serve')
944 947 finally:
945 948 val = sys.stdout.getvalue()
946 949 sys.stdout = old_stdout
947 950 req.write('%d\n' % ret)
948 951 req.write(val)
949 952 finally:
950 953 lock.release()
951 954 finally:
952 955 fp.close()
953 956 os.unlink(tempname)
954 957
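`do_unbundle` above deliberately spools the incoming changegroup to a temporary file before taking the repository lock, so a slow upload cannot hold the lock. A hedged sketch of that spool-then-read pattern in isolation (the chunk iterator stands in for `util.filechunkiter`):

```python
import os
import tempfile

# Stream chunks to a temp file first, then rewind and read them back;
# the caller would take the repo lock only after spooling completes.
def spool(chunks):
    fd, name = tempfile.mkstemp(prefix='hg-unbundle-')
    fp = os.fdopen(fd, 'wb+')
    try:
        for s in chunks:
            fp.write(s)
        fp.seek(0)
        return fp.read()
    finally:
        fp.close()
        os.unlink(name)
```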
955 958 def do_stream_out(self, req):
956 959 req.httphdr("application/mercurial-0.1")
957 960 streamclone.stream_out(self.repo, req)
@@ -1,2254 +1,2258 b''
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import *
9 9 from i18n import gettext as _
10 10 from demandload import *
11 11 import repo
12 12 demandload(globals(), "appendfile changegroup")
13 13 demandload(globals(), "changelog dirstate filelog manifest context")
14 14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 15 demandload(globals(), "os revlog time util")
16 16
17 17 class localrepository(repo.repository):
18 18 capabilities = ()
19 19
20 20 def __del__(self):
21 21 self.transhandle = None
22 22 def __init__(self, parentui, path=None, create=0):
23 23 repo.repository.__init__(self)
24 24 if not path:
25 25 p = os.getcwd()
26 26 while not os.path.isdir(os.path.join(p, ".hg")):
27 27 oldp = p
28 28 p = os.path.dirname(p)
29 29 if p == oldp:
30 30 raise repo.RepoError(_("no repo found"))
31 31 path = p
32 32 self.path = os.path.join(path, ".hg")
33 33
34 34 if not create and not os.path.isdir(self.path):
35 35 raise repo.RepoError(_("repository %s not found") % path)
36 36
37 37 self.root = os.path.abspath(path)
38 38 self.origroot = path
39 39 self.ui = ui.ui(parentui=parentui)
40 40 self.opener = util.opener(self.path)
41 41 self.wopener = util.opener(self.root)
42 42
43 43 try:
44 44 self.ui.readconfig(self.join("hgrc"), self.root)
45 45 except IOError:
46 46 pass
47 47
48 48 v = self.ui.revlogopts
49 49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
50 50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
51 51 fl = v.get('flags', None)
52 52 flags = 0
53 53 if fl != None:
54 54 for x in fl.split():
55 55 flags |= revlog.flagstr(x)
56 56 elif self.revlogv1:
57 57 flags = revlog.REVLOG_DEFAULT_FLAGS
58 58
59 59 v = self.revlogversion | flags
60 60 self.manifest = manifest.manifest(self.opener, v)
61 61 self.changelog = changelog.changelog(self.opener, v)
62 62
63 63 # the changelog might not have the inline index flag
64 64 # on. If the format of the changelog is the same as found in
65 65 # .hgrc, apply any flags found in the .hgrc as well.
66 66 # Otherwise, just version from the changelog
67 67 v = self.changelog.version
68 68 if v == self.revlogversion:
69 69 v |= flags
70 70 self.revlogversion = v
71 71
72 72 self.tagscache = None
73 73 self.nodetagscache = None
74 74 self.encodepats = None
75 75 self.decodepats = None
76 76 self.transhandle = None
77 77
78 78 if create:
79 79 if not os.path.exists(path):
80 80 os.mkdir(path)
81 81 os.mkdir(self.path)
82 82 os.mkdir(self.join("data"))
83 83
84 84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
85 85
86 86 def hook(self, name, throw=False, **args):
87 87 def callhook(hname, funcname):
88 88 '''call python hook. hook is callable object, looked up as
89 89 name in python module. if callable returns "true", hook
90 90 fails, else passes. if hook raises exception, treated as
91 91 hook failure. exception propagates if throw is "true".
92 92
93 93 reason for "true" meaning "hook failed" is so that
94 94 unmodified commands (e.g. mercurial.commands.update) can
95 95 be run as hooks without wrappers to convert return values.'''
96 96
97 97 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
98 98 d = funcname.rfind('.')
99 99 if d == -1:
100 100 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
101 101 % (hname, funcname))
102 102 modname = funcname[:d]
103 103 try:
104 104 obj = __import__(modname)
105 105 except ImportError:
106 106 try:
107 107 # extensions are loaded with hgext_ prefix
108 108 obj = __import__("hgext_%s" % modname)
109 109 except ImportError:
110 110 raise util.Abort(_('%s hook is invalid '
111 111 '(import of "%s" failed)') %
112 112 (hname, modname))
113 113 try:
114 114 for p in funcname.split('.')[1:]:
115 115 obj = getattr(obj, p)
116 116 except AttributeError, err:
117 117 raise util.Abort(_('%s hook is invalid '
118 118 '("%s" is not defined)') %
119 119 (hname, funcname))
120 120 if not callable(obj):
121 121 raise util.Abort(_('%s hook is invalid '
122 122 '("%s" is not callable)') %
123 123 (hname, funcname))
124 124 try:
125 125 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
126 126 except (KeyboardInterrupt, util.SignalInterrupt):
127 127 raise
128 128 except Exception, exc:
129 129 if isinstance(exc, util.Abort):
130 130 self.ui.warn(_('error: %s hook failed: %s\n') %
131 131 (hname, exc.args[0] % exc.args[1:]))
132 132 else:
133 133 self.ui.warn(_('error: %s hook raised an exception: '
134 134 '%s\n') % (hname, exc))
135 135 if throw:
136 136 raise
137 137 self.ui.print_exc()
138 138 return True
139 139 if r:
140 140 if throw:
141 141 raise util.Abort(_('%s hook failed') % hname)
142 142 self.ui.warn(_('warning: %s hook failed\n') % hname)
143 143 return r
144 144
145 145 def runhook(name, cmd):
146 146 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
147 147 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
148 148 r = util.system(cmd, environ=env, cwd=self.root)
149 149 if r:
150 150 desc, r = util.explain_exit(r)
151 151 if throw:
152 152 raise util.Abort(_('%s hook %s') % (name, desc))
153 153 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
154 154 return r
155 155
156 156 r = False
157 157 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
158 158 if hname.split(".", 1)[0] == name and cmd]
159 159 hooks.sort()
160 160 for hname, cmd in hooks:
161 161 if cmd.startswith('python:'):
162 162 r = callhook(hname, cmd[7:].strip()) or r
163 163 else:
164 164 r = runhook(hname, cmd) or r
165 165 return r
166 166
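The tail of `hook()` above selects which `[hooks]` entries fire: a hook matches on the part of its name before any `.suffix`, empty commands are skipped, and matches run in sorted order. A minimal sketch of that selection rule (illustrative helper, not the repository method):

```python
# Filter (hname, cmd) pairs: keep entries whose name, up to the first
# '.', equals the hook being fired and whose command is non-empty,
# then run them in alphabetical order.
def select_hooks(items, name):
    hooks = [(hname, cmd) for hname, cmd in items
             if hname.split('.', 1)[0] == name and cmd]
    hooks.sort()
    return hooks
```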
167 167 tag_disallowed = ':\r\n'
168 168
169 169 def tag(self, name, node, local=False, message=None, user=None, date=None):
170 170 '''tag a revision with a symbolic name.
171 171
172 172 if local is True, the tag is stored in a per-repository file.
173 173 otherwise, it is stored in the .hgtags file, and a new
174 174 changeset is committed with the change.
175 175
176 176 keyword arguments:
177 177
178 178 local: whether to store tag in non-version-controlled file
179 179 (default False)
180 180
181 181 message: commit message to use if committing
182 182
183 183 user: name of user to use if committing
184 184
185 185 date: date tuple to use if committing'''
186 186
187 187 for c in self.tag_disallowed:
188 188 if c in name:
189 189 raise util.Abort(_('%r cannot be used in a tag name') % c)
190 190
191 191 self.hook('pretag', throw=True, node=node, tag=name, local=local)
192 192
193 193 if local:
194 194 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
195 195 self.hook('tag', node=node, tag=name, local=local)
196 196 return
197 197
198 198 for x in self.changes():
199 199 if '.hgtags' in x:
200 200 raise util.Abort(_('working copy of .hgtags is changed '
201 201 '(please commit .hgtags manually)'))
202 202
203 203 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
204 204 if self.dirstate.state('.hgtags') == '?':
205 205 self.add(['.hgtags'])
206 206
207 207 if not message:
208 208 message = _('Added tag %s for changeset %s') % (name, node)
209 209
210 210 self.commit(['.hgtags'], message, user, date)
211 211 self.hook('tag', node=node, tag=name, local=local)
212 212
213 213 def tags(self):
214 214 '''return a mapping of tag to node'''
215 215 if not self.tagscache:
216 216 self.tagscache = {}
217 217
218 218 def parsetag(line, context):
219 219 if not line:
220 220 return
221 221 s = l.split(" ", 1)
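(As written, `parsetag` takes `line` but splits the closure variable `l` from the enclosing loops; it works only because every caller happens to pass `l`. A sketch of the same parse step with the parameter used consistently, as an illustrative standalone helper:)

```python
# Split a ".hgtags" line into (node, tag); return None when the line
# is empty or malformed, matching the warn-and-skip behaviour above.
def parse_tag_line(line):
    if not line:
        return None
    s = line.split(" ", 1)
    if len(s) != 2:
        return None
    node, key = s
    return node, key.strip()
```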
222 222 if len(s) != 2:
223 223 self.ui.warn(_("%s: cannot parse entry\n") % context)
224 224 return
225 225 node, key = s
226 226 key = key.strip()
227 227 try:
228 228 bin_n = bin(node)
229 229 except TypeError:
230 230 self.ui.warn(_("%s: node '%s' is not well formed\n") %
231 231 (context, node))
232 232 return
233 233 if bin_n not in self.changelog.nodemap:
234 234 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
235 235 (context, key))
236 236 return
237 237 self.tagscache[key] = bin_n
238 238
239 239 # read the tags file from each head, ending with the tip,
240 240 # and add each tag found to the map, with "newer" ones
241 241 # taking precedence
242 242 heads = self.heads()
243 243 heads.reverse()
244 244 fl = self.file(".hgtags")
245 245 for node in heads:
246 246 change = self.changelog.read(node)
247 247 rev = self.changelog.rev(node)
248 248 fn, ff = self.manifest.find(change[0], '.hgtags')
249 249 if fn is None: continue
250 250 count = 0
251 251 for l in fl.read(fn).splitlines():
252 252 count += 1
253 253 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
254 254 (rev, short(node), count))
255 255 try:
256 256 f = self.opener("localtags")
257 257 count = 0
258 258 for l in f:
259 259 count += 1
260 260 parsetag(l, _("localtags, line %d") % count)
261 261 except IOError:
262 262 pass
263 263
264 264 self.tagscache['tip'] = self.changelog.tip()
265 265
266 266 return self.tagscache
267 267
268 268 def tagslist(self):
269 269 '''return a list of tags ordered by revision'''
270 270 l = []
271 271 for t, n in self.tags().items():
272 272 try:
273 273 r = self.changelog.rev(n)
274 274 except:
275 275 r = -2 # sort to the beginning of the list if unknown
276 276 l.append((r, t, n))
277 277 l.sort()
278 278 return [(t, n) for r, t, n in l]
279 279
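`tagslist` above is a decorate-sort-undecorate: each tag is paired with its revision, with the sentinel `-2` sorting unknown nodes to the front. The same idea in isolation (illustrative helper; `None` stands in for a failed `changelog.rev` lookup):

```python
# Order tags by revision; tags whose node cannot be resolved get the
# sentinel -2 so they sort to the beginning of the list.
def order_tags(tag_revs):
    l = [(r if r is not None else -2, t) for t, r in tag_revs.items()]
    l.sort()
    return [t for r, t in l]
```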
280 280 def nodetags(self, node):
281 281 '''return the tags associated with a node'''
282 282 if not self.nodetagscache:
283 283 self.nodetagscache = {}
284 284 for t, n in self.tags().items():
285 285 self.nodetagscache.setdefault(n, []).append(t)
286 286 return self.nodetagscache.get(node, [])
287 287
288 288 def lookup(self, key):
289 289 try:
290 290 return self.tags()[key]
291 291 except KeyError:
292 292 try:
293 293 return self.changelog.lookup(key)
294 294 except:
295 295 raise repo.RepoError(_("unknown revision '%s'") % key)
296 296
297 297 def dev(self):
298 298 return os.lstat(self.path).st_dev
299 299
300 300 def local(self):
301 301 return True
302 302
303 303 def join(self, f):
304 304 return os.path.join(self.path, f)
305 305
306 306 def wjoin(self, f):
307 307 return os.path.join(self.root, f)
308 308
309 309 def file(self, f):
310 310 if f[0] == '/':
311 311 f = f[1:]
312 312 return filelog.filelog(self.opener, f, self.revlogversion)
313 313
314 314 def changectx(self, changeid):
315 315 return context.changectx(self, changeid)
316 316
317 317 def filectx(self, path, changeid=None, fileid=None):
318 318 """changeid can be a changeset revision, node, or tag.
319 319 fileid can be a file revision or node."""
320 320 return context.filectx(self, path, changeid, fileid)
321 321
322 322 def getcwd(self):
323 323 return self.dirstate.getcwd()
324 324
325 325 def wfile(self, f, mode='r'):
326 326 return self.wopener(f, mode)
327 327
328 328 def wread(self, filename):
329 329 if self.encodepats == None:
330 330 l = []
331 331 for pat, cmd in self.ui.configitems("encode"):
332 332 mf = util.matcher(self.root, "", [pat], [], [])[1]
333 333 l.append((mf, cmd))
334 334 self.encodepats = l
335 335
336 336 data = self.wopener(filename, 'r').read()
337 337
338 338 for mf, cmd in self.encodepats:
339 339 if mf(filename):
340 340 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
341 341 data = util.filter(data, cmd)
342 342 break
343 343
344 344 return data
345 345
346 346 def wwrite(self, filename, data, fd=None):
347 347 if self.decodepats == None:
348 348 l = []
349 349 for pat, cmd in self.ui.configitems("decode"):
350 350 mf = util.matcher(self.root, "", [pat], [], [])[1]
351 351 l.append((mf, cmd))
352 352 self.decodepats = l
353 353
354 354 for mf, cmd in self.decodepats:
355 355 if mf(filename):
356 356 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
357 357 data = util.filter(data, cmd)
358 358 break
359 359
360 360 if fd:
361 361 return fd.write(data)
362 362 return self.wopener(filename, 'w').write(data)
363 363
364 364 def transaction(self):
365 365 tr = self.transhandle
366 366 if tr != None and tr.running():
367 367 return tr.nest()
368 368
369 369 # save dirstate for rollback
370 370 try:
371 371 ds = self.opener("dirstate").read()
372 372 except IOError:
373 373 ds = ""
374 374 self.opener("journal.dirstate", "w").write(ds)
375 375
376 376 tr = transaction.transaction(self.ui.warn, self.opener,
377 377 self.join("journal"),
378 378 aftertrans(self.path))
379 379 self.transhandle = tr
380 380 return tr
381 381
382 382 def recover(self):
383 383 l = self.lock()
384 384 if os.path.exists(self.join("journal")):
385 385 self.ui.status(_("rolling back interrupted transaction\n"))
386 386 transaction.rollback(self.opener, self.join("journal"))
387 387 self.reload()
388 388 return True
389 389 else:
390 390 self.ui.warn(_("no interrupted transaction available\n"))
391 391 return False
392 392
393 393 def rollback(self, wlock=None):
394 394 if not wlock:
395 395 wlock = self.wlock()
396 396 l = self.lock()
397 397 if os.path.exists(self.join("undo")):
398 398 self.ui.status(_("rolling back last transaction\n"))
399 399 transaction.rollback(self.opener, self.join("undo"))
400 400 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
401 401 self.reload()
402 402 self.wreload()
403 403 else:
404 404 self.ui.warn(_("no rollback information available\n"))
405 405
406 406 def wreload(self):
407 407 self.dirstate.read()
408 408
409 409 def reload(self):
410 410 self.changelog.load()
411 411 self.manifest.load()
412 412 self.tagscache = None
413 413 self.nodetagscache = None
414 414
415 415 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
416 416 desc=None):
417 417 try:
418 418 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
419 419 except lock.LockHeld, inst:
420 420 if not wait:
421 421 raise
422 422 self.ui.warn(_("waiting for lock on %s held by %s\n") %
423 423 (desc, inst.args[0]))
424 424 # default to 600 seconds timeout
425 425 l = lock.lock(self.join(lockname),
426 426 int(self.ui.config("ui", "timeout") or 600),
427 427 releasefn, desc=desc)
428 428 if acquirefn:
429 429 acquirefn()
430 430 return l
431 431
432 432 def lock(self, wait=1):
433 433 return self.do_lock("lock", wait, acquirefn=self.reload,
434 434 desc=_('repository %s') % self.origroot)
435 435
436 436 def wlock(self, wait=1):
437 437 return self.do_lock("wlock", wait, self.dirstate.write,
438 438 self.wreload,
439 439 desc=_('working directory of %s') % self.origroot)
440 440
441 441 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
442 442 "determine whether a new filenode is needed"
443 443 fp1 = manifest1.get(filename, nullid)
444 444 fp2 = manifest2.get(filename, nullid)
445 445
446 446 if fp2 != nullid:
447 447 # is one parent an ancestor of the other?
448 448 fpa = filelog.ancestor(fp1, fp2)
449 449 if fpa == fp1:
450 450 fp1, fp2 = fp2, nullid
451 451 elif fpa == fp2:
452 452 fp2 = nullid
453 453
454 454 # is the file unmodified from the parent? report existing entry
455 455 if fp2 == nullid and text == filelog.read(fp1):
456 456 return (fp1, None, None)
457 457
458 458 return (None, fp1, fp2)
459 459
460 460 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
461 461 orig_parent = self.dirstate.parents()[0] or nullid
462 462 p1 = p1 or self.dirstate.parents()[0] or nullid
463 463 p2 = p2 or self.dirstate.parents()[1] or nullid
464 464 c1 = self.changelog.read(p1)
465 465 c2 = self.changelog.read(p2)
466 466 m1 = self.manifest.read(c1[0])
467 467 mf1 = self.manifest.readflags(c1[0])
468 468 m2 = self.manifest.read(c2[0])
469 469 changed = []
470 470
471 471 if orig_parent == p1:
472 472 update_dirstate = 1
473 473 else:
474 474 update_dirstate = 0
475 475
476 476 if not wlock:
477 477 wlock = self.wlock()
478 478 l = self.lock()
479 479 tr = self.transaction()
480 480 mm = m1.copy()
481 481 mfm = mf1.copy()
482 482 linkrev = self.changelog.count()
483 483 for f in files:
484 484 try:
485 485 t = self.wread(f)
486 486 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
487 487 r = self.file(f)
488 488 mfm[f] = tm
489 489
490 490 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
491 491 if entry:
492 492 mm[f] = entry
493 493 continue
494 494
495 495 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
496 496 changed.append(f)
497 497 if update_dirstate:
498 498 self.dirstate.update([f], "n")
499 499 except IOError:
500 500 try:
501 501 del mm[f]
502 502 del mfm[f]
503 503 if update_dirstate:
504 504 self.dirstate.forget([f])
505 505 except:
506 506 # deleted from p2?
507 507 pass
508 508
509 509 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
510 510 user = user or self.ui.username()
511 511 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
512 512 tr.close()
513 513 if update_dirstate:
514 514 self.dirstate.setparents(n, nullid)
515 515
516 516 def commit(self, files=None, text="", user=None, date=None,
517 517 match=util.always, force=False, lock=None, wlock=None,
518 518 force_editor=False):
519 519 commit = []
520 520 remove = []
521 521 changed = []
522 522
523 523 if files:
524 524 for f in files:
525 525 s = self.dirstate.state(f)
526 526 if s in 'nmai':
527 527 commit.append(f)
528 528 elif s == 'r':
529 529 remove.append(f)
530 530 else:
531 531 self.ui.warn(_("%s not tracked!\n") % f)
532 532 else:
533 533 modified, added, removed, deleted, unknown = self.changes(match=match)
534 534 commit = modified + added
535 535 remove = removed
536 536
537 537 p1, p2 = self.dirstate.parents()
538 538 c1 = self.changelog.read(p1)
539 539 c2 = self.changelog.read(p2)
540 540 m1 = self.manifest.read(c1[0])
541 541 mf1 = self.manifest.readflags(c1[0])
542 542 m2 = self.manifest.read(c2[0])
543 543
544 544 if not commit and not remove and not force and p2 == nullid:
545 545 self.ui.status(_("nothing changed\n"))
546 546 return None
547 547
548 548 xp1 = hex(p1)
549 549 if p2 == nullid: xp2 = ''
550 550 else: xp2 = hex(p2)
551 551
552 552 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
553 553
554 554 if not wlock:
555 555 wlock = self.wlock()
556 556 if not lock:
557 557 lock = self.lock()
558 558 tr = self.transaction()
559 559
560 560 # check in files
561 561 new = {}
562 562 linkrev = self.changelog.count()
563 563 commit.sort()
564 564 for f in commit:
565 565 self.ui.note(f + "\n")
566 566 try:
567 567 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
568 568 t = self.wread(f)
569 569 except IOError:
570 570 self.ui.warn(_("trouble committing %s!\n") % f)
571 571 raise
572 572
573 573 r = self.file(f)
574 574
575 575 meta = {}
576 576 cp = self.dirstate.copied(f)
577 577 if cp:
578 578 meta["copy"] = cp
579 579 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
580 580 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
581 581 fp1, fp2 = nullid, nullid
582 582 else:
583 583 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
584 584 if entry:
585 585 new[f] = entry
586 586 continue
587 587
588 588 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
589 589 # remember what we've added so that we can later calculate
590 590 # the files to pull from a set of changesets
591 591 changed.append(f)
592 592
593 593 # update manifest
594 594 m1 = m1.copy()
595 595 m1.update(new)
596 596 for f in remove:
597 597 if f in m1:
598 598 del m1[f]
599 599 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
600 600 (new, remove))
601 601
602 602 # add changeset
603 603 new = new.keys()
604 604 new.sort()
605 605
606 606 user = user or self.ui.username()
607 607 if not text or force_editor:
608 608 edittext = []
609 609 if text:
610 610 edittext.append(text)
611 611 edittext.append("")
612 612 if p2 != nullid:
613 613 edittext.append("HG: branch merge")
614 614 edittext.extend(["HG: changed %s" % f for f in changed])
615 615 edittext.extend(["HG: removed %s" % f for f in remove])
616 616 if not changed and not remove:
617 617 edittext.append("HG: no files changed")
618 618 edittext.append("")
619 619 # run editor in the repository root
620 620 olddir = os.getcwd()
621 621 os.chdir(self.root)
622 622 text = self.ui.edit("\n".join(edittext), user)
623 623 os.chdir(olddir)
624 624
625 625 lines = [line.rstrip() for line in text.rstrip().splitlines()]
626 626 while lines and not lines[0]:
627 627 del lines[0]
628 628 if not lines:
629 629 return None
630 630 text = '\n'.join(lines)
631 631 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
632 632 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
633 633 parent2=xp2)
634 634 tr.close()
635 635
636 636 self.dirstate.setparents(n)
637 637 self.dirstate.update(new, "n")
638 638 self.dirstate.forget(remove)
639 639
640 640 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
641 641 return n
642 642
643 643 def walk(self, node=None, files=[], match=util.always, badmatch=None):
644 644 if node:
645 645 fdict = dict.fromkeys(files)
646 646 for fn in self.manifest.read(self.changelog.read(node)[0]):
647 647 fdict.pop(fn, None)
648 648 if match(fn):
649 649 yield 'm', fn
650 650 for fn in fdict:
651 651 if badmatch and badmatch(fn):
652 652 if match(fn):
653 653 yield 'b', fn
654 654 else:
655 655 self.ui.warn(_('%s: No such file in rev %s\n') % (
656 656 util.pathto(self.getcwd(), fn), short(node)))
657 657 else:
658 658 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
659 659 yield src, fn
660 660
661 661 def changes(self, node1=None, node2=None, files=[], match=util.always,
662 662 wlock=None, show_ignored=None):
663 663 """return changes between two nodes or node and working directory
664 664
665 665 If node1 is None, use the first dirstate parent instead.
666 666 If node2 is None, compare node1 with working directory.
667 667 """
668 668
669 669 def fcmp(fn, mf):
670 670 t1 = self.wread(fn)
671 671 t2 = self.file(fn).read(mf.get(fn, nullid))
672 672 return cmp(t1, t2)
673 673
674 674 def mfmatches(node):
675 675 change = self.changelog.read(node)
676 676 mf = dict(self.manifest.read(change[0]))
677 677 for fn in mf.keys():
678 678 if not match(fn):
                    del mf[fn]
            return mf

        modified, added, removed, deleted, unknown, ignored = [],[],[],[],[],[]
        compareworking = False
        if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
            compareworking = True

        if not compareworking:
            # read the manifest from node1 before the manifest from node2,
            # so that we'll hit the manifest cache if we're going through
            # all the revisions in parent->child order.
            mf1 = mfmatches(node1)

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            lookup, modified, added, removed, deleted, unknown, ignored = (
                self.dirstate.changes(files, match, show_ignored))

            # are we comparing working dir against its parent?
            if compareworking:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        elif wlock is not None:
                            self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            deleted, unknown, ignored = [], [], []
            mf2 = mfmatches(node2)

        if not compareworking:
            # flush lists from dirstate before comparing manifests
            modified, added = [], []

            # make sure to sort the files so we talk to the disk in a
            # reasonable order
            mf2keys = mf2.keys()
            mf2keys.sort()
            for fn in mf2keys:
                if fn in mf1:
                    if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
                        modified.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown, ignored:
            l.sort()
        if show_ignored is None:
            return (modified, added, removed, deleted, unknown)
        else:
            return (modified, added, removed, deleted, unknown, ignored)

    def add(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        mf = self.manifest.readflags(mn)
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn(_("%s not removed!\n") % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), mf[f])
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]
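The `(-rev, node)` decoration in `heads` above is the decorate-sort-undecorate idiom: negating the revision makes an ascending tuple sort yield descending revision order. A standalone sketch with made-up revision numbers (the `revs` map and `sort_heads` helper are illustrative, not part of this module):

```python
# Hypothetical revision numbers for three head nodes.
revs = {'a': 5, 'b': 12, 'c': 7}

def sort_heads(heads, rev):
    # Decorate with the negated revision so an ascending tuple sort
    # yields descending revision order, then strip the decoration.
    decorated = [(-rev(h), h) for h in heads]
    decorated.sort()
    return [n for (r, n) in decorated]

print(sort_heads(['a', 'b', 'c'], revs.get))  # -> ['b', 'c', 'a']
```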

    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # Callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (i.e. checkout of a specific
    # branch).
    #
    # Passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = list(heads)
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head.  The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r
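The inner loop of `between` above walks first parents from `top` toward `bottom`, recording nodes at exponentially growing offsets (1, 2, 4, 8, ...); this spacing is what lets `findincoming` narrow a branch with a logarithmic number of round trips. A minimal sketch over a toy linear history where node `n`'s first parent is simply `n - 1` (a hypothetical stand-in for the changelog):

```python
def between(top, bottom):
    # Walk first parents from top down to bottom (exclusive),
    # collecting nodes at exponentially spaced offsets 1, 2, 4, 8, ...
    n, l, i, f = top, [], 0, 1
    while n != bottom:
        p = n - 1          # first parent in this toy linear history
        if i == f:         # record the node each time i hits a power of two
            l.append(n)
            f = f * 2
        n = p
        i += 1
    return l

# Walking from node 10 down to node 0 records the nodes at
# offsets 1, 2, 4 and 8 from the top.
print(between(10, 0))  # -> [9, 8, 6, 2]
```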

    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore, base will be updated to include the nodes that exist
        in both self and remote but whose children do not exist in both.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote,
        see outgoing)
        """
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        req = dict.fromkeys(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid:  # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m:  # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n)  # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1  # earliest unknown
                            for p in n[2:4]:
                                if p in m:
                                    base[p] = 1  # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f[:4]))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base is None:
            base = {}
        self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads[p1] = True
                if p2 in heads:
                    updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset
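`findoutgoing` above prunes every ancestor of `base` from the node set and then keeps only the roots of what remains (nodes with no surviving parent). A self-contained sketch of the same pruning over a toy linear history (the integer nodes and `parents` map are hypothetical; `0` plays the role of nullid):

```python
# Toy linear history 1..5; each node's parents are (n - 1, 0),
# with 0 standing in for nullid.
parents = {5: (4, 0), 4: (3, 0), 3: (2, 0), 2: (1, 0), 1: (0, 0)}

def findoutgoing(base):
    remain = dict.fromkeys(parents)
    # prune everything the remote has: base plus all its ancestors
    remove = list(base)
    while remove:
        n = remove.pop()
        if n in remain:
            del remain[n]
            for p in parents[n]:
                if p:
                    remove.append(p)
    # the roots of what remains: nodes with no surviving parent
    return sorted(n for n in remain
                  if not any(p in remain for p in parents[n] if p))

# The remote has up to node 3, so only node 4 is a root of the
# missing subset (node 5 hangs off it).
print(findoutgoing([3]))  # -> [4]
```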

    def pull(self, remote, heads=None, force=False):
        l = self.lock()

        fetch = self.findincoming(remote, force=force)
        if fetch == [nullid]:
            self.ui.status(_("requesting all changes\n"))

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 0

        if heads is None:
            cg = remote.changegroup(fetch, 'pull')
        else:
            cg = remote.changegroupsubset(fetch, heads, 'pull')
        return self.addchangegroup(cg, 'pull')

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync?"
                             " use push -f to force)\n"))
            return None, 1

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # FIXME we don't properly detect creation of new heads
            # in the push -r case; assume the user knows what he's doing
            if not revs and len(remote_heads) < len(heads) \
               and remote_heads != [nullid]:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return None, 1

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads

    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            return remote.addchangegroup(cg, 'push')
        return ret[1]

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push.  once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force:
                remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupsubset(self, bases, heads, source):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to.
        """

        self.hook('preoutgoing', throw=True, source=source)

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed
            # to know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1]  # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)
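`prune_parents` above relies on the fact that if the recipient has a node, it necessarily has all of that node's ancestors, so those can be dropped from the missing set too. A small sketch of the same idea over a toy DAG (the integer nodes and `parents` map are illustrative; `0` plays the role of nullid):

```python
# Toy DAG: child -> (p1, p2), with 0 standing in for nullid.
parents = {4: (3, 0), 3: (2, 0), 2: (1, 0), 1: (0, 0)}

def prune_parents(hasset, msngset):
    # Every ancestor of a node the recipient has is also had, so
    # walk the ancestry and drop all of it from the missing set.
    stack = list(hasset)
    while stack:
        n = stack.pop()
        for p in parents.get(n, ()):
            if p and p not in hasset:
                hasset[p] = 1
                stack.append(p)
    for n in hasset:
        msngset.pop(n, None)

has, msng = {3: 1}, {1: 1, 2: 1, 3: 1, 4: 1}
prune_parents(has, msng)
# Nodes 1..3 are ancestors of (or equal to) a node the recipient
# has, so only node 4 is still missing.
print(sorted(msng))  # -> [4]
```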

        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup.  The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # that the first manifest referencing it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file; let's
        # remove all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generator function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if fname in msng_filenode_set:
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if msng_filenode_lst:
                    yield changegroup.genchunk(fname)
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        yield chnk
                if fname in msng_filenode_set:
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield changegroup.closechunk()

        if msng_cl_lst:
            self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return util.chunkbuffer(gengroup())
1519 1519
1520 1520 def changegroup(self, basenodes, source):
1521 1521 """Generate a changegroup of all nodes that we have that a recipient
1522 1522 doesn't.
1523 1523
1524 1524 This is much easier than the previous function as we can assume that
1525 1525 the recipient has any changenode we aren't sending them."""
1526 1526
1527 1527 self.hook('preoutgoing', throw=True, source=source)
1528 1528
1529 1529 cl = self.changelog
1530 1530 nodes = cl.nodesbetween(basenodes, None)[0]
1531 1531 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1532 1532
1533 1533 def identity(x):
1534 1534 return x
1535 1535
1536 1536 def gennodelst(revlog):
1537 1537 for r in xrange(0, revlog.count()):
1538 1538 n = revlog.node(r)
1539 1539 if revlog.linkrev(n) in revset:
1540 1540 yield n
1541 1541
1542 1542 def changed_file_collector(changedfileset):
1543 1543 def collect_changed_files(clnode):
1544 1544 c = cl.read(clnode)
1545 1545 for fname in c[3]:
1546 1546 changedfileset[fname] = 1
1547 1547 return collect_changed_files
1548 1548
1549 1549 def lookuprevlink_func(revlog):
1550 1550 def lookuprevlink(n):
1551 1551 return cl.node(revlog.linkrev(n))
1552 1552 return lookuprevlink
1553 1553
1554 1554 def gengroup():
1555 1555 # construct a list of all changed files
1556 1556 changedfiles = {}
1557 1557
1558 1558 for chnk in cl.group(nodes, identity,
1559 1559 changed_file_collector(changedfiles)):
1560 1560 yield chnk
1561 1561 changedfiles = changedfiles.keys()
1562 1562 changedfiles.sort()
1563 1563
1564 1564 mnfst = self.manifest
1565 1565 nodeiter = gennodelst(mnfst)
1566 1566 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1567 1567 yield chnk
1568 1568
1569 1569 for fname in changedfiles:
1570 1570 filerevlog = self.file(fname)
1571 1571 nodeiter = gennodelst(filerevlog)
1572 1572 nodeiter = list(nodeiter)
1573 1573 if nodeiter:
1574 1574 yield changegroup.genchunk(fname)
1575 1575 lookup = lookuprevlink_func(filerevlog)
1576 1576 for chnk in filerevlog.group(nodeiter, lookup):
1577 1577 yield chnk
1578 1578
1579 1579 yield changegroup.closechunk()
1580 1580
1581 1581 if nodes:
1582 1582 self.hook('outgoing', node=hex(nodes[0]), source=source)
1583 1583
1584 1584 return util.chunkbuffer(gengroup())
1585 1585
1586 1586 def addchangegroup(self, source, srctype):
1587 1587 """add changegroup to repo.
1588 1588 returns number of heads modified or added + 1."""
1589 1589
1590 1590 def csmap(x):
1591 1591 self.ui.debug(_("add changeset %s\n") % short(x))
1592 1592 return cl.count()
1593 1593
1594 1594 def revmap(x):
1595 1595 return cl.rev(x)
1596 1596
1597 1597 if not source:
1598 1598 return 0
1599 1599
1600 1600 self.hook('prechangegroup', throw=True, source=srctype)
1601 1601
1602 1602 changesets = files = revisions = 0
1603 1603
1604 1604 tr = self.transaction()
1605 1605
1606 1606         # write changelog data to temp files so concurrent readers will not
1607 1607         # see an inconsistent view
1608 1608 cl = None
1609 1609 try:
1610 1610 cl = appendfile.appendchangelog(self.opener, self.changelog.version)
1611 1611
1612 1612 oldheads = len(cl.heads())
1613 1613
1614 1614 # pull off the changeset group
1615 1615 self.ui.status(_("adding changesets\n"))
1616 1616 cor = cl.count() - 1
1617 1617 chunkiter = changegroup.chunkiter(source)
1618 1618 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1619 1619 raise util.Abort(_("received changelog group is empty"))
1620 1620 cnr = cl.count() - 1
1621 1621 changesets = cnr - cor
1622 1622
1623 1623 # pull off the manifest group
1624 1624 self.ui.status(_("adding manifests\n"))
1625 1625 chunkiter = changegroup.chunkiter(source)
1626 1626 # no need to check for empty manifest group here:
1627 1627 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1628 1628 # no new manifest will be created and the manifest group will
1629 1629 # be empty during the pull
1630 1630 self.manifest.addgroup(chunkiter, revmap, tr)
1631 1631
1632 1632 # process the files
1633 1633 self.ui.status(_("adding file changes\n"))
1634 1634 while 1:
1635 1635 f = changegroup.getchunk(source)
1636 1636 if not f:
1637 1637 break
1638 1638 self.ui.debug(_("adding %s revisions\n") % f)
1639 1639 fl = self.file(f)
1640 1640 o = fl.count()
1641 1641 chunkiter = changegroup.chunkiter(source)
1642 1642 if fl.addgroup(chunkiter, revmap, tr) is None:
1643 1643 raise util.Abort(_("received file revlog group is empty"))
1644 1644 revisions += fl.count() - o
1645 1645 files += 1
1646 1646
1647 1647 cl.writedata()
1648 1648 finally:
1649 1649 if cl:
1650 1650 cl.cleanup()
1651 1651
1652 1652 # make changelog see real files again
1653 1653 self.changelog = changelog.changelog(self.opener, self.changelog.version)
1654 1654 self.changelog.checkinlinesize(tr)
1655 1655
1656 1656 newheads = len(self.changelog.heads())
1657 1657 heads = ""
1658 1658 if oldheads and newheads != oldheads:
1659 1659 heads = _(" (%+d heads)") % (newheads - oldheads)
1660 1660
1661 1661 self.ui.status(_("added %d changesets"
1662 1662 " with %d changes to %d files%s\n")
1663 1663 % (changesets, revisions, files, heads))
1664 1664
1665 1665 if changesets > 0:
1666 1666 self.hook('pretxnchangegroup', throw=True,
1667 1667 node=hex(self.changelog.node(cor+1)), source=srctype)
1668 1668
1669 1669 tr.close()
1670 1670
1671 1671 if changesets > 0:
1672 1672 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1673 1673 source=srctype)
1674 1674
1675 1675 for i in range(cor + 1, cnr + 1):
1676 1676 self.hook("incoming", node=hex(self.changelog.node(i)),
1677 1677 source=srctype)
1678 1678
1679 1679 return newheads - oldheads + 1
1680 1680
1681 1681 def update(self, node, allow=False, force=False, choose=None,
1682 1682 moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
1683 1683 pl = self.dirstate.parents()
1684 1684 if not force and pl[1] != nullid:
1685 1685 raise util.Abort(_("outstanding uncommitted merges"))
1686 1686
1687 1687 err = False
1688 1688
1689 1689 p1, p2 = pl[0], node
1690 1690 pa = self.changelog.ancestor(p1, p2)
1691 1691 m1n = self.changelog.read(p1)[0]
1692 1692 m2n = self.changelog.read(p2)[0]
1693 1693 man = self.manifest.ancestor(m1n, m2n)
1694 1694 m1 = self.manifest.read(m1n)
1695 1695 mf1 = self.manifest.readflags(m1n)
1696 1696 m2 = self.manifest.read(m2n).copy()
1697 1697 mf2 = self.manifest.readflags(m2n)
1698 1698 ma = self.manifest.read(man)
1699 1699 mfa = self.manifest.readflags(man)
1700 1700
1701 1701 modified, added, removed, deleted, unknown = self.changes()
1702 1702
1703 1703 # is this a jump, or a merge? i.e. is there a linear path
1704 1704 # from p1 to p2?
1705 1705 linear_path = (pa == p1 or pa == p2)
1706 1706
1707 1707 if allow and linear_path:
1708 1708 raise util.Abort(_("there is nothing to merge, just use "
1709 1709 "'hg update' or look at 'hg heads'"))
1710 1710 if allow and not forcemerge:
1711 1711 if modified or added or removed:
1712 1712 raise util.Abort(_("outstanding uncommitted changes"))
1713 1713
1714 1714 if not forcemerge and not force:
1715 1715 for f in unknown:
1716 1716 if f in m2:
1717 1717 t1 = self.wread(f)
1718 1718 t2 = self.file(f).read(m2[f])
1719 1719 if cmp(t1, t2) != 0:
1720 1720 raise util.Abort(_("'%s' already exists in the working"
1721 1721 " dir and differs from remote") % f)
1722 1722
1723 1723 # resolve the manifest to determine which files
1724 1724 # we care about merging
1725 1725 self.ui.note(_("resolving manifests\n"))
1726 1726 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1727 1727 (force, allow, moddirstate, linear_path))
1728 1728 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1729 1729 (short(man), short(m1n), short(m2n)))
1730 1730
1731 1731 merge = {}
1732 1732 get = {}
1733 1733 remove = []
1734 1734
1735 1735 # construct a working dir manifest
1736 1736 mw = m1.copy()
1737 1737 mfw = mf1.copy()
1738 1738 umap = dict.fromkeys(unknown)
1739 1739
1740 1740 for f in added + modified + unknown:
1741 1741 mw[f] = ""
1742 1742 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1743 1743
1744 1744 if moddirstate and not wlock:
1745 1745 wlock = self.wlock()
1746 1746
1747 1747 for f in deleted + removed:
1748 1748 if f in mw:
1749 1749 del mw[f]
1750 1750
1751 1751 # If we're jumping between revisions (as opposed to merging),
1752 1752 # and if neither the working directory nor the target rev has
1753 1753 # the file, then we need to remove it from the dirstate, to
1754 1754 # prevent the dirstate from listing the file when it is no
1755 1755 # longer in the manifest.
1756 1756 if moddirstate and linear_path and f not in m2:
1757 1757 self.dirstate.forget((f,))
1758 1758
1759 1759 # Compare manifests
1760 1760 for f, n in mw.iteritems():
1761 1761 if choose and not choose(f):
1762 1762 continue
1763 1763 if f in m2:
1764 1764 s = 0
1765 1765
1766 1766 # is the wfile new since m1, and match m2?
1767 1767 if f not in m1:
1768 1768 t1 = self.wread(f)
1769 1769 t2 = self.file(f).read(m2[f])
1770 1770 if cmp(t1, t2) == 0:
1771 1771 n = m2[f]
1772 1772 del t1, t2
1773 1773
1774 1774 # are files different?
1775 1775 if n != m2[f]:
1776 1776 a = ma.get(f, nullid)
1777 1777 # are both different from the ancestor?
1778 1778 if n != a and m2[f] != a:
1779 1779 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1780 1780 # merge executable bits
1781 1781 # "if we changed or they changed, change in merge"
1782 1782 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1783 1783 mode = ((a^b) | (a^c)) ^ a
1784 1784 merge[f] = (m1.get(f, nullid), m2[f], mode)
1785 1785 s = 1
1786 1786 # are we clobbering?
1787 1787 # is remote's version newer?
1788 1788 # or are we going back in time?
1789 1789 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1790 1790 self.ui.debug(_(" remote %s is newer, get\n") % f)
1791 1791 get[f] = m2[f]
1792 1792 s = 1
1793 1793 elif f in umap or f in added:
1794 1794 # this unknown file is the same as the checkout
1795 1795 # we need to reset the dirstate if the file was added
1796 1796 get[f] = m2[f]
1797 1797
1798 1798 if not s and mfw[f] != mf2[f]:
1799 1799 if force:
1800 1800 self.ui.debug(_(" updating permissions for %s\n") % f)
1801 1801 util.set_exec(self.wjoin(f), mf2[f])
1802 1802 else:
1803 1803 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1804 1804 mode = ((a^b) | (a^c)) ^ a
1805 1805 if mode != b:
1806 1806 self.ui.debug(_(" updating permissions for %s\n")
1807 1807 % f)
1808 1808 util.set_exec(self.wjoin(f), mode)
1809 1809 del m2[f]
1810 1810 elif f in ma:
1811 1811 if n != ma[f]:
1812 1812 r = _("d")
1813 1813 if not force and (linear_path or allow):
1814 1814 r = self.ui.prompt(
1815 1815 (_(" local changed %s which remote deleted\n") % f) +
1816 1816 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1817 1817 if r == _("d"):
1818 1818 remove.append(f)
1819 1819 else:
1820 1820 self.ui.debug(_("other deleted %s\n") % f)
1821 1821 remove.append(f) # other deleted it
1822 1822 else:
1823 1823 # file is created on branch or in working directory
1824 1824 if force and f not in umap:
1825 1825 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1826 1826 remove.append(f)
1827 1827 elif n == m1.get(f, nullid): # same as parent
1828 1828 if p2 == pa: # going backwards?
1829 1829 self.ui.debug(_("remote deleted %s\n") % f)
1830 1830 remove.append(f)
1831 1831 else:
1832 1832 self.ui.debug(_("local modified %s, keeping\n") % f)
1833 1833 else:
1834 1834 self.ui.debug(_("working dir created %s, keeping\n") % f)
1835 1835
1836 1836 for f, n in m2.iteritems():
1837 1837 if choose and not choose(f):
1838 1838 continue
1839 1839 if f[0] == "/":
1840 1840 continue
1841 1841 if f in ma and n != ma[f]:
1842 1842 r = _("k")
1843 1843 if not force and (linear_path or allow):
1844 1844 r = self.ui.prompt(
1845 1845 (_("remote changed %s which local deleted\n") % f) +
1846 1846 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1847 1847 if r == _("k"):
1848 1848 get[f] = n
1849 1849 elif f not in ma:
1850 1850 self.ui.debug(_("remote created %s\n") % f)
1851 1851 get[f] = n
1852 1852 else:
1853 1853 if force or p2 == pa: # going backwards?
1854 1854 self.ui.debug(_("local deleted %s, recreating\n") % f)
1855 1855 get[f] = n
1856 1856 else:
1857 1857 self.ui.debug(_("local deleted %s\n") % f)
1858 1858
1859 1859 del mw, m1, m2, ma
1860 1860
1861 1861 if force:
1862 1862 for f in merge:
1863 1863 get[f] = merge[f][1]
1864 1864 merge = {}
1865 1865
1866 1866 if linear_path or force:
1867 1867 # we don't need to do any magic, just jump to the new rev
1868 1868 branch_merge = False
1869 1869 p1, p2 = p2, nullid
1870 1870 else:
1871 1871 if not allow:
1872 1872 self.ui.status(_("this update spans a branch"
1873 1873 " affecting the following files:\n"))
1874 1874 fl = merge.keys() + get.keys()
1875 1875 fl.sort()
1876 1876 for f in fl:
1877 1877 cf = ""
1878 1878 if f in merge:
1879 1879 cf = _(" (resolve)")
1880 1880 self.ui.status(" %s%s\n" % (f, cf))
1881 1881 self.ui.warn(_("aborting update spanning branches!\n"))
1882 1882 self.ui.status(_("(use 'hg merge' to merge across branches"
1883 1883 " or 'hg update -C' to lose changes)\n"))
1884 1884 return 1
1885 1885 branch_merge = True
1886 1886
1887 1887 xp1 = hex(p1)
1888 1888 xp2 = hex(p2)
1889 1889 if p2 == nullid: xxp2 = ''
1890 1890 else: xxp2 = xp2
1891 1891
1892 1892 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1893 1893
1894 1894 # get the files we don't need to change
1895 1895 files = get.keys()
1896 1896 files.sort()
1897 1897 for f in files:
1898 1898 if f[0] == "/":
1899 1899 continue
1900 1900 self.ui.note(_("getting %s\n") % f)
1901 1901 t = self.file(f).read(get[f])
1902 1902 self.wwrite(f, t)
1903 1903 util.set_exec(self.wjoin(f), mf2[f])
1904 1904 if moddirstate:
1905 1905 if branch_merge:
1906 1906 self.dirstate.update([f], 'n', st_mtime=-1)
1907 1907 else:
1908 1908 self.dirstate.update([f], 'n')
1909 1909
1910 1910 # merge the tricky bits
1911 1911 failedmerge = []
1912 1912 files = merge.keys()
1913 1913 files.sort()
1914 1914 for f in files:
1915 1915 self.ui.status(_("merging %s\n") % f)
1916 1916 my, other, flag = merge[f]
1917 1917 ret = self.merge3(f, my, other, xp1, xp2)
1918 1918 if ret:
1919 1919 err = True
1920 1920 failedmerge.append(f)
1921 1921 util.set_exec(self.wjoin(f), flag)
1922 1922 if moddirstate:
1923 1923 if branch_merge:
1924 1924 # We've done a branch merge, mark this file as merged
1925 1925 # so that we properly record the merger later
1926 1926 self.dirstate.update([f], 'm')
1927 1927 else:
1928 1928 # We've update-merged a locally modified file, so
1929 1929 # we set the dirstate to emulate a normal checkout
1930 1930 # of that file some time in the past. Thus our
1931 1931 # merge will appear as a normal local file
1932 1932 # modification.
1933 1933 f_len = len(self.file(f).read(other))
1934 1934 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1935 1935
1936 1936 remove.sort()
1937 1937 for f in remove:
1938 1938 self.ui.note(_("removing %s\n") % f)
1939 1939 util.audit_path(f)
1940 1940 try:
1941 1941 util.unlink(self.wjoin(f))
1942 1942 except OSError, inst:
1943 1943 if inst.errno != errno.ENOENT:
1944 1944 self.ui.warn(_("update failed to remove %s: %s!\n") %
1945 1945 (f, inst.strerror))
1946 1946 if moddirstate:
1947 1947 if branch_merge:
1948 1948 self.dirstate.update(remove, 'r')
1949 1949 else:
1950 1950 self.dirstate.forget(remove)
1951 1951
1952 1952 if moddirstate:
1953 1953 self.dirstate.setparents(p1, p2)
1954 1954
1955 1955 if show_stats:
1956 1956 stats = ((len(get), _("updated")),
1957 1957 (len(merge) - len(failedmerge), _("merged")),
1958 1958 (len(remove), _("removed")),
1959 1959 (len(failedmerge), _("unresolved")))
1960 1960 note = ", ".join([_("%d files %s") % s for s in stats])
1961 1961 self.ui.status("%s\n" % note)
1962 1962 if moddirstate:
1963 1963 if branch_merge:
1964 1964 if failedmerge:
1965 1965 self.ui.status(_("There are unresolved merges,"
1966 1966 " you can redo the full merge using:\n"
1967 1967 " hg update -C %s\n"
1968 1968 " hg merge %s\n"
1969 1969 % (self.changelog.rev(p1),
1970 1970 self.changelog.rev(p2))))
1971 1971 else:
1972 1972 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1973 1973 elif failedmerge:
1974 1974 self.ui.status(_("There are unresolved merges with"
1975 1975 " locally modified files.\n"))
1976 1976
1977 1977 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1978 1978 return err
1979 1979
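The permission handling in `update` above relies on the bitwise identity `((a^b) | (a^c)) ^ a` to implement "if we changed or they changed, change in merge" for the ancestor (`a`), working-dir (`b`) and remote (`c`) executable flags. A standalone check of that identity (illustrative helper, not Mercurial API):

```python
def mergeflag(a: int, b: int, c: int) -> int:
    # pick the side that changed relative to the ancestor:
    # if b differs from a take b, otherwise take c
    return ((a ^ b) | (a ^ c)) ^ a

# exhaustively verify the rule over all 0/1 flag combinations
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert mergeflag(a, b, c) == (b if b != a else c)
```

Note that for single-bit flags the two sides can never disagree with each other while both differing from the ancestor, so the formula is unambiguous.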
1980 1980 def merge3(self, fn, my, other, p1, p2):
1981 1981 """perform a 3-way merge in the working directory"""
1982 1982
1983 1983 def temp(prefix, node):
1984 1984 pre = "%s~%s." % (os.path.basename(fn), prefix)
1985 1985 (fd, name) = tempfile.mkstemp(prefix=pre)
1986 1986 f = os.fdopen(fd, "wb")
1987 1987 self.wwrite(fn, fl.read(node), f)
1988 1988 f.close()
1989 1989 return name
1990 1990
1991 1991 fl = self.file(fn)
1992 1992 base = fl.ancestor(my, other)
1993 1993 a = self.wjoin(fn)
1994 1994 b = temp("base", base)
1995 1995 c = temp("other", other)
1996 1996
1997 1997 self.ui.note(_("resolving %s\n") % fn)
1998 1998 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1999 1999 (fn, short(my), short(other), short(base)))
2000 2000
2001 2001 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2002 2002 or "hgmerge")
2003 2003 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2004 2004 environ={'HG_FILE': fn,
2005 2005 'HG_MY_NODE': p1,
2006 2006 'HG_OTHER_NODE': p2,
2007 2007 'HG_FILE_MY_NODE': hex(my),
2008 2008 'HG_FILE_OTHER_NODE': hex(other),
2009 2009 'HG_FILE_BASE_NODE': hex(base)})
2010 2010 if r:
2011 2011 self.ui.warn(_("merging %s failed!\n") % fn)
2012 2012
2013 2013 os.unlink(b)
2014 2014 os.unlink(c)
2015 2015 return r
2016 2016
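`merge3` resolves the external merge command through a fallback chain: the `HGMERGE` environment variable, then the `[ui] merge` configuration entry, then the literal `hgmerge`. The same chain in isolation (the helper name and its dict parameter are hypothetical, chosen so the lookup is testable without touching the real environment):

```python
def pick_merge_tool(environ: dict, config_value=None, default="hgmerge"):
    # first non-empty value wins: $HGMERGE, then the [ui] merge
    # config entry, then the hardcoded default
    return environ.get("HGMERGE") or config_value or default
```

So `pick_merge_tool({"HGMERGE": "meld"}, "kdiff3")` returns `"meld"`, while `pick_merge_tool({}, "kdiff3")` returns `"kdiff3"`.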
2017 2017 def verify(self):
2018 2018 filelinkrevs = {}
2019 2019 filenodes = {}
2020 2020 changesets = revisions = files = 0
2021 2021 errors = [0]
2022 2022 warnings = [0]
2023 2023 neededmanifests = {}
2024 2024
2025 2025 def err(msg):
2026 2026 self.ui.warn(msg + "\n")
2027 2027 errors[0] += 1
2028 2028
2029 2029 def warn(msg):
2030 2030 self.ui.warn(msg + "\n")
2031 2031 warnings[0] += 1
2032 2032
2033 2033 def checksize(obj, name):
2034 2034 d = obj.checksize()
2035 2035 if d[0]:
2036 2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2037 2037 if d[1]:
2038 2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2039 2039
2040 2040 def checkversion(obj, name):
2041 2041 if obj.version != revlog.REVLOGV0:
2042 2042 if not revlogv1:
2043 2043 warn(_("warning: `%s' uses revlog format 1") % name)
2044 2044 elif revlogv1:
2045 2045 warn(_("warning: `%s' uses revlog format 0") % name)
2046 2046
2047 2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2048 2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2049 2049 self.ui.status(_("repository uses revlog format %d\n") %
2050 2050 (revlogv1 and 1 or 0))
2051 2051
2052 2052 seen = {}
2053 2053 self.ui.status(_("checking changesets\n"))
2054 2054 checksize(self.changelog, "changelog")
2055 2055
2056 2056 for i in range(self.changelog.count()):
2057 2057 changesets += 1
2058 2058 n = self.changelog.node(i)
2059 2059 l = self.changelog.linkrev(n)
2060 2060 if l != i:
2061 2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2062 2062 if n in seen:
2063 2063 err(_("duplicate changeset at revision %d") % i)
2064 2064 seen[n] = 1
2065 2065
2066 2066 for p in self.changelog.parents(n):
2067 2067 if p not in self.changelog.nodemap:
2068 2068 err(_("changeset %s has unknown parent %s") %
2069 2069 (short(n), short(p)))
2070 2070 try:
2071 2071 changes = self.changelog.read(n)
2072 2072 except KeyboardInterrupt:
2073 2073 self.ui.warn(_("interrupted"))
2074 2074 raise
2075 2075 except Exception, inst:
2076 2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2077 2077 continue
2078 2078
2079 2079 neededmanifests[changes[0]] = n
2080 2080
2081 2081 for f in changes[3]:
2082 2082 filelinkrevs.setdefault(f, []).append(i)
2083 2083
2084 2084 seen = {}
2085 2085 self.ui.status(_("checking manifests\n"))
2086 2086 checkversion(self.manifest, "manifest")
2087 2087 checksize(self.manifest, "manifest")
2088 2088
2089 2089 for i in range(self.manifest.count()):
2090 2090 n = self.manifest.node(i)
2091 2091 l = self.manifest.linkrev(n)
2092 2092
2093 2093 if l < 0 or l >= self.changelog.count():
2094 2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2095 2095
2096 2096 if n in neededmanifests:
2097 2097 del neededmanifests[n]
2098 2098
2099 2099 if n in seen:
2100 2100 err(_("duplicate manifest at revision %d") % i)
2101 2101
2102 2102 seen[n] = 1
2103 2103
2104 2104 for p in self.manifest.parents(n):
2105 2105 if p not in self.manifest.nodemap:
2106 2106 err(_("manifest %s has unknown parent %s") %
2107 2107 (short(n), short(p)))
2108 2108
2109 2109 try:
2110 2110 delta = mdiff.patchtext(self.manifest.delta(n))
2111 2111 except KeyboardInterrupt:
2112 2112 self.ui.warn(_("interrupted"))
2113 2113 raise
2114 2114 except Exception, inst:
2115 2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2116 2116 continue
2117 2117
2118 2118 try:
2119 2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2120 2120 for f, fn in ff:
2121 2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2122 2122 except (ValueError, TypeError), inst:
2123 2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2124 2124
2125 2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2126 2126
2127 2127 for m, c in neededmanifests.items():
2128 2128 err(_("Changeset %s refers to unknown manifest %s") %
2129 2129 (short(m), short(c)))
2130 2130 del neededmanifests
2131 2131
2132 2132 for f in filenodes:
2133 2133 if f not in filelinkrevs:
2134 2134 err(_("file %s in manifest but not in changesets") % f)
2135 2135
2136 2136 for f in filelinkrevs:
2137 2137 if f not in filenodes:
2138 2138 err(_("file %s in changeset but not in manifest") % f)
2139 2139
2140 2140 self.ui.status(_("checking files\n"))
2141 2141 ff = filenodes.keys()
2142 2142 ff.sort()
2143 2143 for f in ff:
2144 2144 if f == "/dev/null":
2145 2145 continue
2146 2146 files += 1
2147 2147 if not f:
2148 2148 err(_("file without name in manifest %s") % short(n))
2149 2149 continue
2150 2150 fl = self.file(f)
2151 2151 checkversion(fl, f)
2152 2152 checksize(fl, f)
2153 2153
2154 2154 nodes = {nullid: 1}
2155 2155 seen = {}
2156 2156 for i in range(fl.count()):
2157 2157 revisions += 1
2158 2158 n = fl.node(i)
2159 2159
2160 2160 if n in seen:
2161 2161 err(_("%s: duplicate revision %d") % (f, i))
2162 2162 if n not in filenodes[f]:
2163 2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2164 2164 else:
2165 2165 del filenodes[f][n]
2166 2166
2167 2167 flr = fl.linkrev(n)
2168 2168 if flr not in filelinkrevs.get(f, []):
2169 2169 err(_("%s:%s points to unexpected changeset %d")
2170 2170 % (f, short(n), flr))
2171 2171 else:
2172 2172 filelinkrevs[f].remove(flr)
2173 2173
2174 2174 # verify contents
2175 2175 try:
2176 2176 t = fl.read(n)
2177 2177 except KeyboardInterrupt:
2178 2178 self.ui.warn(_("interrupted"))
2179 2179 raise
2180 2180 except Exception, inst:
2181 2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2182 2182
2183 2183 # verify parents
2184 2184 (p1, p2) = fl.parents(n)
2185 2185 if p1 not in nodes:
2186 2186 err(_("file %s:%s unknown parent 1 %s") %
2187 2187 (f, short(n), short(p1)))
2188 2188 if p2 not in nodes:
2189 2189 err(_("file %s:%s unknown parent 2 %s") %
2190 2190                             (f, short(n), short(p2)))
2191 2191 nodes[n] = 1
2192 2192
2193 2193 # cross-check
2194 2194 for node in filenodes[f]:
2195 2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2196 2196
2197 2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2198 2198 (files, changesets, revisions))
2199 2199
2200 2200 if warnings[0]:
2201 2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2202 2202 if errors[0]:
2203 2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2204 2204 return 1
2205 2205
2206 2206 def stream_in(self, remote):
2207 fp = remote.stream_out()
2208 resp = int(fp.readline())
2209 if resp != 0:
2210 raise util.Abort(_('operation forbidden by server'))
2207 2211 self.ui.status(_('streaming all changes\n'))
2208 fp = remote.stream_out()
2209 2212 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2210 2213 self.ui.status(_('%d files to transfer, %s of data\n') %
2211 2214 (total_files, util.bytecount(total_bytes)))
2212 2215 start = time.time()
2213 2216 for i in xrange(total_files):
2214 2217 name, size = fp.readline().split('\0', 1)
2215 2218 size = int(size)
2216 2219 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2217 2220 ofp = self.opener(name, 'w')
2218 2221 for chunk in util.filechunkiter(fp, limit=size):
2219 2222 ofp.write(chunk)
2220 2223 ofp.close()
2221 2224 elapsed = time.time() - start
2222 2225 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2223 2226 (util.bytecount(total_bytes), elapsed,
2224 2227 util.bytecount(total_bytes / elapsed)))
2225 2228 self.reload()
2226 2229 return len(self.heads()) + 1
2227 2230
2228 2231 def clone(self, remote, heads=[], stream=False):
2229 2232 '''clone remote repository.
2230 2233
2231 2234 keyword arguments:
2232 2235 heads: list of revs to clone (forces use of pull)
2233 pull: force use of pull, even if remote can stream'''
2236 stream: use streaming clone if possible'''
2234 2237
2235 # now, all clients that can stream can read repo formats
2236 # supported by all servers that can stream.
2238 # now, all clients that can request uncompressed clones can
2239 # read repo formats supported by all servers that can serve
2240 # them.
2237 2241
2238 2242 # if revlog format changes, client will have to check version
2239 # and format flags on "stream" capability, and stream only if
2240 # compatible.
2243 # and format flags on "stream" capability, and use
2244 # uncompressed only if compatible.
2241 2245
2242 2246 if stream and not heads and remote.capable('stream'):
2243 2247 return self.stream_in(remote)
2244 2248 return self.pull(remote, heads)
2245 2249
2246 2250 # used to avoid circular references so destructors work
2247 2251 def aftertrans(base):
2248 2252 p = base
2249 2253 def a():
2250 2254 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2251 2255 util.rename(os.path.join(p, "journal.dirstate"),
2252 2256 os.path.join(p, "undo.dirstate"))
2253 2257 return a
2254 2258
@@ -1,171 +1,173 b''
1 1 # sshserver.py - ssh protocol server support for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from demandload import demandload
9 9 from i18n import gettext as _
10 10 from node import *
11 11 demandload(globals(), "os streamclone sys tempfile util")
12 12
13 13 class sshserver(object):
14 14 def __init__(self, ui, repo):
15 15 self.ui = ui
16 16 self.repo = repo
17 17 self.lock = None
18 18 self.fin = sys.stdin
19 19 self.fout = sys.stdout
20 20
21 21 sys.stdout = sys.stderr
22 22
23 23 # Prevent insertion/deletion of CRs
24 24 util.set_binary(self.fin)
25 25 util.set_binary(self.fout)
26 26
27 27 def getarg(self):
28 28 argline = self.fin.readline()[:-1]
29 29 arg, l = argline.split()
30 30 val = self.fin.read(int(l))
31 31 return arg, val
32 32
33 33 def respond(self, v):
34 34 self.fout.write("%d\n" % len(v))
35 35 self.fout.write(v)
36 36 self.fout.flush()
37 37
38 38 def serve_forever(self):
39 39 while self.serve_one(): pass
40 40 sys.exit(0)
41 41
42 42 def serve_one(self):
43 43 cmd = self.fin.readline()[:-1]
44 44 if cmd:
45 45 impl = getattr(self, 'do_' + cmd, None)
46 46 if impl: impl()
47 47 else: self.respond("")
48 48 return cmd != ''
49 49
50 50 def do_heads(self):
51 51 h = self.repo.heads()
52 52 self.respond(" ".join(map(hex, h)) + "\n")
53 53
54 54 def do_hello(self):
55 55 '''the hello command returns a set of lines describing various
56 56 interesting things about the server, in an RFC822-like format.
57 57 Currently the only one defined is "capabilities", which
58 58 consists of a line in the form:
59 59
60 60 capabilities: space separated list of tokens
61 61 '''
62 62
63 r = "capabilities: unbundle stream=%d\n" % (self.repo.revlogversion,)
64 self.respond(r)
63 caps = ['unbundle']
64 if self.ui.configbool('server', 'uncompressed'):
65 caps.append('stream=%d' % self.repo.revlogversion)
66 self.respond("capabilities: %s\n" % (' '.join(caps),))
65 67
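The hello response above is RFC822-like, e.g. `capabilities: unbundle stream=1`. A client-side sketch of picking it apart into named capabilities with optional `=value` parts (hypothetical helper, not the actual Mercurial client code):

```python
def parsecaps(hello_output: str) -> dict:
    # turn "capabilities: unbundle stream=1" into
    # {"unbundle": None, "stream": "1"}
    caps = {}
    for line in hello_output.splitlines():
        if line.startswith("capabilities:"):
            for tok in line.split(":", 1)[1].split():
                name, sep, value = tok.partition("=")
                caps[name] = value if sep else None
    return caps
```

A client would then stream only if `"stream"` is present and its version/flags value names a revlog format it can read.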
66 68 def do_lock(self):
67 69 '''DEPRECATED - allowing remote client to lock repo is not safe'''
68 70
69 71 self.lock = self.repo.lock()
70 72 self.respond("")
71 73
72 74 def do_unlock(self):
73 75 '''DEPRECATED'''
74 76
75 77 if self.lock:
76 78 self.lock.release()
77 79 self.lock = None
78 80 self.respond("")
79 81
80 82 def do_branches(self):
81 83 arg, nodes = self.getarg()
82 84 nodes = map(bin, nodes.split(" "))
83 85 r = []
84 86 for b in self.repo.branches(nodes):
85 87 r.append(" ".join(map(hex, b)) + "\n")
86 88 self.respond("".join(r))
87 89
88 90 def do_between(self):
89 91 arg, pairs = self.getarg()
90 92 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
91 93 r = []
92 94 for b in self.repo.between(pairs):
93 95 r.append(" ".join(map(hex, b)) + "\n")
94 96 self.respond("".join(r))
95 97
96 98 def do_changegroup(self):
97 99 nodes = []
98 100 arg, roots = self.getarg()
99 101 nodes = map(bin, roots.split(" "))
100 102
101 103 cg = self.repo.changegroup(nodes, 'serve')
102 104 while True:
103 105 d = cg.read(4096)
104 106 if not d:
105 107 break
106 108 self.fout.write(d)
107 109
108 110 self.fout.flush()
109 111
110 112 def do_addchangegroup(self):
111 113 '''DEPRECATED'''
112 114
113 115 if not self.lock:
114 116 self.respond("not locked")
115 117 return
116 118
117 119 self.respond("")
118 120 r = self.repo.addchangegroup(self.fin, 'serve')
119 121 self.respond(str(r))
120 122
121 123 def do_unbundle(self):
122 124 their_heads = self.getarg()[1].split()
123 125
124 126 def check_heads():
125 127 heads = map(hex, self.repo.heads())
126 128 return their_heads == [hex('force')] or their_heads == heads
127 129
128 130 # fail early if possible
129 131 if not check_heads():
130 132 self.respond(_('unsynced changes'))
131 133 return
132 134
133 135 self.respond('')
134 136
135 137 # write bundle data to temporary file because it can be big
136 138
137 139 try:
138 140 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
139 141 fp = os.fdopen(fd, 'wb+')
140 142
141 143 count = int(self.fin.readline())
142 144 while count:
143 145 fp.write(self.fin.read(count))
144 146 count = int(self.fin.readline())
145 147
146 148 was_locked = self.lock is not None
147 149 if not was_locked:
148 150 self.lock = self.repo.lock()
149 151 try:
150 152 if not check_heads():
151 153 # someone else committed/pushed/unbundled while we
152 154 # were transferring data
153 155 self.respond(_('unsynced changes'))
154 156 return
155 157 self.respond('')
156 158
157 159 # push can proceed
158 160
159 161 fp.seek(0)
160 162 r = self.repo.addchangegroup(fp, 'serve')
161 163 self.respond(str(r))
162 164 finally:
163 165 if not was_locked:
164 166 self.lock.release()
165 167 self.lock = None
166 168 finally:
167 169 fp.close()
168 170 os.unlink(tempname)
169 171
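`do_unbundle` above checks the remote's expected heads twice: once before the transfer so it can fail early, and again after taking the lock, since someone else may commit, push or unbundle while the bundle data is streaming in. That double-check pattern in miniature (an illustrative sketch, not server code):

```python
import threading

def guarded_apply(get_state, expected, lock, apply):
    # cheap early check outside the lock: fail fast if already stale
    if get_state() != expected:
        return "unsynced changes"
    with lock:
        # re-check under the lock: state may have moved meanwhile
        if get_state() != expected:
            return "unsynced changes"
        return apply()
```

The early check only saves work; correctness comes from the second check, which is the one performed while holding the lock.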
170 172 def do_stream_out(self):
171 173 streamclone.stream_out(self.repo, self.fout)
@@ -1,82 +1,90 b''
1 1 # streamclone.py - streaming clone server support for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from demandload import demandload
9 9 from i18n import gettext as _
10 10 demandload(globals(), "os stat util")
11 11
12 12 # if server supports streaming clone, it advertises "stream"
13 13 # capability with value that is version+flags of repo it is serving.
14 14 # client only streams if it can read that repo format.
15 15
16 16 def walkrepo(root):
17 17 '''iterate over metadata files in repository.
18 18 walk in natural (sorted) order.
19 19 yields 2-tuples: name of .d or .i file, size of file.'''
20 20
21 21 strip_count = len(root) + len(os.sep)
22 22 def walk(path, recurse):
23 23 ents = os.listdir(path)
24 24 ents.sort()
25 25 for e in ents:
26 26 pe = os.path.join(path, e)
27 27 st = os.lstat(pe)
28 28 if stat.S_ISDIR(st.st_mode):
29 29 if recurse:
30 30 for x in walk(pe, True):
31 31 yield x
32 32 else:
33 33 if not stat.S_ISREG(st.st_mode) or len(e) < 2:
34 34 continue
35 35 sfx = e[-2:]
36 36 if sfx in ('.d', '.i'):
37 37 yield pe[strip_count:], st.st_size
38 38 # write file data first
39 39 for x in walk(os.path.join(root, 'data'), True):
40 40 yield x
41 41 # write manifest before changelog
42 42 meta = list(walk(root, False))
43 meta.sort(reverse=True)
43 meta.sort()
44 meta.reverse()
44 45 for x in meta:
45 46 yield x
46 47
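`walkrepo` streams file data first, then sorts and reverses the remaining metadata so the manifest is written before the changelog; the change from `meta.sort(reverse=True)` to `meta.sort(); meta.reverse()` keeps compatibility with Python versions predating the `reverse` keyword. The ordering trick in isolation, using the store's file names:

```python
# reverse lexicographic order without sort(reverse=True)
meta = ["00changelog.i", "00manifest.i", "00changelog.d"]
meta.sort()
meta.reverse()
# the manifest index now precedes the changelog files
assert meta[0] == "00manifest.i"
```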
47 48 # stream file format is simple.
48 49 #
49 50 # server writes out line that says how many files, how many total
50 51 # bytes. separator is ascii space, byte counts are strings.
51 52 #
52 53 # then for each file:
53 54 #
54 55 # server writes out line that says file name, how many bytes in
55 56 # file. separator is ascii nul, byte count is string.
56 57 #
57 58 # server writes out raw file data.
58 59
59 60 def stream_out(repo, fileobj):
60 61 '''stream out all metadata files in repository.
61 62 writes to file-like object, must support write() and optional flush().'''
63
64 if not repo.ui.configbool('server', 'uncompressed'):
65 fileobj.write('1\n')
66 return
67
68 fileobj.write('0\n')
69
62 70 # get consistent snapshot of repo. lock during scan so lock not
63 71 # needed while we stream, and commits can happen.
64 72 lock = repo.lock()
65 73 repo.ui.debug('scanning\n')
66 74 entries = []
67 75 total_bytes = 0
68 76 for name, size in walkrepo(repo.path):
69 77 entries.append((name, size))
70 78 total_bytes += size
71 79 lock.release()
72 80
73 81 repo.ui.debug('%d files, %d bytes to transfer\n' %
74 82 (len(entries), total_bytes))
75 83 fileobj.write('%d %d\n' % (len(entries), total_bytes))
76 84 for name, size in entries:
77 85 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
78 86 fileobj.write('%s\0%d\n' % (name, size))
79 87 for chunk in util.filechunkiter(repo.opener(name), limit=size):
80 88 fileobj.write(chunk)
81 89 flush = getattr(fileobj, 'flush', None)
82 90 if flush: flush()
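The comment block above describes the stream wire format: after the initial capability line, the server sends a header line with the file count and total byte count (space-separated), then for each file a name and byte count (NUL-separated, newline-terminated) followed by the raw file data. A minimal client-side sketch of reading that format (hypothetical helper, not Mercurial's actual client code; assumes filenames contain no newline) might look like:

```python
import io

def read_stream(fileobj):
    """Parse the stream format described above: a "<nfiles> <nbytes>\\n"
    header, then per file a "name\\0size\\n" line followed by that many
    raw bytes. `fileobj` is assumed to be positioned just past the
    server's initial "0\\n"/"1\\n" capability line."""
    header = fileobj.readline()
    nfiles, total_bytes = map(int, header.split())
    files = {}
    for _ in range(nfiles):
        line = fileobj.readline()
        # strip the trailing newline, then split name from byte count on NUL
        name, size = line[:-1].split(b'\0')
        files[name.decode()] = fileobj.read(int(size))
    return files

# Build a fake stream matching the format: 2 files, 9 bytes total.
payload = b'2 9\n' + b'a.i\x005\nhello' + b'b.d\x004\ndata'
result = read_stream(io.BytesIO(payload))
```

Here `result` maps each metadata file name to its raw contents, mirroring how the server interleaves per-file headers with file data.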
@@ -1,363 +1,363 b''
1 1 # ui.py - user interface bits for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from i18n import gettext as _
9 9 from demandload import *
10 10 demandload(globals(), "errno getpass os re smtplib socket sys tempfile")
11 11 demandload(globals(), "ConfigParser templater traceback util")
12 12
13 13 class ui(object):
14 14 def __init__(self, verbose=False, debug=False, quiet=False,
15 15 interactive=True, traceback=False, parentui=None):
16 16 self.overlay = {}
17 17 if parentui is None:
18 18 # this is the parent of all ui children
19 19 self.parentui = None
20 20 self.cdata = ConfigParser.SafeConfigParser()
21 21 self.readconfig(util.rcpath())
22 22
23 23 self.quiet = self.configbool("ui", "quiet")
24 24 self.verbose = self.configbool("ui", "verbose")
25 25 self.debugflag = self.configbool("ui", "debug")
26 26 self.interactive = self.configbool("ui", "interactive", True)
27 27 self.traceback = traceback
28 28
29 29 self.updateopts(verbose, debug, quiet, interactive)
30 30 self.diffcache = None
31 31 self.header = []
32 32 self.prev_header = []
33 33 self.revlogopts = self.configrevlog()
34 34 else:
35 35 # parentui may point to an ui object which is already a child
36 36 self.parentui = parentui.parentui or parentui
37 37 parent_cdata = self.parentui.cdata
38 38 self.cdata = ConfigParser.SafeConfigParser(parent_cdata.defaults())
39 39 # make interpolation work
40 40 for section in parent_cdata.sections():
41 41 self.cdata.add_section(section)
42 42 for name, value in parent_cdata.items(section, raw=True):
43 43 self.cdata.set(section, name, value)
44 44
45 45 def __getattr__(self, key):
46 46 return getattr(self.parentui, key)
47 47
48 48 def updateopts(self, verbose=False, debug=False, quiet=False,
49 49 interactive=True, traceback=False, config=[]):
50 50 self.quiet = (self.quiet or quiet) and not verbose and not debug
51 51 self.verbose = (self.verbose or verbose) or debug
52 52 self.debugflag = (self.debugflag or debug)
53 53 self.interactive = (self.interactive and interactive)
54 54 self.traceback = self.traceback or traceback
55 55 for cfg in config:
56 56 try:
57 57 name, value = cfg.split('=', 1)
58 58 section, name = name.split('.', 1)
59 59 if not self.cdata.has_section(section):
60 60 self.cdata.add_section(section)
61 61 if not section or not name:
62 62 raise IndexError
63 63 self.cdata.set(section, name, value)
64 64 except (IndexError, ValueError):
65 65 raise util.Abort(_('malformed --config option: %s') % cfg)
66 66
67 67 def readconfig(self, fn, root=None):
68 68 if isinstance(fn, basestring):
69 69 fn = [fn]
70 70 for f in fn:
71 71 try:
72 72 self.cdata.read(f)
73 73 except ConfigParser.ParsingError, inst:
74 74 raise util.Abort(_("Failed to parse %s\n%s") % (f, inst))
75 75 # translate paths relative to root (or home) into absolute paths
76 76 if root is None:
77 77 root = os.path.expanduser('~')
78 78 for name, path in self.configitems("paths"):
79 79 if path and "://" not in path and not os.path.isabs(path):
80 80 self.cdata.set("paths", name, os.path.join(root, path))
81 81
82 82 def setconfig(self, section, name, val):
83 83 self.overlay[(section, name)] = val
84 84
85 85 def config(self, section, name, default=None):
86 86 if self.overlay.has_key((section, name)):
87 87 return self.overlay[(section, name)]
88 88 if self.cdata.has_option(section, name):
89 89 try:
90 90 return self.cdata.get(section, name)
91 91 except ConfigParser.InterpolationError, inst:
92 92 raise util.Abort(_("Error in configuration:\n%s") % inst)
93 93 if self.parentui is None:
94 94 return default
95 95 else:
96 96 return self.parentui.config(section, name, default)
97 97
98 98 def configlist(self, section, name, default=None):
99 99 """Return a list of comma/space separated strings"""
100 100 result = self.config(section, name)
101 101 if result is None:
102 102 result = default or []
103 103 if isinstance(result, basestring):
104 104 result = result.replace(",", " ").split()
105 105 return result
106 106
107 107 def configbool(self, section, name, default=False):
108 108 if self.overlay.has_key((section, name)):
109 109 return self.overlay[(section, name)]
110 110 if self.cdata.has_option(section, name):
111 111 try:
112 112 return self.cdata.getboolean(section, name)
113 113 except ConfigParser.InterpolationError, inst:
114 114 raise util.Abort(_("Error in configuration:\n%s") % inst)
115 115 if self.parentui is None:
116 116 return default
117 117 else:
118 118 return self.parentui.configbool(section, name, default)
119 119
120 120 def has_config(self, section):
121 121 '''tell whether section exists in config.'''
122 122 return self.cdata.has_section(section)
123 123
124 124 def configitems(self, section):
125 125 items = {}
126 126 if self.parentui is not None:
127 127 items = dict(self.parentui.configitems(section))
128 128 if self.cdata.has_section(section):
129 129 try:
130 130 items.update(dict(self.cdata.items(section)))
131 131 except ConfigParser.InterpolationError, inst:
132 132 raise util.Abort(_("Error in configuration:\n%s") % inst)
133 133 x = items.items()
134 134 x.sort()
135 135 return x
136 136
137 137 def walkconfig(self, seen=None):
138 138 if seen is None:
139 139 seen = {}
140 140 for (section, name), value in self.overlay.iteritems():
141 141 yield section, name, value
142 142 seen[section, name] = 1
143 143 for section in self.cdata.sections():
144 144 for name, value in self.cdata.items(section):
145 145 if (section, name) in seen: continue
146 146 yield section, name, value.replace('\n', '\\n')
147 147 seen[section, name] = 1
148 148 if self.parentui is not None:
149 149 for parent in self.parentui.walkconfig(seen):
150 150 yield parent
151 151
152 152 def extensions(self):
153 153 result = self.configitems("extensions")
154 154 for i, (key, value) in enumerate(result):
155 155 if value:
156 156 result[i] = (key, os.path.expanduser(value))
157 157 return result
158 158
159 159 def hgignorefiles(self):
160 160 result = []
161 161 for key, value in self.configitems("ui"):
162 162 if key == 'ignore' or key.startswith('ignore.'):
163 163 result.append(os.path.expanduser(value))
164 164 return result
165 165
166 166 def configrevlog(self):
167 167 result = {}
168 168 for key, value in self.configitems("revlog"):
169 169 result[key.lower()] = value
170 170 return result
171 171
172 172 def diffopts(self):
173 173 if self.diffcache:
174 174 return self.diffcache
175 175 result = {'showfunc': True, 'ignorews': False,
176 176 'ignorewsamount': False, 'ignoreblanklines': False}
177 177 for key, value in self.configitems("diff"):
178 178 if value:
179 179 result[key.lower()] = (value.lower() == 'true')
180 180 self.diffcache = result
181 181 return result
182 182
183 183 def username(self):
184 184 """Return default username to be used in commits.
185 185
186 186 Searched in this order: $HGUSER, [ui] section of hgrcs, $EMAIL
187 187 and stop searching if one of these is set.
188 188 Abort if found username is an empty string to force specifying
189 189 the commit user elsewhere, e.g. with line option or repo hgrc.
190 190 If not found, use ($LOGNAME or $USER or $LNAME or
191 191 $USERNAME) +"@full.hostname".
192 192 """
193 193 user = os.environ.get("HGUSER")
194 194 if user is None:
195 195 user = self.config("ui", "username")
196 196 if user is None:
197 197 user = os.environ.get("EMAIL")
198 198 if user is None:
199 199 try:
200 200 user = '%s@%s' % (getpass.getuser(), socket.getfqdn())
201 201 except KeyError:
202 202 raise util.Abort(_("Please specify a username."))
203 203 return user
204 204
205 205 def shortuser(self, user):
206 206 """Return a short representation of a user name or email address."""
207 207 if not self.verbose: user = util.shortuser(user)
208 208 return user
209 209
210 210 def expandpath(self, loc, default=None):
211 211 """Return repository location relative to cwd or from [paths]"""
212 if "://" in loc or os.path.exists(loc):
212 if "://" in loc or os.path.isdir(loc):
213 213 return loc
214 214
215 215 path = self.config("paths", loc)
216 216 if not path and default is not None:
217 217 path = self.config("paths", default)
218 218 return path or loc
219 219
220 220 def setconfig_remoteopts(self, **opts):
221 221 if opts.get('ssh'):
222 222 self.setconfig("ui", "ssh", opts['ssh'])
223 223 if opts.get('remotecmd'):
224 224 self.setconfig("ui", "remotecmd", opts['remotecmd'])
225 225
226 226 def write(self, *args):
227 227 if self.header:
228 228 if self.header != self.prev_header:
229 229 self.prev_header = self.header
230 230 self.write(*self.header)
231 231 self.header = []
232 232 for a in args:
233 233 sys.stdout.write(str(a))
234 234
235 235 def write_header(self, *args):
236 236 for a in args:
237 237 self.header.append(str(a))
238 238
239 239 def write_err(self, *args):
240 240 try:
241 241 if not sys.stdout.closed: sys.stdout.flush()
242 242 for a in args:
243 243 sys.stderr.write(str(a))
244 244 except IOError, inst:
245 245 if inst.errno != errno.EPIPE:
246 246 raise
247 247
248 248 def flush(self):
249 249 try: sys.stdout.flush()
250 250 except: pass
251 251 try: sys.stderr.flush()
252 252 except: pass
253 253
254 254 def readline(self):
255 255 return sys.stdin.readline()[:-1]
256 256 def prompt(self, msg, pat=None, default="y"):
257 257 if not self.interactive: return default
258 258 while 1:
259 259 self.write(msg, " ")
260 260 r = self.readline()
261 261 if not pat or re.match(pat, r):
262 262 return r
263 263 else:
264 264 self.write(_("unrecognized response\n"))
265 265 def getpass(self, prompt=None, default=None):
266 266 if not self.interactive: return default
267 267 return getpass.getpass(prompt or _('password: '))
268 268 def status(self, *msg):
269 269 if not self.quiet: self.write(*msg)
270 270 def warn(self, *msg):
271 271 self.write_err(*msg)
272 272 def note(self, *msg):
273 273 if self.verbose: self.write(*msg)
274 274 def debug(self, *msg):
275 275 if self.debugflag: self.write(*msg)
276 276 def edit(self, text, user):
277 277 (fd, name) = tempfile.mkstemp(prefix="hg-editor-", suffix=".txt",
278 278 text=True)
279 279 try:
280 280 f = os.fdopen(fd, "w")
281 281 f.write(text)
282 282 f.close()
283 283
284 284 editor = (os.environ.get("HGEDITOR") or
285 285 self.config("ui", "editor") or
286 286 os.environ.get("EDITOR", "vi"))
287 287
288 288 util.system("%s \"%s\"" % (editor, name),
289 289 environ={'HGUSER': user},
290 290 onerr=util.Abort, errprefix=_("edit failed"))
291 291
292 292 f = open(name)
293 293 t = f.read()
294 294 f.close()
295 295 t = re.sub("(?m)^HG:.*\n", "", t)
296 296 finally:
297 297 os.unlink(name)
298 298
299 299 return t
300 300
301 301 def sendmail(self):
302 302 '''send mail message. object returned has one method, sendmail.
303 303 call as sendmail(sender, list-of-recipients, msg).'''
304 304
305 305 def smtp():
306 306 '''send mail using smtp.'''
307 307
308 308 local_hostname = self.config('smtp', 'local_hostname')
309 309 s = smtplib.SMTP(local_hostname=local_hostname)
310 310 mailhost = self.config('smtp', 'host')
311 311 if not mailhost:
312 312 raise util.Abort(_('no [smtp]host in hgrc - cannot send mail'))
313 313 mailport = int(self.config('smtp', 'port', 25))
314 314 self.note(_('sending mail: smtp host %s, port %s\n') %
315 315 (mailhost, mailport))
316 316 s.connect(host=mailhost, port=mailport)
317 317 if self.configbool('smtp', 'tls'):
318 318 self.note(_('(using tls)\n'))
319 319 s.ehlo()
320 320 s.starttls()
321 321 s.ehlo()
322 322 username = self.config('smtp', 'username')
323 323 password = self.config('smtp', 'password')
324 324 if username and password:
325 325 self.note(_('(authenticating to mail server as %s)\n') %
326 326 (username))
327 327 s.login(username, password)
328 328 return s
329 329
330 330 class sendmail(object):
331 331 '''send mail using sendmail.'''
332 332
333 333 def __init__(self, ui, program):
334 334 self.ui = ui
335 335 self.program = program
336 336
337 337 def sendmail(self, sender, recipients, msg):
338 338 cmdline = '%s -f %s %s' % (
339 339 self.program, templater.email(sender),
340 340 ' '.join(map(templater.email, recipients)))
341 341 self.ui.note(_('sending mail: %s\n') % cmdline)
342 342 fp = os.popen(cmdline, 'w')
343 343 fp.write(msg)
344 344 ret = fp.close()
345 345 if ret:
346 346 raise util.Abort('%s %s' % (
347 347 os.path.basename(self.program.split(None, 1)[0]),
348 348 util.explain_exit(ret)[0]))
349 349
350 350 method = self.config('email', 'method', 'smtp')
351 351 if method == 'smtp':
352 352 mail = smtp()
353 353 else:
354 354 mail = sendmail(self, method)
355 355 return mail
356 356
357 357 def print_exc(self):
358 358 '''print exception traceback if traceback printing enabled.
359 359 only to call in exception handler. returns true if traceback
360 360 printed.'''
361 361 if self.traceback:
362 362 traceback.print_exc()
363 363 return self.traceback
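The `config()`/`configbool()` methods in ui.py above all follow the same layered lookup: a per-instance `setconfig` overlay wins over values parsed from hgrc files, which win over the parent ui, which wins over the caller's default. A standalone sketch of that precedence (hypothetical names, not Mercurial's API) could be:

```python
def lookup(overlay, cdata, parent, section, name, default=None):
    """Layered config lookup: overlay -> parsed data -> parent -> default,
    mirroring the search order ui.config() uses above."""
    if (section, name) in overlay:
        return overlay[(section, name)]
    if section in cdata and name in cdata[section]:
        return cdata[section][name]
    if parent is not None:
        # delegate to the parent ui's lookup, passing the default through
        return parent(section, name, default)
    return default

# The overlay entry shadows the value read from the config files.
value = lookup({('ui', 'verbose'): 'true'},
               {'ui': {'verbose': 'false'}},
               None, 'ui', 'verbose')
```

This is why `--config` options (applied via `setconfig`/the overlay) take effect even when an hgrc sets the same key.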
@@ -1,31 +1,51 b''
1 1 # basic operation
2 2 adding a
3 3 reverting a
4 4 changeset 2:b38a34ddfd9f backs out changeset 1:a820f4f40a57
5 5 a
6 6 # file that was removed is recreated
7 7 adding a
8 8 adding a
9 9 changeset 2:44cd84c7349a backs out changeset 1:76862dcce372
10 10 content
11 11 # backout of backout is as if nothing happened
12 12 removing a
13 13 changeset 3:0dd8a0ed5e99 backs out changeset 2:44cd84c7349a
14 14 cat: a: No such file or directory
15 15 # backout with merge
16 16 adding a
17 17 reverting a
18 18 changeset 3:6c77ecc28460 backs out changeset 1:314f55b1bf23
19 19 merging with changeset 2:b66ea5b77abb
20 20 merging a
21 21 0 files updated, 1 files merged, 0 files removed, 0 files unresolved
22 22 (branch merge, don't forget to commit)
23 23 line 1
24 24 # backout should not back out subsequent changesets
25 25 adding a
26 26 adding b
27 27 reverting a
28 28 changeset 3:4cbb1e70196a backs out changeset 1:22bca4c721e5
29 29 the backout changeset is a new head - do not forget to merge
30 30 (use "backout -m" if you want to auto-merge)
31 31 b: No such file or directory
32 adding a
33 adding b
34 adding c
35 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
36 adding d
37 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
38 (branch merge, don't forget to commit)
39 # backout of merge should fail
40 abort: cannot back out a merge changeset without --parent
41 # backout of merge with bad parent should fail
42 abort: cb9a9f314b8b is not a parent of b2f3bb92043e
43 # backout of non-merge with parent should fail
44 abort: cannot use --parent on non-merge changeset
45 # backout with valid parent should be ok
46 removing d
47 changeset 5:11fbd9be634c backs out changeset 4:b2f3bb92043e
48 rolling back last transaction
49 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
50 removing c
51 changeset 5:1a5f1a63bf2c backs out changeset 4:b2f3bb92043e
@@ -1,25 +1,25 b''
1 1 #!/bin/sh
2 2
3 mkdir test
3 hg init test
4 4 cd test
5 5 echo foo>foo
6 hg init
7 hg addremove
8 hg commit -m 1
9 hg verify
10 hg serve -p 20059 -d --pid-file=hg.pid
11 cat hg.pid >> $DAEMON_PIDS
6 hg commit -A -d '0 0' -m 1
7 hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg1.pid
8 cat hg1.pid >> $DAEMON_PIDS
9 hg serve -p 20060 -d --pid-file=hg2.pid
10 cat hg2.pid >> $DAEMON_PIDS
12 11 cd ..
13 12
14 13 echo % clone via stream
15 http_proxy= hg clone --stream http://localhost:20059/ copy 2>&1 | \
14 http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
16 15 sed -e 's/[0-9][0-9.]*/XXX/g'
17 16 cd copy
18 17 hg verify
19 18
20 cd ..
19 echo % try to clone via stream, should use pull instead
20 http_proxy= hg clone --uncompressed http://localhost:20060/ copy2
21 21
22 22 echo % clone via pull
23 23 http_proxy= hg clone http://localhost:20059/ copy-pull
24 24 cd copy-pull
25 25 hg verify
@@ -1,41 +1,41 b''
1 1 #!/bin/sh
2 2
3 3 hg init a
4 4 cd a
5 5 echo a > a
6 6 hg ci -Ama -d '1123456789 0'
7 hg serve -p 20059 -d --pid-file=hg.pid
7 hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg.pid
8 8 cat hg.pid >> $DAEMON_PIDS
9 9
10 10 cd ..
11 11 ("$TESTDIR/tinyproxy.py" 20060 localhost >proxy.log 2>&1 </dev/null &
12 12 echo $! > proxy.pid)
13 13 cat proxy.pid >> $DAEMON_PIDS
14 14 sleep 2
15 15
16 16 echo %% url for proxy, stream
17 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --stream http://localhost:20059/ b | \
17 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --uncompressed http://localhost:20059/ b | \
18 18 sed -e 's/[0-9][0-9.]*/XXX/g'
19 19 cd b
20 20 hg verify
21 21 cd ..
22 22
23 23 echo %% url for proxy, pull
24 24 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone http://localhost:20059/ b-pull
25 25 cd b-pull
26 26 hg verify
27 27 cd ..
28 28
29 29 echo %% host:port for proxy
30 30 http_proxy=localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ c
31 31
32 32 echo %% proxy url with user name and password
33 33 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ d
34 34
35 35 echo %% url with user name and password
36 36 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://user:passwd@localhost:20059/ e
37 37
38 38 echo %% bad host:port for proxy
39 39 http_proxy=localhost:20061 hg clone --config http_proxy.always=True http://localhost:20059/ f
40 40
41 41 exit 0
@@ -1,29 +1,30 b''
1 (the addremove command is deprecated; use add and remove --after instead)
2 1 adding foo
3 checking changesets
4 checking manifests
5 crosschecking files in changesets and manifests
6 checking files
7 1 files, 1 changesets, 1 total revisions
8 2 % clone via stream
9 3 streaming all changes
10 4 XXX files to transfer, XXX bytes of data
11 5 transferred XXX bytes in XXX seconds (XXX KB/sec)
12 6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
13 7 checking changesets
14 8 checking manifests
15 9 crosschecking files in changesets and manifests
16 10 checking files
17 11 1 files, 1 changesets, 1 total revisions
12 % try to clone via stream, should use pull instead
13 requesting all changes
14 adding changesets
15 adding manifests
16 adding file changes
17 added 1 changesets with 1 changes to 1 files
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
18 19 % clone via pull
19 20 requesting all changes
20 21 adding changesets
21 22 adding manifests
22 23 adding file changes
23 24 added 1 changesets with 1 changes to 1 files
24 25 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
25 26 checking changesets
26 27 checking manifests
27 28 crosschecking files in changesets and manifests
28 29 checking files
29 30 1 files, 1 changesets, 1 total revisions
@@ -1,90 +1,92 b''
1 1 #!/bin/sh
2 2
3 3 # This test tries to exercise the ssh functionality with a dummy script
4 4
5 5 cat <<'EOF' > dummyssh
6 6 #!/bin/sh
7 7 # this attempts to deal with relative pathnames
8 8 cd `dirname $0`
9 9
10 10 # check for proper args
11 11 if [ $1 != "user@dummy" ] ; then
12 12 exit -1
13 13 fi
14 14
15 15 # check that we're in the right directory
16 16 if [ ! -x dummyssh ] ; then
17 17 exit -1
18 18 fi
19 19
20 20 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
21 21 $2
22 22 EOF
23 23 chmod +x dummyssh
24 24
25 25 echo "# creating 'remote'"
26 26 hg init remote
27 27 cd remote
28 28 echo this > foo
29 29 hg ci -A -m "init" -d "1000000 0" foo
30 echo '[server]' > .hg/hgrc
31 echo 'uncompressed = True' >> .hg/hgrc
30 32
31 33 cd ..
32 34
33 35 echo "# clone remote via stream"
34 hg clone -e ./dummyssh --stream ssh://user@dummy/remote local-stream 2>&1 | \
36 hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
35 37 sed -e 's/[0-9][0-9.]*/XXX/g'
36 38 cd local-stream
37 39 hg verify
38 40 cd ..
39 41
40 42 echo "# clone remote via pull"
41 43 hg clone -e ./dummyssh ssh://user@dummy/remote local
42 44
43 45 echo "# verify"
44 46 cd local
45 47 hg verify
46 48
47 49 echo "# empty default pull"
48 50 hg paths
49 51 hg pull -e ../dummyssh
50 52
51 53 echo "# local change"
52 54 echo bleah > foo
53 55 hg ci -m "add" -d "1000000 0"
54 56
55 57 echo "# updating rc"
56 58 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
57 59 echo "[ui]" >> .hg/hgrc
58 60 echo "ssh = ../dummyssh" >> .hg/hgrc
59 61
60 62 echo "# find outgoing"
61 63 hg out ssh://user@dummy/remote
62 64
63 65 echo "# find incoming on the remote side"
64 66 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
65 67
66 68 echo "# push"
67 69 hg push
68 70
69 71 cd ../remote
70 72
71 73 echo "# check remote tip"
72 74 hg tip
73 75 hg verify
74 76 hg cat foo
75 77
76 78 echo z > z
77 79 hg ci -A -m z -d '1000001 0' z
78 80
79 81 cd ../local
80 82 echo r > r
81 83 hg ci -A -m z -d '1000002 0' r
82 84
83 85 echo "# push should fail"
84 86 hg push
85 87
86 88 echo "# push should succeed"
87 89 hg push -f
88 90
89 91 cd ..
90 92 cat dummylog