clone: disable stream support on server side by default....
Vadim Gelfer -
r2621:5a5852a4 default
@@ -1,452 +1,464 b''
1 1 HGRC(5)
2 2 =======
3 3 Bryan O'Sullivan <bos@serpentine.com>
4 4
5 5 NAME
6 6 ----
7 7 hgrc - configuration files for Mercurial
8 8
9 9 SYNOPSIS
10 10 --------
11 11
12 12 The Mercurial system uses a set of configuration files to control
13 13 aspects of its behaviour.
14 14
15 15 FILES
16 16 -----
17 17
18 18 Mercurial reads configuration data from several files, if they exist.
19 19 The names of these files depend on the system on which Mercurial is
20 20 installed.
21 21
22 22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 23 (Unix) <install-root>/etc/mercurial/hgrc::
24 24 Per-installation configuration files, searched for in the
25 25 directory where Mercurial is installed. For example, if installed
26 26 in /shared/tools, Mercurial will look in
27 27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 28 all Mercurial commands executed by any user in any directory.
29 29
30 30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 31 (Unix) /etc/mercurial/hgrc::
32 32 (Windows) C:\Mercurial\Mercurial.ini::
33 33 Per-system configuration files, for the system on which Mercurial
34 34 is running. Options in these files apply to all Mercurial
35 35 commands executed by any user in any directory. Options in these
36 36 files override per-installation options.
37 37
38 38 (Unix) $HOME/.hgrc::
39 39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 40 (Windows) $HOME\Mercurial.ini::
41 41 Per-user configuration file, for the user running Mercurial.
42 42 Options in this file apply to all Mercurial commands executed by
43 43 this user in any directory. Options in this file override
44 44 per-installation and per-system options.
45 45 On Windows systems, one of these is chosen exclusively,
46 46 depending on whether the HOME environment variable is defined.
47 47
48 48 (Unix, Windows) <repo>/.hg/hgrc::
49 49 Per-repository configuration options that only apply in a
50 50 particular repository. This file is not version-controlled, and
51 51 will not get transferred during a "clone" operation. Options in
52 52 this file override options in all other configuration files.
53 53
54 54 SYNTAX
55 55 ------
56 56
57 57 A configuration file consists of sections, led by a "[section]" header
58 58 and followed by "name: value" entries; "name=value" is also accepted.
59 59
60 60 [spam]
61 61 eggs=ham
62 62 green=
63 63 eggs
64 64
65 65 Each line contains one entry. If the lines that follow are indented,
66 66 they are treated as continuations of that entry.
67 67
68 68 Leading whitespace is removed from values. Empty lines are skipped.
69 69
70 70 The optional values can contain format strings which refer to other
71 71 values in the same section, or values in a special DEFAULT section.
72 72
73 73 Lines beginning with "#" or ";" are ignored and may be used to provide
74 74 comments.
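The value interpolation described above behaves like Python's ConfigParser, on which Mercurial's config reader was modelled; a minimal sketch (the section and key names are invented for illustration):

```python
import configparser

# %(name)s in a value is replaced by another value from the same
# section or from the special DEFAULT section, mirroring the hgrc
# format-string behaviour described above.
cfg = configparser.ConfigParser()
cfg.read_string("""
[DEFAULT]
base = /srv/repos

[paths]
main = %(base)s/main
""")

print(cfg["paths"]["main"])  # -> /srv/repos/main
```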
75 75
76 76 SECTIONS
77 77 --------
78 78
79 79 This section describes the different sections that may appear in a
80 80 Mercurial "hgrc" file, the purpose of each section, its possible
81 81 keys, and their possible values.
82 82
83 83 decode/encode::
84 84 Filters for transforming files on checkout/checkin. This would
85 85 typically be used for newline processing or other
86 86 localization/canonicalization of files.
87 87
88 88 Filters consist of a filter pattern followed by a filter command.
89 89 Filter patterns are globs by default, rooted at the repository
90 90 root. For example, to match any file ending in ".txt" in the root
91 91 directory only, use the pattern "*.txt". To match any file ending
92 92 in ".c" anywhere in the repository, use the pattern "**.c".
93 93
94 94 The filter command can start with a specifier, either "pipe:" or
95 95 "tempfile:". If no specifier is given, "pipe:" is used by default.
96 96
97 97 A "pipe:" command must accept data on stdin and return the
98 98 transformed data on stdout.
99 99
100 100 Pipe example:
101 101
102 102 [encode]
103 103 # uncompress gzip files on checkin to improve delta compression
104 104 # note: not necessarily a good idea, just an example
105 105 *.gz = pipe: gunzip
106 106
107 107 [decode]
108 108 # recompress gzip files when writing them to the working dir (we
109 109 # can safely omit "pipe:", because it's the default)
110 110 *.gz = gzip
111 111
112 112 A "tempfile:" command is a template. The string INFILE is replaced
113 113 with the name of a temporary file that contains the data to be
114 114 filtered by the command. The string OUTFILE is replaced with the
115 115 name of an empty temporary file, where the filtered data must be
116 116 written by the command.
117 117
118 118 NOTE: the tempfile mechanism is recommended for Windows systems,
119 119 where the standard shell I/O redirection operators often have
120 120 strange effects. In particular, if you are doing line ending
121 121 conversion on Windows using the popular dos2unix and unix2dos
122 122 programs, you *must* use the tempfile mechanism, as using pipes will
123 123 corrupt the contents of your files.
124 124
125 125 Tempfile example:
126 126
127 127 [encode]
128 128 # convert files to unix line ending conventions on checkin
129 129 **.txt = tempfile: dos2unix -n INFILE OUTFILE
130 130
131 131 [decode]
132 132 # convert files to windows line ending conventions when writing
133 133 # them to the working dir
134 134 **.txt = tempfile: unix2dos -n INFILE OUTFILE
135 135
136 136 email::
137 137 Settings for extensions that send email messages.
138 138 from;;
139 139 Optional. Email address to use in "From" header and SMTP envelope
140 140 of outgoing messages.
141 141 method;;
142 142 Optional. Method to use to send email messages. If value is
143 143 "smtp" (default), use SMTP (see section "[mail]" for
144 144 configuration). Otherwise, use as the name of a program to run that
145 145 acts like sendmail (takes "-f" option for sender, list of
146 146 recipients on command line, message on stdin). Normally, setting
147 147 this to "sendmail" or "/usr/sbin/sendmail" is enough to use
148 148 sendmail to send messages.
149 149
150 150 Email example:
151 151
152 152 [email]
153 153 from = Joseph User <joe.user@example.com>
154 154 method = /usr/sbin/sendmail
155 155
156 156 extensions::
157 157 Mercurial has an extension mechanism for adding new features. To
158 158 enable an extension, create an entry for it in this section.
159 159
160 160 If you know that the extension is already in Python's search path,
161 161 you can give the name of the module, followed by "=", with nothing
162 162 after the "=".
163 163
164 164 Otherwise, give a name that you choose, followed by "=", followed by
165 165 the path to the ".py" file (including the file name extension) that
166 166 defines the extension.
167 167
168 168 Example for ~/.hgrc:
169 169
170 170 [extensions]
171 171 # (the mq extension will get loaded from mercurial's path)
172 172 hgext.mq =
173 173 # (this extension will get loaded from the file specified)
174 174 myfeature = ~/.hgext/myfeature.py
175 175
176 176 hooks::
177 177 Commands or Python functions that get automatically executed by
178 178 various actions such as starting or finishing a commit. Multiple
179 179 hooks can be run for the same action by appending a suffix to the
180 180 action. Overriding a site-wide hook can be done by changing its
181 181 value or setting it to an empty string.
182 182
183 183 Example .hg/hgrc:
184 184
185 185 [hooks]
186 186 # do not use the site-wide hook
187 187 incoming =
188 188 incoming.email = /my/email/hook
189 189 incoming.autobuild = /my/build/hook
190 190
191 191 Most hooks are run with environment variables set that give added
192 192 useful information. For each hook below, the environment variables
193 193 it is passed are listed with names of the form "$HG_foo".
194 194
195 195 changegroup;;
196 196 Run after a changegroup has been added via push, pull or
197 197 unbundle. ID of the first new changeset is in $HG_NODE.
198 198 commit;;
199 199 Run after a changeset has been created in the local repository.
200 200 ID of the newly created changeset is in $HG_NODE. Parent
201 201 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
202 202 incoming;;
203 203 Run after a changeset has been pulled, pushed, or unbundled into
204 204 the local repository. The ID of the newly arrived changeset is in
205 205 $HG_NODE.
206 206 outgoing;;
207 207 Run after sending changes from local repository to another. ID of
208 208 first changeset sent is in $HG_NODE. Source of operation is in
209 209 $HG_SOURCE; see "preoutgoing" hook for description.
210 210 prechangegroup;;
211 211 Run before a changegroup is added via push, pull or unbundle.
212 212 Exit status 0 allows the changegroup to proceed. Non-zero status
213 213 will cause the push, pull or unbundle to fail.
214 214 precommit;;
215 215 Run before starting a local commit. Exit status 0 allows the
216 216 commit to proceed. Non-zero status will cause the commit to fail.
217 217 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
218 218 preoutgoing;;
219 219 Run before computing changes to send from the local repository to
220 220 another. Non-zero status will cause failure. This lets you
221 221 prevent pulls over http or ssh. It also runs for local pull,
222 222 push (outbound) and bundle commands, but is not effective against
223 223 them, since you can simply copy the files instead. Source of operation is in
224 224 $HG_SOURCE. If "serve", operation is happening on behalf of
225 225 remote ssh or http repository. If "push", "pull" or "bundle",
226 226 operation is happening on behalf of repository on same system.
227 227 pretag;;
228 228 Run before creating a tag. Exit status 0 allows the tag to be
229 229 created. Non-zero status will cause the tag to fail. ID of
230 230 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
231 231 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
232 232 pretxnchangegroup;;
233 233 Run after a changegroup has been added via push, pull or unbundle,
234 234 but before the transaction has been committed. Changegroup is
235 235 visible to hook program. This lets you validate incoming changes
236 236 before accepting them. Passed the ID of the first new changeset
237 237 in $HG_NODE. Exit status 0 allows the transaction to commit.
238 238 Non-zero status will cause the transaction to be rolled back and
239 239 the push, pull or unbundle will fail.
240 240 pretxncommit;;
241 241 Run after a changeset has been created but the transaction not yet
242 242 committed. Changeset is visible to hook program. This lets you
243 243 validate commit message and changes. Exit status 0 allows the
244 244 commit to proceed. Non-zero status will cause the transaction to
245 245 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
246 246 IDs are in $HG_PARENT1 and $HG_PARENT2.
247 247 preupdate;;
248 248 Run before updating the working directory. Exit status 0 allows
249 249 the update to proceed. Non-zero status will prevent the update.
250 250 Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
251 251 of second new parent is in $HG_PARENT2.
252 252 tag;;
253 253 Run after a tag is created. ID of tagged changeset is in
254 254 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
255 255 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
256 256 update;;
257 257 Run after updating the working directory. Changeset ID of first
258 258 new parent is in $HG_PARENT1. If merge, ID of second new parent
259 259 is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
260 260 failed (e.g. because conflicts not resolved), $HG_ERROR=1.
261 261
262 262 Note: In earlier releases, the names of hook environment variables
263 263 did not have a "HG_" prefix. The old unprefixed names are no longer
264 264 provided in the environment.
265 265
266 266 The syntax for Python hooks is as follows:
267 267
268 268 hookname = python:modulename.submodule.callable
269 269
270 270 Python hooks are run within the Mercurial process. Each hook is
271 271 called with at least three keyword arguments: a ui object (keyword
272 272 "ui"), a repository object (keyword "repo"), and a "hooktype"
273 273 keyword that tells what kind of hook is used. Arguments listed as
274 274 environment variables above are passed as keyword arguments, with no
275 275 "HG_" prefix, and names in lower case.
276 276
277 277 A Python hook must return a "true" value to succeed. Returning a
278 278 "false" value or raising an exception is treated as failure of the
279 279 hook.
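A minimal Python hook matching the calling convention above might look like this (the module and function names are illustrative assumptions, not part of the change):

```python
# myhooks.py -- a hook run inside the Mercurial process.
# It receives ui, repo and hooktype as keyword arguments, plus the
# hook's environment variables with the HG_ prefix stripped and the
# names lower-cased (e.g. $HG_PARENT1 arrives as parent1).
def precommit_check(ui=None, repo=None, hooktype=None, **kwargs):
    parent1 = kwargs.get("parent1", "")
    # a "true" return value means success; a "false" value or an
    # exception makes the hook (and thus the commit) fail
    return bool(hooktype == "precommit")
```

It would be enabled with an hgrc entry such as `precommit.check = python:myhooks.precommit_check` (assuming myhooks is on Python's search path).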
280 280
281 281 http_proxy::
282 282 Used to access web-based Mercurial repositories through an HTTP
283 283 proxy.
284 284 host;;
285 285 Host name and (optional) port of the proxy server, for example
286 286 "myproxy:8000".
287 287 no;;
288 288 Optional. Comma-separated list of host names that should bypass
289 289 the proxy.
290 290 passwd;;
291 291 Optional. Password to authenticate with at the proxy server.
292 292 user;;
293 293 Optional. User name to authenticate with at the proxy server.
294 294
295 295 smtp::
296 296 Configuration for extensions that need to send email messages.
297 297 host;;
298 298 Optional. Host name of mail server. Default: "mail".
299 299 port;;
300 300 Optional. Port to connect to on mail server. Default: 25.
301 301 tls;;
302 302 Optional. Whether to connect to mail server using TLS. True or
303 303 False. Default: False.
304 304 username;;
305 305 Optional. User name to authenticate to SMTP server with.
306 306 If username is specified, password must also be specified.
307 307 Default: none.
308 308 password;;
309 309 Optional. Password to authenticate to SMTP server with.
310 310 If username is specified, password must also be specified.
311 311 Default: none.
312 312 local_hostname;;
313 313 Optional. The hostname that the sender can use to identify itself
314 314 to the MTA.
315 315
316 316 paths::
317 317 Assigns symbolic names to repositories. The left side is the
318 318 symbolic name, and the right gives the directory or URL that is the
319 319 location of the repository. Default paths can be declared by
320 setting the following entries.
320 setting the following entries.
321 321 default;;
322 322 Directory or URL to use when pulling if no source is specified.
323 323 Default is set to repository from which the current repository
324 324 was cloned.
325 325 default-push;;
326 326 Optional. Directory or URL to use when pushing if no destination
327 327 is specified.
328 328
329 server::
330 Controls generic server settings.
331 stream;;
332 Whether to allow clients to clone a repo using the uncompressed
333 streaming protocol. This transfers about 40% more data than a
334 regular clone, but uses less memory and CPU on both server and
335 client. Over a LAN (100Mbps or better) or a very fast WAN, an
336 uncompressed streaming clone is a lot faster (~10x) than a regular
337 clone. Over most WAN connections (anything slower than about
338 6Mbps), uncompressed streaming is slower, because of the extra
339 data transfer overhead. Default is False.
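Since this change turns streaming off by default, a server on a fast network that wants to keep offering uncompressed clones would opt back in; a sketch of the hgrc entry:

```ini
[server]
# re-enable uncompressed streaming clones (off by default as of
# this change); worthwhile on a LAN or very fast WAN
stream = True
```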
340
329 341 ui::
330 342 User interface controls.
331 343 debug;;
332 344 Print debugging information. True or False. Default is False.
333 345 editor;;
334 346 The editor to use during a commit. Default is $EDITOR or "vi".
335 347 ignore;;
336 348 A file to read per-user ignore patterns from. This file should be in
337 349 the same format as a repository-wide .hgignore file. This option
338 350 supports hook syntax, so if you want to specify multiple ignore
339 351 files, you can do so by setting something like
340 352 "ignore.other = ~/.hgignore2". For details of the ignore file
341 353 format, see the hgignore(5) man page.
342 354 interactive;;
343 355 Whether to allow prompting the user. True or False. Default is True.
344 356 logtemplate;;
345 357 Template string for commands that print changesets.
346 358 style;;
347 359 Name of style to use for command output.
348 360 merge;;
349 361 The conflict resolution program to use during a manual merge.
350 362 Default is "hgmerge".
351 363 quiet;;
352 364 Reduce the amount of output printed. True or False. Default is False.
353 365 remotecmd;;
354 366 The remote command to use for clone/push/pull operations. Default is 'hg'.
355 367 ssh;;
356 368 The command to use for SSH connections. Default is 'ssh'.
357 369 timeout;;
358 370 The timeout used when a lock is held (in seconds), a negative value
359 371 means no timeout. Default is 600.
360 372 username;;
361 373 The committer of a changeset created when running "commit".
362 374 Typically a person's name and email address, e.g. "Fred Widget
363 375 <fred@example.com>". Default is $EMAIL or username@hostname, unless
364 376 username is set to an empty string, which enforces specifying the
365 377 username manually.
366 378 verbose;;
367 379 Increase the amount of output printed. True or False. Default is False.
368 380
369 381
370 382 web::
371 383 Web interface configuration.
372 384 accesslog;;
373 385 Where to output the access log. Default is stdout.
374 386 address;;
375 387 Interface address to bind to. Default is all.
376 388 allow_archive;;
377 389 List of archive formats (bz2, gz, zip) allowed for downloading.
378 390 Default is empty.
379 391 allowbz2;;
380 392 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
381 393 Default is false.
382 394 allowgz;;
383 395 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
384 396 Default is false.
385 397 allowpull;;
386 398 Whether to allow pulling from the repository. Default is true.
387 399 allow_push;;
388 400 Whether to allow pushing to the repository. If empty or not set,
389 401 push is not allowed. If the special value "*", any remote user
390 402 can push, including unauthenticated users. Otherwise, the remote
391 403 user must have been authenticated, and the authenticated user name
392 404 must be present in this list (separated by whitespace or ",").
393 405 The contents of the allow_push list are examined after the
394 406 deny_push list.
395 407 allowzip;;
396 408 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
397 409 Default is false. This feature creates temporary files.
398 410 baseurl;;
399 411 Base URL to use when publishing URLs in other locations, so
400 412 third-party tools like email notification hooks can construct URLs.
401 413 Example: "http://hgserver/repos/"
402 414 contact;;
403 415 Name or email address of the person in charge of the repository.
404 416 Default is "unknown".
405 417 deny_push;;
406 418 Whether to deny pushing to the repository. If empty or not set,
407 419 push is not denied. If the special value "*", all remote users
408 420 are denied push. Otherwise, unauthenticated users are all denied,
409 421 and any authenticated user name present in this list (separated by
410 422 whitespace or ",") is also denied. The contents of the deny_push
411 423 list are examined before the allow_push list.
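The allow_push/deny_push evaluation order described above can be sketched as a small predicate (the function name and the None-means-unauthenticated convention are assumptions for illustration):

```python
def may_push(user, allow_push, deny_push):
    """Return True if `user` (None = unauthenticated) may push.

    Mirrors the rules above: deny_push is examined first, then
    allow_push; the special value "*" matches every remote user.
    """
    if deny_push:
        if "*" in deny_push:
            return False
        # a non-empty deny list also rejects all unauthenticated users
        if user is None or user in deny_push:
            return False
    if not allow_push:
        return False            # empty/unset allow list: no pushing
    if "*" in allow_push:
        return True             # anyone may push, even unauthenticated
    return user is not None and user in allow_push
```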
412 424 description;;
413 425 Textual description of the repository's purpose or contents.
414 426 Default is "unknown".
415 427 errorlog;;
416 428 Where to output the error log. Default is stderr.
417 429 ipv6;;
418 430 Whether to use IPv6. Default is false.
419 431 name;;
420 432 Repository name to use in the web interface. Default is current
421 433 working directory.
422 434 maxchanges;;
423 435 Maximum number of changes to list on the changelog. Default is 10.
424 436 maxfiles;;
425 437 Maximum number of files to list per changeset. Default is 10.
426 438 port;;
427 439 Port to listen on. Default is 8000.
428 440 push_ssl;;
429 441 Whether to require that inbound pushes be transported over SSL to
430 442 prevent password sniffing. Default is true.
431 443 style;;
432 444 Which template map style to use.
433 445 templates;;
434 446 Where to find the HTML templates. Default is install path.
435 447
436 448
437 449 AUTHOR
438 450 ------
439 451 Bryan O'Sullivan <bos@serpentine.com>.
440 452
441 453 Mercurial was written by Matt Mackall <mpm@selenic.com>.
442 454
443 455 SEE ALSO
444 456 --------
445 457 hg(1), hgignore(5)
446 458
447 459 COPYING
448 460 -------
449 461 This manual page is copyright 2005 Bryan O'Sullivan.
450 462 Mercurial is copyright 2005, 2006 Matt Mackall.
451 463 Free use of this software is granted under the terms of the GNU General
452 464 Public License (GPL).
@@ -1,208 +1,209 b''
1 1 # hg.py - repository classes for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import *
9 9 from repo import *
10 10 from demandload import *
11 11 from i18n import gettext as _
12 12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
13 13 demandload(globals(), "errno lock os shutil util")
14 14
15 15 def bundle(ui, path):
16 16 if path.startswith('bundle://'):
17 17 path = path[9:]
18 18 else:
19 19 path = path[7:]
20 20 s = path.split("+", 1)
21 21 if len(s) == 1:
22 22 repopath, bundlename = "", s[0]
23 23 else:
24 24 repopath, bundlename = s
25 25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
26 26
27 27 def hg(ui, path):
28 28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
29 29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
30 30
31 31 def local_(ui, path, create=0):
32 32 if path.startswith('file:'):
33 33 path = path[5:]
34 34 return localrepo.localrepository(ui, path, create)
35 35
36 36 def ssh_(ui, path, create=0):
37 37 return sshrepo.sshrepository(ui, path, create)
38 38
39 39 def old_http(ui, path):
40 40 ui.warn(_("old-http:// syntax is deprecated, "
41 41 "please use static-http:// instead\n"))
42 42 return statichttprepo.statichttprepository(
43 43 ui, path.replace("old-http://", "http://"))
44 44
45 45 def static_http(ui, path):
46 46 return statichttprepo.statichttprepository(
47 47 ui, path.replace("static-http://", "http://"))
48 48
49 49 schemes = {
50 50 'bundle': bundle,
51 51 'file': local_,
52 52 'hg': hg,
53 53 'http': lambda ui, path: httprepo.httprepository(ui, path),
54 54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
55 55 'old-http': old_http,
56 56 'ssh': ssh_,
57 57 'static-http': static_http,
58 58 }
59 59
60 60 def repository(ui, path=None, create=0):
61 61 scheme = None
62 62 if path:
63 63 c = path.find(':')
64 64 if c > 0:
65 65 scheme = schemes.get(path[:c])
66 66 else:
67 67 path = ''
68 68 ctor = scheme or schemes['file']
69 69 if create:
70 70 try:
71 71 return ctor(ui, path, create)
72 72 except TypeError:
73 73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
74 74 scheme)
75 75 return ctor(ui, path)
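The scheme lookup in repository() amounts to a prefix dispatch on the URL: everything before the first ":" selects a constructor, and unknown or missing schemes fall back to the local-file handler. Stripped of the Mercurial repository classes, the idea is (names are placeholders):

```python
# simplified sketch of the dispatch in repository(): c > 0 keeps
# one-letter Windows drive prefixes like "C:" from matching, and
# anything unrecognized falls back to the "file" handler.
def pick_scheme(path, schemes, default="file"):
    c = path.find(":")
    if c > 0 and path[:c] in schemes:
        return path[:c]
    return default

schemes = {"file", "http", "https", "ssh", "bundle", "static-http"}
print(pick_scheme("ssh://host/repo", schemes))   # -> ssh
print(pick_scheme("/local/path", schemes))       # -> file
```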
76 76
77 77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
78 78 stream=False):
79 79 """Make a copy of an existing repository.
80 80
81 81 Create a copy of an existing repository in a new directory. The
82 82 source and destination are URLs, as passed to the repository
83 83 function. Returns a pair of repository objects, the source and
84 84 newly created destination.
85 85
86 86 The location of the source is added to the new repository's
87 87 .hg/hgrc file, as the default to be used for future pulls and
88 88 pushes.
89 89
90 90 If an exception is raised, the partly cloned/updated destination
91 91 repository will be deleted.
92 92
93 93 Keyword arguments:
94 94
95 95 dest: URL of destination repository to create (defaults to base
96 96 name of source repository)
97 97
98 98 pull: always pull from source repository, even in local case
99 99
100 stream: stream from repository (fast over LAN, slow over WAN)
100 stream: stream raw data uncompressed from repository (fast over
101 LAN, slow over WAN)
101 102
102 103 rev: revision to clone up to (implies pull=True)
103 104
104 105 update: update working directory after clone completes, if
105 106 destination is local repository
106 107 """
107 108 if dest is None:
108 109 dest = os.path.basename(os.path.normpath(source))
109 110
110 111 if os.path.exists(dest):
111 112 raise util.Abort(_("destination '%s' already exists"), dest)
112 113
113 114 class DirCleanup(object):
114 115 def __init__(self, dir_):
115 116 self.rmtree = shutil.rmtree
116 117 self.dir_ = dir_
117 118 def close(self):
118 119 self.dir_ = None
119 120 def __del__(self):
120 121 if self.dir_:
121 122 self.rmtree(self.dir_, True)
122 123
123 124 src_repo = repository(ui, source)
124 125
125 126 dest_repo = None
126 127 try:
127 128 dest_repo = repository(ui, dest)
128 129 raise util.Abort(_("destination '%s' already exists." % dest))
129 130 except RepoError:
130 131 dest_repo = repository(ui, dest, create=True)
131 132
132 133 dest_path = None
133 134 dir_cleanup = None
134 135 if dest_repo.local():
135 136 dest_path = os.path.realpath(dest)
136 137 dir_cleanup = DirCleanup(dest_path)
137 138
138 139 abspath = source
139 140 copy = False
140 141 if src_repo.local() and dest_repo.local():
141 142 abspath = os.path.abspath(source)
142 143 copy = not pull and not rev
143 144
144 145 src_lock, dest_lock = None, None
145 146 if copy:
146 147 try:
147 148 # we use a lock here because if we race with commit, we
148 149 # can end up with extra data in the cloned revlogs that's
149 150 # not pointed to by changesets, thus causing verify to
150 151 # fail
151 152 src_lock = src_repo.lock()
152 153 except lock.LockException:
153 154 copy = False
154 155
155 156 if copy:
156 157 # we lock here to avoid premature writing to the target
157 158 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158 159
159 160 # we need to remove the (empty) data dir in dest so copyfiles
160 161 # can do its work
161 162 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 163 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 164 for f in files.split():
164 165 src = os.path.join(source, ".hg", f)
165 166 dst = os.path.join(dest_path, ".hg", f)
166 167 try:
167 168 util.copyfiles(src, dst)
168 169 except OSError, inst:
169 170 if inst.errno != errno.ENOENT:
170 171 raise
171 172
172 173 # we need to re-init the repo after manually copying the data
173 174 # into it
174 175 dest_repo = repository(ui, dest)
175 176
176 177 else:
177 178 revs = None
178 179 if rev:
179 180 if not src_repo.local():
180 181 raise util.Abort(_("clone by revision not supported yet "
181 182 "for remote repositories"))
182 183 revs = [src_repo.lookup(r) for r in rev]
183 184
184 185 if dest_repo.local():
185 186 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 187 elif src_repo.local():
187 188 src_repo.push(dest_repo, revs=revs)
188 189 else:
189 190 raise util.Abort(_("clone from remote to remote not supported"))
190 191
191 192 if src_lock:
192 193 src_lock.release()
193 194
194 195 if dest_repo.local():
195 196 fp = dest_repo.opener("hgrc", "w", text=True)
196 197 fp.write("[paths]\n")
197 198 fp.write("default = %s\n" % abspath)
198 199 fp.close()
199 200
200 201 if dest_lock:
201 202 dest_lock.release()
202 203
203 204 if update:
204 205 dest_repo.update(dest_repo.changelog.tip())
205 206 if dir_cleanup:
206 207 dir_cleanup.close()
207 208
208 209 return src_repo, dest_repo
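The DirCleanup helper in clone() implements a commit/rollback idiom: the destination directory is deleted when the object is collected unless close() was called to mark success. A standalone sketch of the same pattern (modern Python; the demo names are assumptions):

```python
import os
import shutil
import tempfile

class DirCleanup:
    """Remove dir_ on garbage collection unless close() was called."""
    def __init__(self, dir_):
        # keep a direct reference to rmtree: module globals may
        # already be torn down when __del__ runs at interpreter exit
        self.rmtree = shutil.rmtree
        self.dir_ = dir_
    def close(self):
        self.dir_ = None              # success: keep the directory
    def __del__(self):
        if self.dir_:
            self.rmtree(self.dir_, True)

d = tempfile.mkdtemp()
cleanup = DirCleanup(d)
del cleanup                           # no close(): directory is removed
print(os.path.exists(d))              # -> False
```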
@@ -1,957 +1,960 b''
1 1 # hgweb/hgweb_mod.py - Web interface for a repository.
2 2 #
3 3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 5 #
6 6 # This software may be used and distributed according to the terms
7 7 # of the GNU General Public License, incorporated herein by reference.
8 8
9 9 import os
10 10 import os.path
11 11 import mimetypes
12 12 from mercurial.demandload import demandload
13 13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
14 14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
15 15 demandload(globals(), "mercurial:templater")
16 16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
17 17 from mercurial.node import *
18 18 from mercurial.i18n import gettext as _
19 19
20 20 def _up(p):
21 21 if p[0] != "/":
22 22 p = "/" + p
23 23 if p[-1] == "/":
24 24 p = p[:-1]
25 25 up = os.path.dirname(p)
26 26 if up == "/":
27 27 return "/"
28 28 return up + "/"
29 29
30 30 class hgweb(object):
31 31 def __init__(self, repo, name=None):
32 32 if type(repo) == type(""):
33 33 self.repo = hg.repository(ui.ui(), repo)
34 34 else:
35 35 self.repo = repo
36 36
37 37 self.mtime = -1
38 38 self.reponame = name
39 39 self.archives = 'zip', 'gz', 'bz2'
40 40 self.templatepath = self.repo.ui.config("web", "templates",
41 41 templater.templatepath())
42 42
43 43 def refresh(self):
44 44 mtime = get_mtime(self.repo.root)
45 45 if mtime != self.mtime:
46 46 self.mtime = mtime
47 47 self.repo = hg.repository(self.repo.ui, self.repo.root)
48 48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
49 49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
50 50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
51 51
52 52 def archivelist(self, nodeid):
53 53 allowed = self.repo.ui.configlist("web", "allow_archive")
54 54 for i in self.archives:
55 55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
56 56 yield {"type" : i, "node" : nodeid, "url": ""}
57 57
58 58 def listfiles(self, files, mf):
59 59 for f in files[:self.maxfiles]:
60 60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
61 61 if len(files) > self.maxfiles:
62 62 yield self.t("fileellipses")
63 63
64 64 def listfilediffs(self, files, changeset):
65 65 for f in files[:self.maxfiles]:
66 66 yield self.t("filedifflink", node=hex(changeset), file=f)
67 67 if len(files) > self.maxfiles:
68 68 yield self.t("fileellipses")
69 69
70 70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
71 71 if not rev:
72 72 rev = lambda x: ""
73 73 siblings = [s for s in siblings if s != nullid]
74 74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
75 75 return
76 76 for s in siblings:
77 77 yield dict(node=hex(s), rev=rev(s), **args)
78 78
79 79 def renamelink(self, fl, node):
80 80 r = fl.renamed(node)
81 81 if r:
82 82 return [dict(file=r[0], node=hex(r[1]))]
83 83 return []
84 84
85 85 def showtag(self, t1, node=nullid, **args):
86 86 for t in self.repo.nodetags(node):
87 87 yield self.t(t1, tag=t, **args)
88 88
89 89 def diff(self, node1, node2, files):
90 90 def filterfiles(filters, files):
91 91 l = [x for x in files if x in filters]
92 92
93 93 for t in filters:
94 94 if t and t[-1] != os.sep:
95 95 t += os.sep
96 96 l += [x for x in files if x.startswith(t)]
97 97 return l
98 98
99 99 parity = [0]
100 100 def diffblock(diff, f, fn):
101 101 yield self.t("diffblock",
102 102 lines=prettyprintlines(diff),
103 103 parity=parity[0],
104 104 file=f,
105 105 filenode=hex(fn or nullid))
106 106 parity[0] = 1 - parity[0]
107 107
108 108 def prettyprintlines(diff):
109 109 for l in diff.splitlines(1):
110 110 if l.startswith('+'):
111 111 yield self.t("difflineplus", line=l)
112 112 elif l.startswith('-'):
113 113 yield self.t("difflineminus", line=l)
114 114 elif l.startswith('@'):
115 115 yield self.t("difflineat", line=l)
116 116 else:
117 117 yield self.t("diffline", line=l)
118 118
119 119 r = self.repo
120 120 cl = r.changelog
121 121 mf = r.manifest
122 122 change1 = cl.read(node1)
123 123 change2 = cl.read(node2)
124 124 mmap1 = mf.read(change1[0])
125 125 mmap2 = mf.read(change2[0])
126 126 date1 = util.datestr(change1[2])
127 127 date2 = util.datestr(change2[2])
128 128
129 129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
130 130 if files:
131 131 modified, added, removed = map(lambda x: filterfiles(files, x),
132 132 (modified, added, removed))
133 133
134 134 diffopts = self.repo.ui.diffopts()
135 135 showfunc = diffopts['showfunc']
136 136 ignorews = diffopts['ignorews']
137 137 ignorewsamount = diffopts['ignorewsamount']
138 138 ignoreblanklines = diffopts['ignoreblanklines']
139 139 for f in modified:
140 140 to = r.file(f).read(mmap1[f])
141 141 tn = r.file(f).read(mmap2[f])
142 142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
143 143 showfunc=showfunc, ignorews=ignorews,
144 144 ignorewsamount=ignorewsamount,
145 145 ignoreblanklines=ignoreblanklines), f, tn)
146 146 for f in added:
147 147 to = None
148 148 tn = r.file(f).read(mmap2[f])
149 149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
150 150 showfunc=showfunc, ignorews=ignorews,
151 151 ignorewsamount=ignorewsamount,
152 152 ignoreblanklines=ignoreblanklines), f, tn)
153 153 for f in removed:
154 154 to = r.file(f).read(mmap1[f])
155 155 tn = None
156 156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
157 157 showfunc=showfunc, ignorews=ignorews,
158 158 ignorewsamount=ignorewsamount,
159 159 ignoreblanklines=ignoreblanklines), f, tn)
160 160
161 161 def changelog(self, pos):
162 162 def changenav(**map):
163 163 def seq(factor, maxchanges=None):
164 164 if maxchanges:
165 165 yield maxchanges
166 166 if maxchanges >= 20 and maxchanges <= 40:
167 167 yield 50
168 168 else:
169 169 yield 1 * factor
170 170 yield 3 * factor
171 171 for f in seq(factor * 10):
172 172 yield f
173 173
174 174 l = []
175 175 last = 0
176 176 for f in seq(1, self.maxchanges):
177 177 if f < self.maxchanges or f <= last:
178 178 continue
179 179 if f > count:
180 180 break
181 181 last = f
182 182 r = "%d" % f
183 183 if pos + f < count:
184 184 l.append(("+" + r, pos + f))
185 185 if pos - f >= 0:
186 186 l.insert(0, ("-" + r, pos - f))
187 187
188 188 yield {"rev": 0, "label": "(0)"}
189 189
190 190 for label, rev in l:
191 191 yield {"label": label, "rev": rev}
192 192
193 193 yield {"label": "tip", "rev": "tip"}
194 194
195 195 def changelist(**map):
196 196 parity = (start - end) & 1
197 197 cl = self.repo.changelog
198 198 l = [] # build a list in forward order for efficiency
199 199 for i in range(start, end):
200 200 n = cl.node(i)
201 201 changes = cl.read(n)
202 202 hn = hex(n)
203 203
204 204 l.insert(0, {"parity": parity,
205 205 "author": changes[1],
206 206 "parent": self.siblings(cl.parents(n), cl.rev,
207 207 cl.rev(n) - 1),
208 208 "child": self.siblings(cl.children(n), cl.rev,
209 209 cl.rev(n) + 1),
210 210 "changelogtag": self.showtag("changelogtag",n),
211 211 "manifest": hex(changes[0]),
212 212 "desc": changes[4],
213 213 "date": changes[2],
214 214 "files": self.listfilediffs(changes[3], n),
215 215 "rev": i,
216 216 "node": hn})
217 217 parity = 1 - parity
218 218
219 219 for e in l:
220 220 yield e
221 221
222 222 cl = self.repo.changelog
223 223 mf = cl.read(cl.tip())[0]
224 224 count = cl.count()
225 225 start = max(0, pos - self.maxchanges + 1)
226 226 end = min(count, start + self.maxchanges)
227 227 pos = end - 1
228 228
229 229 yield self.t('changelog',
230 230 changenav=changenav,
231 231 manifest=hex(mf),
232 232 rev=pos, changesets=count, entries=changelist,
233 233 archives=self.archivelist("tip"))
234 234
235 235 def search(self, query):
236 236
237 237 def changelist(**map):
238 238 cl = self.repo.changelog
239 239 count = 0
240 240 qw = query.lower().split()
241 241
242 242 def revgen():
 243 243 for i in range(cl.count(), 0, -100):
244 244 l = []
245 245 for j in range(max(0, i - 100), i):
246 246 n = cl.node(j)
247 247 changes = cl.read(n)
248 248 l.append((n, j, changes))
249 249 l.reverse()
250 250 for e in l:
251 251 yield e
252 252
253 253 for n, i, changes in revgen():
254 254 miss = 0
255 255 for q in qw:
256 256 if not (q in changes[1].lower() or
257 257 q in changes[4].lower() or
258 258 q in " ".join(changes[3][:20]).lower()):
259 259 miss = 1
260 260 break
261 261 if miss:
262 262 continue
263 263
264 264 count += 1
265 265 hn = hex(n)
266 266
267 267 yield self.t('searchentry',
268 268 parity=count & 1,
269 269 author=changes[1],
270 270 parent=self.siblings(cl.parents(n), cl.rev),
271 271 child=self.siblings(cl.children(n), cl.rev),
272 272 changelogtag=self.showtag("changelogtag",n),
273 273 manifest=hex(changes[0]),
274 274 desc=changes[4],
275 275 date=changes[2],
276 276 files=self.listfilediffs(changes[3], n),
277 277 rev=i,
278 278 node=hn)
279 279
280 280 if count >= self.maxchanges:
281 281 break
282 282
283 283 cl = self.repo.changelog
284 284 mf = cl.read(cl.tip())[0]
285 285
286 286 yield self.t('search',
287 287 query=query,
288 288 manifest=hex(mf),
289 289 entries=changelist)
290 290
291 291 def changeset(self, nodeid):
292 292 cl = self.repo.changelog
293 293 n = self.repo.lookup(nodeid)
294 294 nodeid = hex(n)
295 295 changes = cl.read(n)
296 296 p1 = cl.parents(n)[0]
297 297
298 298 files = []
299 299 mf = self.repo.manifest.read(changes[0])
300 300 for f in changes[3]:
301 301 files.append(self.t("filenodelink",
302 302 filenode=hex(mf.get(f, nullid)), file=f))
303 303
304 304 def diff(**map):
305 305 yield self.diff(p1, n, None)
306 306
307 307 yield self.t('changeset',
308 308 diff=diff,
309 309 rev=cl.rev(n),
310 310 node=nodeid,
311 311 parent=self.siblings(cl.parents(n), cl.rev),
312 312 child=self.siblings(cl.children(n), cl.rev),
313 313 changesettag=self.showtag("changesettag",n),
314 314 manifest=hex(changes[0]),
315 315 author=changes[1],
316 316 desc=changes[4],
317 317 date=changes[2],
318 318 files=files,
319 319 archives=self.archivelist(nodeid))
320 320
321 321 def filelog(self, f, filenode):
322 322 cl = self.repo.changelog
323 323 fl = self.repo.file(f)
324 324 filenode = hex(fl.lookup(filenode))
325 325 count = fl.count()
326 326
327 327 def entries(**map):
328 328 l = []
329 329 parity = (count - 1) & 1
330 330
331 331 for i in range(count):
332 332 n = fl.node(i)
333 333 lr = fl.linkrev(n)
334 334 cn = cl.node(lr)
335 335 cs = cl.read(cl.node(lr))
336 336
337 337 l.insert(0, {"parity": parity,
338 338 "filenode": hex(n),
339 339 "filerev": i,
340 340 "file": f,
341 341 "node": hex(cn),
342 342 "author": cs[1],
343 343 "date": cs[2],
344 344 "rename": self.renamelink(fl, n),
345 345 "parent": self.siblings(fl.parents(n),
346 346 fl.rev, file=f),
347 347 "child": self.siblings(fl.children(n),
348 348 fl.rev, file=f),
349 349 "desc": cs[4]})
350 350 parity = 1 - parity
351 351
352 352 for e in l:
353 353 yield e
354 354
355 355 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
356 356
357 357 def filerevision(self, f, node):
358 358 fl = self.repo.file(f)
359 359 n = fl.lookup(node)
360 360 node = hex(n)
361 361 text = fl.read(n)
362 362 changerev = fl.linkrev(n)
363 363 cl = self.repo.changelog
364 364 cn = cl.node(changerev)
365 365 cs = cl.read(cn)
366 366 mfn = cs[0]
367 367
368 368 mt = mimetypes.guess_type(f)[0]
369 369 rawtext = text
370 370 if util.binary(text):
371 371 mt = mt or 'application/octet-stream'
372 372 text = "(binary:%s)" % mt
373 373 mt = mt or 'text/plain'
374 374
375 375 def lines():
376 376 for l, t in enumerate(text.splitlines(1)):
377 377 yield {"line": t,
378 378 "linenumber": "% 6d" % (l + 1),
379 379 "parity": l & 1}
380 380
381 381 yield self.t("filerevision",
382 382 file=f,
383 383 filenode=node,
384 384 path=_up(f),
385 385 text=lines(),
386 386 raw=rawtext,
387 387 mimetype=mt,
388 388 rev=changerev,
389 389 node=hex(cn),
390 390 manifest=hex(mfn),
391 391 author=cs[1],
392 392 date=cs[2],
393 393 parent=self.siblings(fl.parents(n), fl.rev, file=f),
394 394 child=self.siblings(fl.children(n), fl.rev, file=f),
395 395 rename=self.renamelink(fl, n),
396 396 permissions=self.repo.manifest.readflags(mfn)[f])
397 397
398 398 def fileannotate(self, f, node):
399 399 bcache = {}
400 400 ncache = {}
401 401 fl = self.repo.file(f)
402 402 n = fl.lookup(node)
403 403 node = hex(n)
404 404 changerev = fl.linkrev(n)
405 405
406 406 cl = self.repo.changelog
407 407 cn = cl.node(changerev)
408 408 cs = cl.read(cn)
409 409 mfn = cs[0]
410 410
411 411 def annotate(**map):
412 412 parity = 1
413 413 last = None
414 414 for r, l in fl.annotate(n):
415 415 try:
416 416 cnode = ncache[r]
417 417 except KeyError:
418 418 cnode = ncache[r] = self.repo.changelog.node(r)
419 419
420 420 try:
421 421 name = bcache[r]
422 422 except KeyError:
423 423 cl = self.repo.changelog.read(cnode)
424 424 bcache[r] = name = self.repo.ui.shortuser(cl[1])
425 425
426 426 if last != cnode:
427 427 parity = 1 - parity
428 428 last = cnode
429 429
430 430 yield {"parity": parity,
431 431 "node": hex(cnode),
432 432 "rev": r,
433 433 "author": name,
434 434 "file": f,
435 435 "line": l}
436 436
437 437 yield self.t("fileannotate",
438 438 file=f,
439 439 filenode=node,
440 440 annotate=annotate,
441 441 path=_up(f),
442 442 rev=changerev,
443 443 node=hex(cn),
444 444 manifest=hex(mfn),
445 445 author=cs[1],
446 446 date=cs[2],
447 447 rename=self.renamelink(fl, n),
448 448 parent=self.siblings(fl.parents(n), fl.rev, file=f),
449 449 child=self.siblings(fl.children(n), fl.rev, file=f),
450 450 permissions=self.repo.manifest.readflags(mfn)[f])
451 451
452 452 def manifest(self, mnode, path):
453 453 man = self.repo.manifest
454 454 mn = man.lookup(mnode)
455 455 mnode = hex(mn)
456 456 mf = man.read(mn)
457 457 rev = man.rev(mn)
458 458 changerev = man.linkrev(mn)
459 459 node = self.repo.changelog.node(changerev)
460 460 mff = man.readflags(mn)
461 461
462 462 files = {}
463 463
464 464 p = path[1:]
465 465 if p and p[-1] != "/":
466 466 p += "/"
467 467 l = len(p)
468 468
469 469 for f,n in mf.items():
470 470 if f[:l] != p:
471 471 continue
472 472 remain = f[l:]
473 473 if "/" in remain:
474 474 short = remain[:remain.index("/") + 1] # bleah
475 475 files[short] = (f, None)
476 476 else:
477 477 short = os.path.basename(remain)
478 478 files[short] = (f, n)
479 479
480 480 def filelist(**map):
481 481 parity = 0
482 482 fl = files.keys()
483 483 fl.sort()
484 484 for f in fl:
485 485 full, fnode = files[f]
486 486 if not fnode:
487 487 continue
488 488
489 489 yield {"file": full,
490 490 "manifest": mnode,
491 491 "filenode": hex(fnode),
492 492 "parity": parity,
493 493 "basename": f,
494 494 "permissions": mff[full]}
495 495 parity = 1 - parity
496 496
497 497 def dirlist(**map):
498 498 parity = 0
499 499 fl = files.keys()
500 500 fl.sort()
501 501 for f in fl:
502 502 full, fnode = files[f]
503 503 if fnode:
504 504 continue
505 505
506 506 yield {"parity": parity,
507 507 "path": os.path.join(path, f),
508 508 "manifest": mnode,
509 509 "basename": f[:-1]}
510 510 parity = 1 - parity
511 511
512 512 yield self.t("manifest",
513 513 manifest=mnode,
514 514 rev=rev,
515 515 node=hex(node),
516 516 path=path,
517 517 up=_up(path),
518 518 fentries=filelist,
519 519 dentries=dirlist,
520 520 archives=self.archivelist(hex(node)))
521 521
522 522 def tags(self):
523 523 cl = self.repo.changelog
524 524 mf = cl.read(cl.tip())[0]
525 525
526 526 i = self.repo.tagslist()
527 527 i.reverse()
528 528
529 529 def entries(notip=False, **map):
530 530 parity = 0
531 531 for k,n in i:
532 532 if notip and k == "tip": continue
533 533 yield {"parity": parity,
534 534 "tag": k,
535 535 "tagmanifest": hex(cl.read(n)[0]),
536 536 "date": cl.read(n)[2],
537 537 "node": hex(n)}
538 538 parity = 1 - parity
539 539
540 540 yield self.t("tags",
541 541 manifest=hex(mf),
542 542 entries=lambda **x: entries(False, **x),
543 543 entriesnotip=lambda **x: entries(True, **x))
544 544
545 545 def summary(self):
546 546 cl = self.repo.changelog
547 547 mf = cl.read(cl.tip())[0]
548 548
549 549 i = self.repo.tagslist()
550 550 i.reverse()
551 551
552 552 def tagentries(**map):
553 553 parity = 0
554 554 count = 0
555 555 for k,n in i:
556 556 if k == "tip": # skip tip
 557 557 continue
558 558
559 559 count += 1
560 560 if count > 10: # limit to 10 tags
 561 561 break
562 562
563 563 c = cl.read(n)
564 564 m = c[0]
565 565 t = c[2]
566 566
567 567 yield self.t("tagentry",
568 568 parity = parity,
569 569 tag = k,
570 570 node = hex(n),
571 571 date = t,
572 572 tagmanifest = hex(m))
573 573 parity = 1 - parity
574 574
575 575 def changelist(**map):
576 576 parity = 0
577 577 cl = self.repo.changelog
578 578 l = [] # build a list in forward order for efficiency
579 579 for i in range(start, end):
580 580 n = cl.node(i)
581 581 changes = cl.read(n)
582 582 hn = hex(n)
583 583 t = changes[2]
584 584
585 585 l.insert(0, self.t(
586 586 'shortlogentry',
587 587 parity = parity,
588 588 author = changes[1],
589 589 manifest = hex(changes[0]),
590 590 desc = changes[4],
591 591 date = t,
592 592 rev = i,
593 593 node = hn))
594 594 parity = 1 - parity
595 595
596 596 yield l
597 597
598 598 cl = self.repo.changelog
599 599 mf = cl.read(cl.tip())[0]
600 600 count = cl.count()
601 601 start = max(0, count - self.maxchanges)
602 602 end = min(count, start + self.maxchanges)
603 603
604 604 yield self.t("summary",
605 605 desc = self.repo.ui.config("web", "description", "unknown"),
606 606 owner = (self.repo.ui.config("ui", "username") or # preferred
607 607 self.repo.ui.config("web", "contact") or # deprecated
608 608 self.repo.ui.config("web", "author", "unknown")), # also
609 609 lastchange = (0, 0), # FIXME
610 610 manifest = hex(mf),
611 611 tags = tagentries,
612 612 shortlog = changelist)
613 613
614 614 def filediff(self, file, changeset):
615 615 cl = self.repo.changelog
616 616 n = self.repo.lookup(changeset)
617 617 changeset = hex(n)
618 618 p1 = cl.parents(n)[0]
619 619 cs = cl.read(n)
620 620 mf = self.repo.manifest.read(cs[0])
621 621
622 622 def diff(**map):
623 623 yield self.diff(p1, n, [file])
624 624
625 625 yield self.t("filediff",
626 626 file=file,
627 627 filenode=hex(mf.get(file, nullid)),
628 628 node=changeset,
629 629 rev=self.repo.changelog.rev(n),
630 630 parent=self.siblings(cl.parents(n), cl.rev),
631 631 child=self.siblings(cl.children(n), cl.rev),
632 632 diff=diff)
633 633
634 634 archive_specs = {
635 635 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
636 636 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
637 637 'zip': ('application/zip', 'zip', '.zip', None),
638 638 }
639 639
640 640 def archive(self, req, cnode, type_):
641 641 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
642 642 name = "%s-%s" % (reponame, short(cnode))
643 643 mimetype, artype, extension, encoding = self.archive_specs[type_]
644 644 headers = [('Content-type', mimetype),
645 645 ('Content-disposition', 'attachment; filename=%s%s' %
646 646 (name, extension))]
647 647 if encoding:
648 648 headers.append(('Content-encoding', encoding))
649 649 req.header(headers)
650 650 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
651 651
652 652 # add tags to things
653 653 # tags -> list of changesets corresponding to tags
654 654 # find tag, changeset, file
655 655
656 656 def cleanpath(self, path):
657 657 p = util.normpath(path)
658 658 if p[:2] == "..":
659 659 raise Exception("suspicious path")
660 660 return p
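`cleanpath` above guards against directory traversal by normalizing the path and then rejecting anything that still points upward. A minimal standalone sketch of the same normpath-then-prefix-check idea (function and message names here are illustrative, not Mercurial's API):

```python
import posixpath

def cleanpath(path):
    # normalize "a/b/../c" style paths, then reject anything that
    # still escapes above the root after normalization
    p = posixpath.normpath(path)
    if p.startswith(".."):
        raise ValueError("suspicious path: %s" % path)
    return p

print(cleanpath("a/b/../c"))  # a/c
```

Note the check runs after normalization, so an embedded `../` that resolves inside the tree is allowed while one that climbs out is refused.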
661 661
662 662 def run(self):
663 663 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
664 664 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
665 665 import mercurial.hgweb.wsgicgi as wsgicgi
666 666 from request import wsgiapplication
667 667 def make_web_app():
668 668 return self
669 669 wsgicgi.launch(wsgiapplication(make_web_app))
670 670
671 671 def run_wsgi(self, req):
672 672 def header(**map):
673 673 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
674 674 msg = mimetools.Message(header_file, 0)
675 675 req.header(msg.items())
676 676 yield header_file.read()
677 677
678 678 def rawfileheader(**map):
679 679 req.header([('Content-type', map['mimetype']),
680 680 ('Content-disposition', 'filename=%s' % map['file']),
681 681 ('Content-length', str(len(map['raw'])))])
682 682 yield ''
683 683
684 684 def footer(**map):
685 685 yield self.t("footer",
686 686 motd=self.repo.ui.config("web", "motd", ""),
687 687 **map)
688 688
689 689 def expand_form(form):
690 690 shortcuts = {
691 691 'cl': [('cmd', ['changelog']), ('rev', None)],
692 692 'cs': [('cmd', ['changeset']), ('node', None)],
693 693 'f': [('cmd', ['file']), ('filenode', None)],
694 694 'fl': [('cmd', ['filelog']), ('filenode', None)],
695 695 'fd': [('cmd', ['filediff']), ('node', None)],
696 696 'fa': [('cmd', ['annotate']), ('filenode', None)],
697 697 'mf': [('cmd', ['manifest']), ('manifest', None)],
698 698 'ca': [('cmd', ['archive']), ('node', None)],
699 699 'tags': [('cmd', ['tags'])],
700 700 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
701 701 'static': [('cmd', ['static']), ('file', None)]
702 702 }
703 703
704 704 for k in shortcuts.iterkeys():
705 705 if form.has_key(k):
706 706 for name, value in shortcuts[k]:
707 707 if value is None:
708 708 value = form[k]
709 709 form[name] = value
710 710 del form[k]
711 711
712 712 self.refresh()
713 713
714 714 expand_form(req.form)
715 715
716 716 m = os.path.join(self.templatepath, "map")
717 717 style = self.repo.ui.config("web", "style", "")
718 718 if req.form.has_key('style'):
719 719 style = req.form['style'][0]
720 720 if style:
721 721 b = os.path.basename("map-" + style)
722 722 p = os.path.join(self.templatepath, b)
723 723 if os.path.isfile(p):
724 724 m = p
725 725
726 726 port = req.env["SERVER_PORT"]
727 727 port = port != "80" and (":" + port) or ""
728 728 uri = req.env["REQUEST_URI"]
729 729 if "?" in uri:
730 730 uri = uri.split("?")[0]
731 731 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
732 732 if not self.reponame:
733 733 self.reponame = (self.repo.ui.config("web", "name")
734 734 or uri.strip('/') or self.repo.root)
735 735
736 736 self.t = templater.templater(m, templater.common_filters,
737 737 defaults={"url": url,
738 738 "repo": self.reponame,
739 739 "header": header,
740 740 "footer": footer,
741 741 "rawfileheader": rawfileheader,
742 742 })
743 743
744 744 if not req.form.has_key('cmd'):
745 745 req.form['cmd'] = [self.t.cache['default'],]
746 746
747 747 cmd = req.form['cmd'][0]
748 748
749 749 method = getattr(self, 'do_' + cmd, None)
750 750 if method:
751 751 method(req)
752 752 else:
753 753 req.write(self.t("error"))
754 754
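The command dispatch in `run_wsgi` above resolves `cmd` to a `do_<cmd>` method with `getattr` and falls back to an error page. The pattern in isolation (class and return values are illustrative only):

```python
class Dispatcher:
    # each supported command is a do_<name> method; anything else
    # falls through to the error branch
    def do_tags(self):
        return 'tags page'

    def run(self, cmd):
        method = getattr(self, 'do_' + cmd, None)
        return method() if method else 'error'

d = Dispatcher()
print(d.run('tags'))  # tags page
print(d.run('nope'))  # error
```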
755 755 def do_changelog(self, req):
756 756 hi = self.repo.changelog.count() - 1
757 757 if req.form.has_key('rev'):
758 758 hi = req.form['rev'][0]
759 759 try:
760 760 hi = self.repo.changelog.rev(self.repo.lookup(hi))
761 761 except hg.RepoError:
762 762 req.write(self.search(hi)) # XXX redirect to 404 page?
763 763 return
764 764
765 765 req.write(self.changelog(hi))
766 766
767 767 def do_changeset(self, req):
768 768 req.write(self.changeset(req.form['node'][0]))
769 769
770 770 def do_manifest(self, req):
771 771 req.write(self.manifest(req.form['manifest'][0],
772 772 self.cleanpath(req.form['path'][0])))
773 773
774 774 def do_tags(self, req):
775 775 req.write(self.tags())
776 776
777 777 def do_summary(self, req):
778 778 req.write(self.summary())
779 779
780 780 def do_filediff(self, req):
781 781 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
782 782 req.form['node'][0]))
783 783
784 784 def do_file(self, req):
785 785 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
786 786 req.form['filenode'][0]))
787 787
788 788 def do_annotate(self, req):
789 789 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
790 790 req.form['filenode'][0]))
791 791
792 792 def do_filelog(self, req):
793 793 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
794 794 req.form['filenode'][0]))
795 795
796 796 def do_heads(self, req):
797 797 resp = " ".join(map(hex, self.repo.heads())) + "\n"
798 798 req.httphdr("application/mercurial-0.1", length=len(resp))
799 799 req.write(resp)
800 800
801 801 def do_branches(self, req):
802 802 nodes = []
803 803 if req.form.has_key('nodes'):
804 804 nodes = map(bin, req.form['nodes'][0].split(" "))
805 805 resp = cStringIO.StringIO()
806 806 for b in self.repo.branches(nodes):
807 807 resp.write(" ".join(map(hex, b)) + "\n")
808 808 resp = resp.getvalue()
809 809 req.httphdr("application/mercurial-0.1", length=len(resp))
810 810 req.write(resp)
811 811
812 812 def do_between(self, req):
 813 813 pairs = []
814 814 if req.form.has_key('pairs'):
815 815 pairs = [map(bin, p.split("-"))
816 816 for p in req.form['pairs'][0].split(" ")]
817 817 resp = cStringIO.StringIO()
818 818 for b in self.repo.between(pairs):
819 819 resp.write(" ".join(map(hex, b)) + "\n")
820 820 resp = resp.getvalue()
821 821 req.httphdr("application/mercurial-0.1", length=len(resp))
822 822 req.write(resp)
823 823
824 824 def do_changegroup(self, req):
825 825 req.httphdr("application/mercurial-0.1")
826 826 nodes = []
827 827 if not self.allowpull:
828 828 return
829 829
830 830 if req.form.has_key('roots'):
831 831 nodes = map(bin, req.form['roots'][0].split(" "))
832 832
833 833 z = zlib.compressobj()
834 834 f = self.repo.changegroup(nodes, 'serve')
835 835 while 1:
836 836 chunk = f.read(4096)
837 837 if not chunk:
838 838 break
839 839 req.write(z.compress(chunk))
840 840
841 841 req.write(z.flush())
842 842
843 843 def do_archive(self, req):
844 844 changeset = self.repo.lookup(req.form['node'][0])
845 845 type_ = req.form['type'][0]
846 846 allowed = self.repo.ui.configlist("web", "allow_archive")
847 847 if (type_ in self.archives and (type_ in allowed or
848 848 self.repo.ui.configbool("web", "allow" + type_, False))):
849 849 self.archive(req, changeset, type_)
850 850 return
851 851
852 852 req.write(self.t("error"))
853 853
854 854 def do_static(self, req):
855 855 fname = req.form['file'][0]
856 856 static = self.repo.ui.config("web", "static",
857 857 os.path.join(self.templatepath,
858 858 "static"))
859 859 req.write(staticfile(static, fname, req)
860 860 or self.t("error", error="%r not found" % fname))
861 861
862 862 def do_capabilities(self, req):
863 resp = 'unbundle stream=%d' % (self.repo.revlogversion,)
863 caps = ['unbundle']
864 if self.repo.ui.configbool('server', 'stream'):
865 caps.append('stream=%d' % self.repo.revlogversion)
866 resp = ' '.join(caps)
864 867 req.httphdr("application/mercurial-0.1", length=len(resp))
865 868 req.write(resp)
866 869
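The `do_capabilities` hunk above is the core of this change: `unbundle` stays advertised unconditionally, while the `stream` capability is now offered only when the server opts in via `configbool('server', 'stream')`. A minimal sketch of the capability-string construction (standalone names assumed for illustration):

```python
def capability_string(stream_enabled, revlogversion):
    # 'unbundle' (push over HTTP) is always advertised; streaming
    # clone is offered only when the server config enables it
    caps = ['unbundle']
    if stream_enabled:
        caps.append('stream=%d' % revlogversion)
    return ' '.join(caps)

print(capability_string(False, 1))  # unbundle
print(capability_string(True, 1))   # unbundle stream=1
```

Per the `configbool` call above, a server admin would re-enable streaming clones with a `stream = True` entry in the `[server]` section of hgrc.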
867 870 def check_perm(self, req, op, default):
868 871 '''check permission for operation based on user auth.
869 872 return true if op allowed, else false.
870 873 default is policy to use if no config given.'''
871 874
872 875 user = req.env.get('REMOTE_USER')
873 876
874 877 deny = self.repo.ui.configlist('web', 'deny_' + op)
875 878 if deny and (not user or deny == ['*'] or user in deny):
876 879 return False
877 880
878 881 allow = self.repo.ui.configlist('web', 'allow_' + op)
879 882 return (allow and (allow == ['*'] or user in allow)) or default
880 883
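The deny-before-allow policy in `check_perm` above can be sketched standalone (hypothetical free function with the same semantics: deny wins, `*` is a wildcard, and anonymous users match any non-empty deny list):

```python
def check_perm(user, deny, allow, default):
    # a non-empty deny list blocks anonymous users, everyone when it
    # is ['*'], or any explicitly listed user
    if deny and (not user or deny == ['*'] or user in deny):
        return False
    # otherwise a non-empty allow list must match; fall back to default
    return bool(allow and (allow == ['*'] or user in allow)) or default

print(check_perm('alice', [], ['alice'], False))  # True
print(check_perm(None, ['*'], ['alice'], True))   # False
```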
881 884 def do_unbundle(self, req):
882 885 def bail(response, headers={}):
883 886 length = int(req.env['CONTENT_LENGTH'])
884 887 for s in util.filechunkiter(req, limit=length):
885 888 # drain incoming bundle, else client will not see
886 889 # response when run outside cgi script
887 890 pass
888 891 req.httphdr("application/mercurial-0.1", headers=headers)
889 892 req.write('0\n')
890 893 req.write(response)
891 894
892 895 # require ssl by default, auth info cannot be sniffed and
893 896 # replayed
894 897 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
895 898 if ssl_req and not req.env.get('HTTPS'):
896 899 bail(_('ssl required\n'))
897 900 return
898 901
899 902 # do not allow push unless explicitly allowed
900 903 if not self.check_perm(req, 'push', False):
901 904 bail(_('push not authorized\n'),
902 905 headers={'status': '401 Unauthorized'})
903 906 return
904 907
905 908 req.httphdr("application/mercurial-0.1")
906 909
907 910 their_heads = req.form['heads'][0].split(' ')
908 911
909 912 def check_heads():
910 913 heads = map(hex, self.repo.heads())
911 914 return their_heads == [hex('force')] or their_heads == heads
912 915
913 916 # fail early if possible
914 917 if not check_heads():
915 918 bail(_('unsynced changes\n'))
916 919 return
917 920
918 921 # do not lock repo until all changegroup data is
919 922 # streamed. save to temporary file.
920 923
921 924 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
922 925 fp = os.fdopen(fd, 'wb+')
923 926 try:
924 927 length = int(req.env['CONTENT_LENGTH'])
925 928 for s in util.filechunkiter(req, limit=length):
926 929 fp.write(s)
927 930
928 931 lock = self.repo.lock()
929 932 try:
930 933 if not check_heads():
931 934 req.write('0\n')
932 935 req.write(_('unsynced changes\n'))
933 936 return
934 937
935 938 fp.seek(0)
936 939
937 940 # send addchangegroup output to client
938 941
939 942 old_stdout = sys.stdout
940 943 sys.stdout = cStringIO.StringIO()
941 944
942 945 try:
943 946 ret = self.repo.addchangegroup(fp, 'serve')
944 947 finally:
945 948 val = sys.stdout.getvalue()
946 949 sys.stdout = old_stdout
947 950 req.write('%d\n' % ret)
948 951 req.write(val)
949 952 finally:
950 953 lock.release()
951 954 finally:
952 955 fp.close()
953 956 os.unlink(tempname)
954 957
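`do_unbundle` above validates the client's heads twice: a cheap pre-check before buffering the uploaded changegroup, then a re-check after taking the repository lock, since the repository may have advanced while the data was streaming in. The pattern in isolation (all names illustrative):

```python
import threading

lock = threading.Lock()
heads = ['abc']  # stand-in for the repository's current heads

def unbundle(their_heads, payload_iter):
    # cheap pre-check: reject before consuming the whole upload
    if their_heads != heads:
        return 'unsynced changes (pre-check)'
    data = b''.join(payload_iter)      # stream to a buffer first
    with lock:                         # only now take the lock
        if their_heads != heads:       # re-check under the lock
            return 'unsynced changes (locked check)'
        return 'applied %d bytes' % len(data)

print(unbundle(['abc'], [b'chunk1', b'chunk2']))  # applied 12 bytes
```

Buffering to a temporary file before locking, as the real method does, keeps the lock hold time independent of the client's upload speed.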
955 958 def do_stream_out(self, req):
956 959 req.httphdr("application/mercurial-0.1")
957 960 streamclone.stream_out(self.repo, req)
@@ -1,2254 +1,2258 b''
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import *
9 9 from i18n import gettext as _
10 10 from demandload import *
11 11 import repo
12 12 demandload(globals(), "appendfile changegroup")
13 13 demandload(globals(), "changelog dirstate filelog manifest context")
14 14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 15 demandload(globals(), "os revlog time util")
16 16
17 17 class localrepository(repo.repository):
18 18 capabilities = ()
19 19
20 20 def __del__(self):
21 21 self.transhandle = None
22 22 def __init__(self, parentui, path=None, create=0):
23 23 repo.repository.__init__(self)
24 24 if not path:
25 25 p = os.getcwd()
26 26 while not os.path.isdir(os.path.join(p, ".hg")):
27 27 oldp = p
28 28 p = os.path.dirname(p)
29 29 if p == oldp:
30 30 raise repo.RepoError(_("no repo found"))
31 31 path = p
32 32 self.path = os.path.join(path, ".hg")
33 33
34 34 if not create and not os.path.isdir(self.path):
35 35 raise repo.RepoError(_("repository %s not found") % path)
36 36
37 37 self.root = os.path.abspath(path)
38 38 self.origroot = path
39 39 self.ui = ui.ui(parentui=parentui)
40 40 self.opener = util.opener(self.path)
41 41 self.wopener = util.opener(self.root)
42 42
43 43 try:
44 44 self.ui.readconfig(self.join("hgrc"), self.root)
45 45 except IOError:
46 46 pass
47 47
48 48 v = self.ui.revlogopts
49 49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
50 50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
51 51 fl = v.get('flags', None)
52 52 flags = 0
53 53 if fl != None:
54 54 for x in fl.split():
55 55 flags |= revlog.flagstr(x)
56 56 elif self.revlogv1:
57 57 flags = revlog.REVLOG_DEFAULT_FLAGS
58 58
59 59 v = self.revlogversion | flags
60 60 self.manifest = manifest.manifest(self.opener, v)
61 61 self.changelog = changelog.changelog(self.opener, v)
62 62
63 63 # the changelog might not have the inline index flag
64 64 # on. If the format of the changelog is the same as found in
65 65 # .hgrc, apply any flags found in the .hgrc as well.
66 66 # Otherwise, just version from the changelog
67 67 v = self.changelog.version
68 68 if v == self.revlogversion:
69 69 v |= flags
70 70 self.revlogversion = v
71 71
72 72 self.tagscache = None
73 73 self.nodetagscache = None
74 74 self.encodepats = None
75 75 self.decodepats = None
76 76 self.transhandle = None
77 77
78 78 if create:
79 79 if not os.path.exists(path):
80 80 os.mkdir(path)
81 81 os.mkdir(self.path)
82 82 os.mkdir(self.join("data"))
83 83
84 84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
85 85
86 86 def hook(self, name, throw=False, **args):
87 87 def callhook(hname, funcname):
88 88 '''call python hook. hook is callable object, looked up as
89 89 name in python module. if callable returns "true", hook
90 90 fails, else passes. if hook raises exception, treated as
91 91 hook failure. exception propagates if throw is "true".
92 92
93 93 reason for "true" meaning "hook failed" is so that
94 94 unmodified commands (e.g. mercurial.commands.update) can
95 95 be run as hooks without wrappers to convert return values.'''
96 96
97 97 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
98 98 d = funcname.rfind('.')
99 99 if d == -1:
100 100 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
101 101 % (hname, funcname))
102 102 modname = funcname[:d]
103 103 try:
104 104 obj = __import__(modname)
105 105 except ImportError:
106 106 try:
107 107 # extensions are loaded with hgext_ prefix
108 108 obj = __import__("hgext_%s" % modname)
109 109 except ImportError:
110 110 raise util.Abort(_('%s hook is invalid '
111 111 '(import of "%s" failed)') %
112 112 (hname, modname))
113 113 try:
114 114 for p in funcname.split('.')[1:]:
115 115 obj = getattr(obj, p)
116 116 except AttributeError, err:
117 117 raise util.Abort(_('%s hook is invalid '
118 118 '("%s" is not defined)') %
119 119 (hname, funcname))
120 120 if not callable(obj):
121 121 raise util.Abort(_('%s hook is invalid '
122 122 '("%s" is not callable)') %
123 123 (hname, funcname))
124 124 try:
125 125 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
126 126 except (KeyboardInterrupt, util.SignalInterrupt):
127 127 raise
128 128 except Exception, exc:
129 129 if isinstance(exc, util.Abort):
130 130 self.ui.warn(_('error: %s hook failed: %s\n') %
131 131 (hname, exc.args[0] % exc.args[1:]))
132 132 else:
133 133 self.ui.warn(_('error: %s hook raised an exception: '
134 134 '%s\n') % (hname, exc))
135 135 if throw:
136 136 raise
137 137 self.ui.print_exc()
138 138 return True
139 139 if r:
140 140 if throw:
141 141 raise util.Abort(_('%s hook failed') % hname)
142 142 self.ui.warn(_('warning: %s hook failed\n') % hname)
143 143 return r
144 144
145 145 def runhook(name, cmd):
146 146 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
147 147 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
148 148 r = util.system(cmd, environ=env, cwd=self.root)
149 149 if r:
150 150 desc, r = util.explain_exit(r)
151 151 if throw:
152 152 raise util.Abort(_('%s hook %s') % (name, desc))
153 153 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
154 154 return r
155 155
156 156 r = False
157 157 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
158 158 if hname.split(".", 1)[0] == name and cmd]
159 159 hooks.sort()
160 160 for hname, cmd in hooks:
161 161 if cmd.startswith('python:'):
162 162 r = callhook(hname, cmd[7:].strip()) or r
163 163 else:
164 164 r = runhook(hname, cmd) or r
165 165 return r
166 166
167 167 tag_disallowed = ':\r\n'
168 168
169 169 def tag(self, name, node, local=False, message=None, user=None, date=None):
170 170 '''tag a revision with a symbolic name.
171 171
172 172 if local is True, the tag is stored in a per-repository file.
173 173 otherwise, it is stored in the .hgtags file, and a new
174 174 changeset is committed with the change.
175 175
176 176 keyword arguments:
177 177
178 178 local: whether to store tag in non-version-controlled file
179 179 (default False)
180 180
181 181 message: commit message to use if committing
182 182
183 183 user: name of user to use if committing
184 184
185 185 date: date tuple to use if committing'''
186 186
187 187 for c in self.tag_disallowed:
188 188 if c in name:
189 189 raise util.Abort(_('%r cannot be used in a tag name') % c)
190 190
191 191 self.hook('pretag', throw=True, node=node, tag=name, local=local)
192 192
193 193 if local:
194 194 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
195 195 self.hook('tag', node=node, tag=name, local=local)
196 196 return
197 197
198 198 for x in self.changes():
199 199 if '.hgtags' in x:
200 200 raise util.Abort(_('working copy of .hgtags is changed '
201 201 '(please commit .hgtags manually)'))
202 202
203 203 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
204 204 if self.dirstate.state('.hgtags') == '?':
205 205 self.add(['.hgtags'])
206 206
207 207 if not message:
208 208 message = _('Added tag %s for changeset %s') % (name, node)
209 209
210 210 self.commit(['.hgtags'], message, user, date)
211 211 self.hook('tag', node=node, tag=name, local=local)
212 212
213 213 def tags(self):
214 214 '''return a mapping of tag to node'''
215 215 if not self.tagscache:
216 216 self.tagscache = {}
217 217
218 218 def parsetag(line, context):
219 219 if not line:
220 220 return
221 221                 s = line.split(" ", 1)
222 222 if len(s) != 2:
223 223 self.ui.warn(_("%s: cannot parse entry\n") % context)
224 224 return
225 225 node, key = s
226 226 key = key.strip()
227 227 try:
228 228 bin_n = bin(node)
229 229 except TypeError:
230 230 self.ui.warn(_("%s: node '%s' is not well formed\n") %
231 231 (context, node))
232 232 return
233 233 if bin_n not in self.changelog.nodemap:
234 234 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
235 235 (context, key))
236 236 return
237 237 self.tagscache[key] = bin_n
238 238
239 239 # read the tags file from each head, ending with the tip,
240 240 # and add each tag found to the map, with "newer" ones
241 241 # taking precedence
242 242 heads = self.heads()
243 243 heads.reverse()
244 244 fl = self.file(".hgtags")
245 245 for node in heads:
246 246 change = self.changelog.read(node)
247 247 rev = self.changelog.rev(node)
248 248 fn, ff = self.manifest.find(change[0], '.hgtags')
249 249 if fn is None: continue
250 250 count = 0
251 251 for l in fl.read(fn).splitlines():
252 252 count += 1
253 253 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
254 254 (rev, short(node), count))
255 255 try:
256 256 f = self.opener("localtags")
257 257 count = 0
258 258 for l in f:
259 259 count += 1
260 260 parsetag(l, _("localtags, line %d") % count)
261 261 except IOError:
262 262 pass
263 263
264 264 self.tagscache['tip'] = self.changelog.tip()
265 265
266 266 return self.tagscache
267 267
268 268 def tagslist(self):
269 269 '''return a list of tags ordered by revision'''
270 270 l = []
271 271 for t, n in self.tags().items():
272 272 try:
273 273 r = self.changelog.rev(n)
274 274 except:
275 275 r = -2 # sort to the beginning of the list if unknown
276 276 l.append((r, t, n))
277 277 l.sort()
278 278 return [(t, n) for r, t, n in l]
279 279
280 280 def nodetags(self, node):
281 281 '''return the tags associated with a node'''
282 282 if not self.nodetagscache:
283 283 self.nodetagscache = {}
284 284 for t, n in self.tags().items():
285 285 self.nodetagscache.setdefault(n, []).append(t)
286 286 return self.nodetagscache.get(node, [])
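The tag-to-node inversion that nodetags() caches can be sketched standalone. This is a minimal illustration of the same setdefault/append idiom; the tag and node names are hypothetical stand-ins for real changelog nodes:

```python
# Invert a {tag: node} mapping into {node: [tags]}, mirroring how
# nodetags() builds nodetagscache from tags(). Names are hypothetical.
tags = {'tip': 'n2', 'v1.0': 'n1', 'stable': 'n1'}
bytag = {}
for t, n in tags.items():
    # group every tag that points at the same node
    bytag.setdefault(n, []).append(t)
```

Every tag pointing at the same node ends up in one list, so a single dict lookup answers "which tags name this changeset?".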
287 287
288 288 def lookup(self, key):
289 289 try:
290 290 return self.tags()[key]
291 291 except KeyError:
292 292 try:
293 293 return self.changelog.lookup(key)
294 294 except:
295 295 raise repo.RepoError(_("unknown revision '%s'") % key)
296 296
297 297 def dev(self):
298 298 return os.lstat(self.path).st_dev
299 299
300 300 def local(self):
301 301 return True
302 302
303 303 def join(self, f):
304 304 return os.path.join(self.path, f)
305 305
306 306 def wjoin(self, f):
307 307 return os.path.join(self.root, f)
308 308
309 309 def file(self, f):
310 310 if f[0] == '/':
311 311 f = f[1:]
312 312 return filelog.filelog(self.opener, f, self.revlogversion)
313 313
314 314 def changectx(self, changeid):
315 315 return context.changectx(self, changeid)
316 316
317 317 def filectx(self, path, changeid=None, fileid=None):
318 318 """changeid can be a changeset revision, node, or tag.
319 319 fileid can be a file revision or node."""
320 320 return context.filectx(self, path, changeid, fileid)
321 321
322 322 def getcwd(self):
323 323 return self.dirstate.getcwd()
324 324
325 325 def wfile(self, f, mode='r'):
326 326 return self.wopener(f, mode)
327 327
328 328 def wread(self, filename):
329 329         if self.encodepats is None:
330 330 l = []
331 331 for pat, cmd in self.ui.configitems("encode"):
332 332 mf = util.matcher(self.root, "", [pat], [], [])[1]
333 333 l.append((mf, cmd))
334 334 self.encodepats = l
335 335
336 336 data = self.wopener(filename, 'r').read()
337 337
338 338 for mf, cmd in self.encodepats:
339 339 if mf(filename):
340 340 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
341 341 data = util.filter(data, cmd)
342 342 break
343 343
344 344 return data
345 345
346 346 def wwrite(self, filename, data, fd=None):
347 347         if self.decodepats is None:
348 348 l = []
349 349 for pat, cmd in self.ui.configitems("decode"):
350 350 mf = util.matcher(self.root, "", [pat], [], [])[1]
351 351 l.append((mf, cmd))
352 352 self.decodepats = l
353 353
354 354 for mf, cmd in self.decodepats:
355 355 if mf(filename):
356 356 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
357 357 data = util.filter(data, cmd)
358 358 break
359 359
360 360 if fd:
361 361 return fd.write(data)
362 362 return self.wopener(filename, 'w').write(data)
363 363
364 364 def transaction(self):
365 365 tr = self.transhandle
366 366         if tr is not None and tr.running():
367 367 return tr.nest()
368 368
369 369 # save dirstate for rollback
370 370 try:
371 371 ds = self.opener("dirstate").read()
372 372 except IOError:
373 373 ds = ""
374 374 self.opener("journal.dirstate", "w").write(ds)
375 375
376 376 tr = transaction.transaction(self.ui.warn, self.opener,
377 377 self.join("journal"),
378 378 aftertrans(self.path))
379 379 self.transhandle = tr
380 380 return tr
381 381
382 382 def recover(self):
383 383 l = self.lock()
384 384 if os.path.exists(self.join("journal")):
385 385 self.ui.status(_("rolling back interrupted transaction\n"))
386 386 transaction.rollback(self.opener, self.join("journal"))
387 387 self.reload()
388 388 return True
389 389 else:
390 390 self.ui.warn(_("no interrupted transaction available\n"))
391 391 return False
392 392
393 393 def rollback(self, wlock=None):
394 394 if not wlock:
395 395 wlock = self.wlock()
396 396 l = self.lock()
397 397 if os.path.exists(self.join("undo")):
398 398 self.ui.status(_("rolling back last transaction\n"))
399 399 transaction.rollback(self.opener, self.join("undo"))
400 400 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
401 401 self.reload()
402 402 self.wreload()
403 403 else:
404 404 self.ui.warn(_("no rollback information available\n"))
405 405
406 406 def wreload(self):
407 407 self.dirstate.read()
408 408
409 409 def reload(self):
410 410 self.changelog.load()
411 411 self.manifest.load()
412 412 self.tagscache = None
413 413 self.nodetagscache = None
414 414
415 415 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
416 416 desc=None):
417 417 try:
418 418 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
419 419 except lock.LockHeld, inst:
420 420 if not wait:
421 421 raise
422 422 self.ui.warn(_("waiting for lock on %s held by %s\n") %
423 423 (desc, inst.args[0]))
424 424 # default to 600 seconds timeout
425 425 l = lock.lock(self.join(lockname),
426 426 int(self.ui.config("ui", "timeout") or 600),
427 427 releasefn, desc=desc)
428 428 if acquirefn:
429 429 acquirefn()
430 430 return l
431 431
432 432 def lock(self, wait=1):
433 433 return self.do_lock("lock", wait, acquirefn=self.reload,
434 434 desc=_('repository %s') % self.origroot)
435 435
436 436 def wlock(self, wait=1):
437 437 return self.do_lock("wlock", wait, self.dirstate.write,
438 438 self.wreload,
439 439 desc=_('working directory of %s') % self.origroot)
440 440
441 441 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
442 442 "determine whether a new filenode is needed"
443 443 fp1 = manifest1.get(filename, nullid)
444 444 fp2 = manifest2.get(filename, nullid)
445 445
446 446 if fp2 != nullid:
447 447 # is one parent an ancestor of the other?
448 448 fpa = filelog.ancestor(fp1, fp2)
449 449 if fpa == fp1:
450 450 fp1, fp2 = fp2, nullid
451 451 elif fpa == fp2:
452 452 fp2 = nullid
453 453
454 454 # is the file unmodified from the parent? report existing entry
455 455 if fp2 == nullid and text == filelog.read(fp1):
456 456 return (fp1, None, None)
457 457
458 458 return (None, fp1, fp2)
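The parent simplification at the top of checkfilemerge() can be sketched on its own. This is a hedged, self-contained sketch: `ancestor` stands in for filelog.ancestor(), and `NULL` for nullid; both are hypothetical placeholders, not the real Mercurial API:

```python
# When one file parent is an ancestor of the other, only the descendant
# matters, so the ancestor slot is nulled out - the same reduction
# checkfilemerge() performs before deciding whether a new filenode is
# needed. `ancestor` is a hypothetical common-ancestor oracle.
NULL = None

def simplify_parents(fp1, fp2, ancestor):
    if fp2 is not NULL:
        fpa = ancestor(fp1, fp2)   # common ancestor of the two parents
        if fpa == fp1:
            fp1, fp2 = fp2, NULL   # fp1 is an ancestor of fp2: drop it
        elif fpa == fp2:
            fp2 = NULL             # fp2 is an ancestor of fp1: drop it
    return fp1, fp2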
459 459
460 460 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
461 461 orig_parent = self.dirstate.parents()[0] or nullid
462 462 p1 = p1 or self.dirstate.parents()[0] or nullid
463 463 p2 = p2 or self.dirstate.parents()[1] or nullid
464 464 c1 = self.changelog.read(p1)
465 465 c2 = self.changelog.read(p2)
466 466 m1 = self.manifest.read(c1[0])
467 467 mf1 = self.manifest.readflags(c1[0])
468 468 m2 = self.manifest.read(c2[0])
469 469 changed = []
470 470
471 471 if orig_parent == p1:
472 472 update_dirstate = 1
473 473 else:
474 474 update_dirstate = 0
475 475
476 476 if not wlock:
477 477 wlock = self.wlock()
478 478 l = self.lock()
479 479 tr = self.transaction()
480 480 mm = m1.copy()
481 481 mfm = mf1.copy()
482 482 linkrev = self.changelog.count()
483 483 for f in files:
484 484 try:
485 485 t = self.wread(f)
486 486 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
487 487 r = self.file(f)
488 488 mfm[f] = tm
489 489
490 490 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
491 491 if entry:
492 492 mm[f] = entry
493 493 continue
494 494
495 495 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
496 496 changed.append(f)
497 497 if update_dirstate:
498 498 self.dirstate.update([f], "n")
499 499 except IOError:
500 500 try:
501 501 del mm[f]
502 502 del mfm[f]
503 503 if update_dirstate:
504 504 self.dirstate.forget([f])
505 505 except:
506 506 # deleted from p2?
507 507 pass
508 508
509 509 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
510 510 user = user or self.ui.username()
511 511 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
512 512 tr.close()
513 513 if update_dirstate:
514 514 self.dirstate.setparents(n, nullid)
515 515
516 516 def commit(self, files=None, text="", user=None, date=None,
517 517 match=util.always, force=False, lock=None, wlock=None,
518 518 force_editor=False):
519 519 commit = []
520 520 remove = []
521 521 changed = []
522 522
523 523 if files:
524 524 for f in files:
525 525 s = self.dirstate.state(f)
526 526 if s in 'nmai':
527 527 commit.append(f)
528 528 elif s == 'r':
529 529 remove.append(f)
530 530 else:
531 531 self.ui.warn(_("%s not tracked!\n") % f)
532 532 else:
533 533 modified, added, removed, deleted, unknown = self.changes(match=match)
534 534 commit = modified + added
535 535 remove = removed
536 536
537 537 p1, p2 = self.dirstate.parents()
538 538 c1 = self.changelog.read(p1)
539 539 c2 = self.changelog.read(p2)
540 540 m1 = self.manifest.read(c1[0])
541 541 mf1 = self.manifest.readflags(c1[0])
542 542 m2 = self.manifest.read(c2[0])
543 543
544 544 if not commit and not remove and not force and p2 == nullid:
545 545 self.ui.status(_("nothing changed\n"))
546 546 return None
547 547
548 548 xp1 = hex(p1)
549 549 if p2 == nullid: xp2 = ''
550 550 else: xp2 = hex(p2)
551 551
552 552 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
553 553
554 554 if not wlock:
555 555 wlock = self.wlock()
556 556 if not lock:
557 557 lock = self.lock()
558 558 tr = self.transaction()
559 559
560 560 # check in files
561 561 new = {}
562 562 linkrev = self.changelog.count()
563 563 commit.sort()
564 564 for f in commit:
565 565 self.ui.note(f + "\n")
566 566 try:
567 567 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
568 568 t = self.wread(f)
569 569 except IOError:
570 570 self.ui.warn(_("trouble committing %s!\n") % f)
571 571 raise
572 572
573 573 r = self.file(f)
574 574
575 575 meta = {}
576 576 cp = self.dirstate.copied(f)
577 577 if cp:
578 578 meta["copy"] = cp
579 579 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
580 580 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
581 581 fp1, fp2 = nullid, nullid
582 582 else:
583 583 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
584 584 if entry:
585 585 new[f] = entry
586 586 continue
587 587
588 588 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
589 589 # remember what we've added so that we can later calculate
590 590 # the files to pull from a set of changesets
591 591 changed.append(f)
592 592
593 593 # update manifest
594 594 m1 = m1.copy()
595 595 m1.update(new)
596 596 for f in remove:
597 597 if f in m1:
598 598 del m1[f]
599 599 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
600 600 (new, remove))
601 601
602 602 # add changeset
603 603 new = new.keys()
604 604 new.sort()
605 605
606 606 user = user or self.ui.username()
607 607 if not text or force_editor:
608 608 edittext = []
609 609 if text:
610 610 edittext.append(text)
611 611 edittext.append("")
612 612 if p2 != nullid:
613 613 edittext.append("HG: branch merge")
614 614 edittext.extend(["HG: changed %s" % f for f in changed])
615 615 edittext.extend(["HG: removed %s" % f for f in remove])
616 616 if not changed and not remove:
617 617 edittext.append("HG: no files changed")
618 618 edittext.append("")
619 619 # run editor in the repository root
620 620 olddir = os.getcwd()
621 621 os.chdir(self.root)
622 622 text = self.ui.edit("\n".join(edittext), user)
623 623 os.chdir(olddir)
624 624
625 625 lines = [line.rstrip() for line in text.rstrip().splitlines()]
626 626 while lines and not lines[0]:
627 627 del lines[0]
628 628 if not lines:
629 629 return None
630 630 text = '\n'.join(lines)
631 631 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
632 632 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
633 633 parent2=xp2)
634 634 tr.close()
635 635
636 636 self.dirstate.setparents(n)
637 637 self.dirstate.update(new, "n")
638 638 self.dirstate.forget(remove)
639 639
640 640 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
641 641 return n
642 642
643 643 def walk(self, node=None, files=[], match=util.always, badmatch=None):
644 644 if node:
645 645 fdict = dict.fromkeys(files)
646 646 for fn in self.manifest.read(self.changelog.read(node)[0]):
647 647 fdict.pop(fn, None)
648 648 if match(fn):
649 649 yield 'm', fn
650 650 for fn in fdict:
651 651 if badmatch and badmatch(fn):
652 652 if match(fn):
653 653 yield 'b', fn
654 654 else:
655 655 self.ui.warn(_('%s: No such file in rev %s\n') % (
656 656 util.pathto(self.getcwd(), fn), short(node)))
657 657 else:
658 658 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
659 659 yield src, fn
660 660
661 661 def changes(self, node1=None, node2=None, files=[], match=util.always,
662 662 wlock=None, show_ignored=None):
663 663 """return changes between two nodes or node and working directory
664 664
665 665 If node1 is None, use the first dirstate parent instead.
666 666 If node2 is None, compare node1 with working directory.
667 667 """
668 668
669 669 def fcmp(fn, mf):
670 670 t1 = self.wread(fn)
671 671 t2 = self.file(fn).read(mf.get(fn, nullid))
672 672 return cmp(t1, t2)
673 673
674 674 def mfmatches(node):
675 675 change = self.changelog.read(node)
676 676 mf = dict(self.manifest.read(change[0]))
677 677 for fn in mf.keys():
678 678 if not match(fn):
679 679 del mf[fn]
680 680 return mf
681 681
682 682         modified, added, removed, deleted, unknown, ignored = [], [], [], [], [], []
683 683 compareworking = False
684 684 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
685 685 compareworking = True
686 686
687 687 if not compareworking:
688 688 # read the manifest from node1 before the manifest from node2,
689 689 # so that we'll hit the manifest cache if we're going through
690 690 # all the revisions in parent->child order.
691 691 mf1 = mfmatches(node1)
692 692
693 693 # are we comparing the working directory?
694 694 if not node2:
695 695 if not wlock:
696 696 try:
697 697 wlock = self.wlock(wait=0)
698 698 except lock.LockException:
699 699 wlock = None
700 700 lookup, modified, added, removed, deleted, unknown, ignored = (
701 701 self.dirstate.changes(files, match, show_ignored))
702 702
703 703 # are we comparing working dir against its parent?
704 704 if compareworking:
705 705 if lookup:
706 706 # do a full compare of any files that might have changed
707 707 mf2 = mfmatches(self.dirstate.parents()[0])
708 708 for f in lookup:
709 709 if fcmp(f, mf2):
710 710 modified.append(f)
711 711 elif wlock is not None:
712 712 self.dirstate.update([f], "n")
713 713 else:
714 714 # we are comparing working dir against non-parent
715 715 # generate a pseudo-manifest for the working dir
716 716 mf2 = mfmatches(self.dirstate.parents()[0])
717 717 for f in lookup + modified + added:
718 718 mf2[f] = ""
719 719 for f in removed:
720 720 if f in mf2:
721 721 del mf2[f]
722 722 else:
723 723 # we are comparing two revisions
724 724 deleted, unknown, ignored = [], [], []
725 725 mf2 = mfmatches(node2)
726 726
727 727 if not compareworking:
728 728 # flush lists from dirstate before comparing manifests
729 729 modified, added = [], []
730 730
731 731 # make sure to sort the files so we talk to the disk in a
732 732 # reasonable order
733 733 mf2keys = mf2.keys()
734 734 mf2keys.sort()
735 735 for fn in mf2keys:
736 736 if mf1.has_key(fn):
737 737 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
738 738 modified.append(fn)
739 739 del mf1[fn]
740 740 else:
741 741 added.append(fn)
742 742
743 743 removed = mf1.keys()
744 744
745 745 # sort and return results:
746 746 for l in modified, added, removed, deleted, unknown, ignored:
747 747 l.sort()
748 748 if show_ignored is None:
749 749 return (modified, added, removed, deleted, unknown)
750 750 else:
751 751 return (modified, added, removed, deleted, unknown, ignored)
752 752
753 753 def add(self, list, wlock=None):
754 754 if not wlock:
755 755 wlock = self.wlock()
756 756 for f in list:
757 757 p = self.wjoin(f)
758 758 if not os.path.exists(p):
759 759 self.ui.warn(_("%s does not exist!\n") % f)
760 760 elif not os.path.isfile(p):
761 761 self.ui.warn(_("%s not added: only files supported currently\n")
762 762 % f)
763 763 elif self.dirstate.state(f) in 'an':
764 764 self.ui.warn(_("%s already tracked!\n") % f)
765 765 else:
766 766 self.dirstate.update([f], "a")
767 767
768 768 def forget(self, list, wlock=None):
769 769 if not wlock:
770 770 wlock = self.wlock()
771 771 for f in list:
772 772 if self.dirstate.state(f) not in 'ai':
773 773 self.ui.warn(_("%s not added!\n") % f)
774 774 else:
775 775 self.dirstate.forget([f])
776 776
777 777 def remove(self, list, unlink=False, wlock=None):
778 778 if unlink:
779 779 for f in list:
780 780 try:
781 781 util.unlink(self.wjoin(f))
782 782 except OSError, inst:
783 783 if inst.errno != errno.ENOENT:
784 784 raise
785 785 if not wlock:
786 786 wlock = self.wlock()
787 787 for f in list:
788 788 p = self.wjoin(f)
789 789 if os.path.exists(p):
790 790 self.ui.warn(_("%s still exists!\n") % f)
791 791 elif self.dirstate.state(f) == 'a':
792 792 self.dirstate.forget([f])
793 793 elif f not in self.dirstate:
794 794 self.ui.warn(_("%s not tracked!\n") % f)
795 795 else:
796 796 self.dirstate.update([f], "r")
797 797
798 798 def undelete(self, list, wlock=None):
799 799 p = self.dirstate.parents()[0]
800 800 mn = self.changelog.read(p)[0]
801 801 mf = self.manifest.readflags(mn)
802 802 m = self.manifest.read(mn)
803 803 if not wlock:
804 804 wlock = self.wlock()
805 805 for f in list:
806 806 if self.dirstate.state(f) not in "r":
807 807                 self.ui.warn(_("%s not removed!\n") % f)
808 808 else:
809 809 t = self.file(f).read(m[f])
810 810 self.wwrite(f, t)
811 811 util.set_exec(self.wjoin(f), mf[f])
812 812 self.dirstate.update([f], "n")
813 813
814 814 def copy(self, source, dest, wlock=None):
815 815 p = self.wjoin(dest)
816 816 if not os.path.exists(p):
817 817 self.ui.warn(_("%s does not exist!\n") % dest)
818 818 elif not os.path.isfile(p):
819 819 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
820 820 else:
821 821 if not wlock:
822 822 wlock = self.wlock()
823 823 if self.dirstate.state(dest) == '?':
824 824 self.dirstate.update([dest], "a")
825 825 self.dirstate.copy(source, dest)
826 826
827 827 def heads(self, start=None):
828 828 heads = self.changelog.heads(start)
829 829 # sort the output in rev descending order
830 830 heads = [(-self.changelog.rev(h), h) for h in heads]
831 831 heads.sort()
832 832 return [n for (r, n) in heads]
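The decorate/sort/undecorate idiom heads() relies on is worth seeing in isolation: negating the revision number makes a plain ascending sort yield newest-first order. The revision numbers below are hypothetical stand-ins for changelog.rev():

```python
# Sort hypothetical head nodes by descending revision using the same
# (-rev, node) decoration trick as heads().
revs = {'a': 3, 'b': 7, 'c': 5}
decorated = [(-revs[h], h) for h in revs]
decorated.sort()
ordered = [n for (r, n) in decorated]
```

Because -7 < -5 < -3, the newest head ('b', rev 7) sorts first without needing a reverse sort or a custom comparator.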
833 833
834 834 # branchlookup returns a dict giving a list of branches for
835 835 # each head. A branch is defined as the tag of a node or
836 836 # the branch of the node's parents. If a node has multiple
837 837 # branch tags, tags are eliminated if they are visible from other
838 838 # branch tags.
839 839 #
840 840 # So, for this graph: a->b->c->d->e
841 841 # \ /
842 842 # aa -----/
843 843 # a has tag 2.6.12
844 844 # d has tag 2.6.13
845 845 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
846 846 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
847 847 # from the list.
848 848 #
849 849 # It is possible that more than one head will have the same branch tag.
850 850 # callers need to check the result for multiple heads under the same
851 851     # branch tag if that is a problem for them (i.e. checkout of a specific
852 852 # branch).
853 853 #
854 854 # passing in a specific branch will limit the depth of the search
855 855 # through the parents. It won't limit the branches returned in the
856 856 # result though.
857 857 def branchlookup(self, heads=None, branch=None):
858 858 if not heads:
859 859 heads = self.heads()
860 860 headt = [ h for h in heads ]
861 861 chlog = self.changelog
862 862 branches = {}
863 863 merges = []
864 864 seenmerge = {}
865 865
866 866 # traverse the tree once for each head, recording in the branches
867 867 # dict which tags are visible from this head. The branches
868 868 # dict also records which tags are visible from each tag
869 869 # while we traverse.
870 870 while headt or merges:
871 871 if merges:
872 872 n, found = merges.pop()
873 873 visit = [n]
874 874 else:
875 875 h = headt.pop()
876 876 visit = [h]
877 877 found = [h]
878 878 seen = {}
879 879 while visit:
880 880 n = visit.pop()
881 881 if n in seen:
882 882 continue
883 883 pp = chlog.parents(n)
884 884 tags = self.nodetags(n)
885 885 if tags:
886 886 for x in tags:
887 887 if x == 'tip':
888 888 continue
889 889 for f in found:
890 890 branches.setdefault(f, {})[n] = 1
891 891 branches.setdefault(n, {})[n] = 1
892 892 break
893 893 if n not in found:
894 894 found.append(n)
895 895 if branch in tags:
896 896 continue
897 897 seen[n] = 1
898 898 if pp[1] != nullid and n not in seenmerge:
899 899 merges.append((pp[1], [x for x in found]))
900 900 seenmerge[n] = 1
901 901 if pp[0] != nullid:
902 902 visit.append(pp[0])
903 903 # traverse the branches dict, eliminating branch tags from each
904 904 # head that are visible from another branch tag for that head.
905 905 out = {}
906 906 viscache = {}
907 907 for h in heads:
908 908 def visible(node):
909 909 if node in viscache:
910 910 return viscache[node]
911 911 ret = {}
912 912 visit = [node]
913 913 while visit:
914 914 x = visit.pop()
915 915 if x in viscache:
916 916 ret.update(viscache[x])
917 917 elif x not in ret:
918 918 ret[x] = 1
919 919 if x in branches:
920 920 visit[len(visit):] = branches[x].keys()
921 921 viscache[node] = ret
922 922 return ret
923 923 if h not in branches:
924 924 continue
925 925 # O(n^2), but somewhat limited. This only searches the
926 926 # tags visible from a specific head, not all the tags in the
927 927 # whole repo.
928 928 for b in branches[h]:
929 929 vis = False
930 930 for bb in branches[h].keys():
931 931 if b != bb:
932 932 if b in visible(bb):
933 933 vis = True
934 934 break
935 935 if not vis:
936 936 l = out.setdefault(h, [])
937 937 l[len(l):] = self.nodetags(b)
938 938 return out
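The elimination rule described in the comment above branchlookup() can be sketched standalone. This is a simplified model, not the real traversal: `reachable` is a hypothetical precomputed visibility map from each tag node to the set of tag nodes visible from it:

```python
# Drop a branch tag from a head's list when it is visible (reachable)
# from another branch tag on the same head, as in the a->...->e example
# above where 2.6.12 is eliminated because it is reachable from 2.6.13.
def eliminate_visible(tag_nodes, reachable):
    keep = []
    for b in tag_nodes:
        if not any(b in reachable[bb] for bb in tag_nodes if bb != b):
            keep.append(b)
    return keep
```

For the docstring's graph, with tag 'a' (2.6.12) reachable from tag 'd' (2.6.13), only 'd' survives.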
939 939
940 940 def branches(self, nodes):
941 941 if not nodes:
942 942 nodes = [self.changelog.tip()]
943 943 b = []
944 944 for n in nodes:
945 945 t = n
946 946 while 1:
947 947 p = self.changelog.parents(n)
948 948 if p[1] != nullid or p[0] == nullid:
949 949 b.append((t, n, p[0], p[1]))
950 950 break
951 951 n = p[0]
952 952 return b
953 953
954 954 def between(self, pairs):
955 955 r = []
956 956
957 957 for top, bottom in pairs:
958 958 n, l, i = top, [], 0
959 959 f = 1
960 960
961 961 while n != bottom:
962 962 p = self.changelog.parents(n)[0]
963 963 if i == f:
964 964 l.append(n)
965 965 f = f * 2
966 966 n = p
967 967 i += 1
968 968
969 969 r.append(l)
970 970
971 971 return r
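The sampling between() performs can be modeled standalone: walking first parents from top toward bottom, it records nodes at distances 1, 2, 4, 8, ... so findincoming() can binary-search for the first unknown revision. Here `parent` is a hypothetical first-parent map over a linear history:

```python
# Exponentially spaced sampling along a first-parent chain, mirroring
# the i/f bookkeeping in between(). `parent` maps node -> first parent.
def sample_between(parent, top, bottom):
    n, l, i, f = top, [], 0, 1
    while n != bottom:
        p = parent[n]
        if i == f:        # record nodes at distances 1, 2, 4, 8, ...
            l.append(n)
            f = f * 2
        n = p
        i += 1
    return l
```

On a linear history 5 -> 4 -> 3 -> 2 -> 1 -> 0, sampling from 5 down to 0 records the nodes at distances 1, 2, and 4 from the top.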
972 972
973 973 def findincoming(self, remote, base=None, heads=None, force=False):
974 974 """Return list of roots of the subsets of missing nodes from remote
975 975
976 976 If base dict is specified, assume that these nodes and their parents
977 977 exist on the remote side and that no child of a node of base exists
978 978 in both remote and self.
979 979         Furthermore, base will be updated to include the nodes that exist
980 980         in both self and remote but none of whose children exist in both.
981 981 If a list of heads is specified, return only nodes which are heads
982 982 or ancestors of these heads.
983 983
984 984 All the ancestors of base are in self and in remote.
985 985 All the descendants of the list returned are missing in self.
986 986 (and so we know that the rest of the nodes are missing in remote, see
987 987 outgoing)
988 988 """
989 989 m = self.changelog.nodemap
990 990 search = []
991 991 fetch = {}
992 992 seen = {}
993 993 seenbranch = {}
994 994         if base is None:
995 995 base = {}
996 996
997 997 if not heads:
998 998 heads = remote.heads()
999 999
1000 1000 if self.changelog.tip() == nullid:
1001 1001 base[nullid] = 1
1002 1002 if heads != [nullid]:
1003 1003 return [nullid]
1004 1004 return []
1005 1005
1006 1006 # assume we're closer to the tip than the root
1007 1007 # and start by examining the heads
1008 1008 self.ui.status(_("searching for changes\n"))
1009 1009
1010 1010 unknown = []
1011 1011 for h in heads:
1012 1012 if h not in m:
1013 1013 unknown.append(h)
1014 1014 else:
1015 1015 base[h] = 1
1016 1016
1017 1017 if not unknown:
1018 1018 return []
1019 1019
1020 1020 req = dict.fromkeys(unknown)
1021 1021 reqcnt = 0
1022 1022
1023 1023 # search through remote branches
1024 1024 # a 'branch' here is a linear segment of history, with four parts:
1025 1025 # head, root, first parent, second parent
1026 1026 # (a branch always has two parents (or none) by definition)
1027 1027 unknown = remote.branches(unknown)
1028 1028 while unknown:
1029 1029 r = []
1030 1030 while unknown:
1031 1031 n = unknown.pop(0)
1032 1032 if n[0] in seen:
1033 1033 continue
1034 1034
1035 1035 self.ui.debug(_("examining %s:%s\n")
1036 1036 % (short(n[0]), short(n[1])))
1037 1037 if n[0] == nullid: # found the end of the branch
1038 1038 pass
1039 1039 elif n in seenbranch:
1040 1040 self.ui.debug(_("branch already found\n"))
1041 1041 continue
1042 1042 elif n[1] and n[1] in m: # do we know the base?
1043 1043 self.ui.debug(_("found incomplete branch %s:%s\n")
1044 1044 % (short(n[0]), short(n[1])))
1045 1045 search.append(n) # schedule branch range for scanning
1046 1046 seenbranch[n] = 1
1047 1047 else:
1048 1048 if n[1] not in seen and n[1] not in fetch:
1049 1049 if n[2] in m and n[3] in m:
1050 1050 self.ui.debug(_("found new changeset %s\n") %
1051 1051 short(n[1]))
1052 1052 fetch[n[1]] = 1 # earliest unknown
1053 1053 for p in n[2:4]:
1054 1054 if p in m:
1055 1055 base[p] = 1 # latest known
1056 1056
1057 1057 for p in n[2:4]:
1058 1058 if p not in req and p not in m:
1059 1059 r.append(p)
1060 1060 req[p] = 1
1061 1061 seen[n[0]] = 1
1062 1062
1063 1063 if r:
1064 1064 reqcnt += 1
1065 1065 self.ui.debug(_("request %d: %s\n") %
1066 1066 (reqcnt, " ".join(map(short, r))))
1067 1067 for p in range(0, len(r), 10):
1068 1068 for b in remote.branches(r[p:p+10]):
1069 1069 self.ui.debug(_("received %s:%s\n") %
1070 1070 (short(b[0]), short(b[1])))
1071 1071 unknown.append(b)
1072 1072
1073 1073 # do binary search on the branches we found
1074 1074 while search:
1075 1075 n = search.pop(0)
1076 1076 reqcnt += 1
1077 1077 l = remote.between([(n[0], n[1])])[0]
1078 1078 l.append(n[1])
1079 1079 p = n[0]
1080 1080 f = 1
1081 1081 for i in l:
1082 1082 self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
1083 1083 if i in m:
1084 1084 if f <= 2:
1085 1085 self.ui.debug(_("found new branch changeset %s\n") %
1086 1086 short(p))
1087 1087 fetch[p] = 1
1088 1088 base[i] = 1
1089 1089 else:
1090 1090 self.ui.debug(_("narrowed branch search to %s:%s\n")
1091 1091 % (short(p), short(i)))
1092 1092 search.append((p, i))
1093 1093 break
1094 1094 p, f = i, f * 2
1095 1095
1096 1096 # sanity check our fetch list
1097 1097 for f in fetch.keys():
1098 1098 if f in m:
1099 1099                 raise repo.RepoError(_("already have changeset ") + short(f))
1100 1100
1101 1101 if base.keys() == [nullid]:
1102 1102 if force:
1103 1103 self.ui.warn(_("warning: repository is unrelated\n"))
1104 1104 else:
1105 1105 raise util.Abort(_("repository is unrelated"))
1106 1106
1107 1107 self.ui.note(_("found new changesets starting at ") +
1108 1108 " ".join([short(f) for f in fetch]) + "\n")
1109 1109
1110 1110 self.ui.debug(_("%d total queries\n") % reqcnt)
1111 1111
1112 1112 return fetch.keys()
1113 1113
1114 1114 def findoutgoing(self, remote, base=None, heads=None, force=False):
1115 1115 """Return list of nodes that are roots of subsets not in remote
1116 1116
1117 1117 If base dict is specified, assume that these nodes and their parents
1118 1118 exist on the remote side.
1119 1119 If a list of heads is specified, return only nodes which are heads
1120 1120 or ancestors of these heads, and return a second element which
1121 1121 contains all remote heads which get new children.
1122 1122 """
1123 1123         if base is None:
1124 1124 base = {}
1125 1125 self.findincoming(remote, base, heads, force=force)
1126 1126
1127 1127 self.ui.debug(_("common changesets up to ")
1128 1128 + " ".join(map(short, base.keys())) + "\n")
1129 1129
1130 1130 remain = dict.fromkeys(self.changelog.nodemap)
1131 1131
1132 1132 # prune everything remote has from the tree
1133 1133 del remain[nullid]
1134 1134 remove = base.keys()
1135 1135 while remove:
1136 1136 n = remove.pop(0)
1137 1137 if n in remain:
1138 1138 del remain[n]
1139 1139 for p in self.changelog.parents(n):
1140 1140 remove.append(p)
1141 1141
1142 1142 # find every node whose parents have been pruned
1143 1143 subset = []
1144 1144 # find every remote head that will get new children
1145 1145 updated_heads = {}
1146 1146 for n in remain:
1147 1147 p1, p2 = self.changelog.parents(n)
1148 1148 if p1 not in remain and p2 not in remain:
1149 1149 subset.append(n)
1150 1150 if heads:
1151 1151 if p1 in heads:
1152 1152 updated_heads[p1] = True
1153 1153 if p2 in heads:
1154 1154 updated_heads[p2] = True
1155 1155
1156 1156 # this is the set of all roots we have to push
1157 1157 if heads:
1158 1158 return subset, updated_heads.keys()
1159 1159 else:
1160 1160 return subset
1161 1161
1162 1162 def pull(self, remote, heads=None, force=False):
1163 1163 l = self.lock()
1164 1164
1165 1165 fetch = self.findincoming(remote, force=force)
1166 1166 if fetch == [nullid]:
1167 1167 self.ui.status(_("requesting all changes\n"))
1168 1168
1169 1169 if not fetch:
1170 1170 self.ui.status(_("no changes found\n"))
1171 1171 return 0
1172 1172
1173 1173 if heads is None:
1174 1174 cg = remote.changegroup(fetch, 'pull')
1175 1175 else:
1176 1176 cg = remote.changegroupsubset(fetch, heads, 'pull')
1177 1177 return self.addchangegroup(cg, 'pull')
1178 1178
1179 1179 def push(self, remote, force=False, revs=None):
1180 1180 # there are two ways to push to remote repo:
1181 1181 #
1182 1182 # addchangegroup assumes local user can lock remote
1183 1183 # repo (local filesystem, old ssh servers).
1184 1184 #
1185 1185 # unbundle assumes local user cannot lock remote repo (new ssh
1186 1186 # servers, http servers).
1187 1187
1188 1188 if remote.capable('unbundle'):
1189 1189 return self.push_unbundle(remote, force, revs)
1190 1190 return self.push_addchangegroup(remote, force, revs)
1191 1191
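push() above picks between the two strategies by asking the server for its capabilities. A hedged sketch of that dispatch; the Remote class and the strategy names are simplified stand-ins for the real repository objects:

```python
class Remote:
    """Hypothetical stand-in for a remote repository object."""
    def __init__(self, caps):
        self.caps = set(caps)

    def capable(self, name):
        return name in self.caps

def choose_push_strategy(remote):
    # unbundle: the server takes its own lock and re-checks heads
    # (new ssh servers, http servers).
    # addchangegroup: the client must be able to lock the remote repo
    # (local filesystem, old ssh servers).
    return 'unbundle' if remote.capable('unbundle') else 'addchangegroup'
```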
1192 1192 def prepush(self, remote, force, revs):
1193 1193 base = {}
1194 1194 remote_heads = remote.heads()
1195 1195 inc = self.findincoming(remote, base, remote_heads, force=force)
1196 1196 if not force and inc:
1197 1197 self.ui.warn(_("abort: unsynced remote changes!\n"))
1198 1198 self.ui.status(_("(did you forget to sync?"
1199 1199 " use push -f to force)\n"))
1200 1200 return None, 1
1201 1201
1202 1202 update, updated_heads = self.findoutgoing(remote, base, remote_heads)
1203 1203 if revs is not None:
1204 1204 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
1205 1205 else:
1206 1206 bases, heads = update, self.changelog.heads()
1207 1207
1208 1208 if not bases:
1209 1209 self.ui.status(_("no changes found\n"))
1210 1210 return None, 1
1211 1211 elif not force:
1212 1212 # FIXME we don't properly detect creation of new heads
1213 1213 # in the push -r case, assume the user knows what he's doing
1214 1214 if not revs and len(remote_heads) < len(heads) \
1215 1215 and remote_heads != [nullid]:
1216 1216 self.ui.warn(_("abort: push creates new remote branches!\n"))
1217 1217 self.ui.status(_("(did you forget to merge?"
1218 1218 " use push -f to force)\n"))
1219 1219 return None, 1
1220 1220
1221 1221 if revs is None:
1222 1222 cg = self.changegroup(update, 'push')
1223 1223 else:
1224 1224 cg = self.changegroupsubset(update, revs, 'push')
1225 1225 return cg, remote_heads
1226 1226
1227 1227 def push_addchangegroup(self, remote, force, revs):
1228 1228 lock = remote.lock()
1229 1229
1230 1230 ret = self.prepush(remote, force, revs)
1231 1231 if ret[0] is not None:
1232 1232 cg, remote_heads = ret
1233 1233 return remote.addchangegroup(cg, 'push')
1234 1234 return ret[1]
1235 1235
1236 1236 def push_unbundle(self, remote, force, revs):
1237 1237 # local repo finds heads on server, finds out what revs it
1238 1238 # must push. once revs transferred, if server finds it has
1239 1239 # different heads (someone else won commit/push race), server
1240 1240 # aborts.
1241 1241
1242 1242 ret = self.prepush(remote, force, revs)
1243 1243 if ret[0] is not None:
1244 1244 cg, remote_heads = ret
1245 1245 if force: remote_heads = ['force']
1246 1246 return remote.unbundle(cg, remote_heads, 'push')
1247 1247 return ret[1]
1248 1248
1249 1249 def changegroupsubset(self, bases, heads, source):
1250 1250 """This function generates a changegroup consisting of all the nodes
1251 1251 that are descendants of any of the bases, and ancestors of any of
1252 1252 the heads.
1253 1253
1254 1254 It is fairly complex as determining which filenodes and which
1255 1255 manifest nodes need to be included for the changeset to be complete
1256 1256 is non-trivial.
1257 1257
1258 1258 Another wrinkle is doing the reverse, figuring out which changeset in
1259 1259 the changegroup a particular filenode or manifestnode belongs to."""
1260 1260
1261 1261 self.hook('preoutgoing', throw=True, source=source)
1262 1262
1263 1263 # Set up some initial variables
1264 1264 # Make it easy to refer to self.changelog
1265 1265 cl = self.changelog
1266 1266 # msng is short for missing - compute the list of changesets in this
1267 1267 # changegroup.
1268 1268 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1269 1269 # Some bases may turn out to be superfluous, and some heads may be
1270 1270 # too. nodesbetween will return the minimal set of bases and heads
1271 1271 # necessary to re-create the changegroup.
1272 1272
1273 1273 # Known heads are the list of heads that it is assumed the recipient
1274 1274 # of this changegroup will know about.
1275 1275 knownheads = {}
1276 1276 # We assume that all parents of bases are known heads.
1277 1277 for n in bases:
1278 1278 for p in cl.parents(n):
1279 1279 if p != nullid:
1280 1280 knownheads[p] = 1
1281 1281 knownheads = knownheads.keys()
1282 1282 if knownheads:
1283 1283 # Now that we know what heads are known, we can compute which
1284 1284 # changesets are known. The recipient must know about all
1285 1285 # changesets required to reach the known heads from the null
1286 1286 # changeset.
1287 1287 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1288 1288 junk = None
1289 1289 # Transform the list into an ersatz set.
1290 1290 has_cl_set = dict.fromkeys(has_cl_set)
1291 1291 else:
1292 1292 # If there were no known heads, the recipient cannot be assumed to
1293 1293 # know about any changesets.
1294 1294 has_cl_set = {}
1295 1295
1296 1296 # Make it easy to refer to self.manifest
1297 1297 mnfst = self.manifest
1298 1298 # We don't know which manifests are missing yet
1299 1299 msng_mnfst_set = {}
1300 1300 # Nor do we know which filenodes are missing.
1301 1301 msng_filenode_set = {}
1302 1302
1303 1303 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1304 1304 junk = None
1305 1305
1306 1306 # A changeset always belongs to itself, so the changenode lookup
1307 1307 # function for a changenode is identity.
1308 1308 def identity(x):
1309 1309 return x
1310 1310
1311 1311 # A function generating function. Sets up an environment for the
1312 1312 # inner function.
1313 1313 def cmp_by_rev_func(revlog):
1314 1314 # Compare two nodes by their revision number in the environment's
1315 1315 # revision history. Since the revision number both represents the
1316 1316 # most efficient order to read the nodes in, and represents a
1317 1317 # topological sorting of the nodes, this function is often useful.
1318 1318 def cmp_by_rev(a, b):
1319 1319 return cmp(revlog.rev(a), revlog.rev(b))
1320 1320 return cmp_by_rev
1321 1321
1322 1322 # If we determine that a particular file or manifest node must be a
1323 1323 # node that the recipient of the changegroup will already have, we can
1324 1324 # also assume the recipient will have all the parents. This function
1325 1325 # prunes them from the set of missing nodes.
1326 1326 def prune_parents(revlog, hasset, msngset):
1327 1327 haslst = hasset.keys()
1328 1328 haslst.sort(cmp_by_rev_func(revlog))
1329 1329 for node in haslst:
1330 1330 parentlst = [p for p in revlog.parents(node) if p != nullid]
1331 1331 while parentlst:
1332 1332 n = parentlst.pop()
1333 1333 if n not in hasset:
1334 1334 hasset[n] = 1
1335 1335 p = [p for p in revlog.parents(n) if p != nullid]
1336 1336 parentlst.extend(p)
1337 1337 for n in hasset:
1338 1338 msngset.pop(n, None)
1339 1339
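prune_parents above rests on one invariant: if the recipient has a node, it must also have every ancestor of that node. A toy version over an illustrative parent table, expanding the "has" set through ancestors and dropping them from the missing set:

```python
def prune_parents_sketch(parents, hasset, msngset):
    # Transitively mark every ancestor of a known-present node as
    # present, then remove all known-present nodes from the missing set.
    stack = list(hasset)
    while stack:
        n = stack.pop()
        for p in parents[n]:
            if p not in hasset:
                hasset[p] = 1
                stack.append(p)
    for n in hasset:
        msngset.pop(n, None)

# Linear history a <- b <- c <- d; the recipient is known to have c.
parents = {'a': [], 'b': ['a'], 'c': ['b'], 'd': ['c']}
has = {'c': 1}
missing = {'a': 1, 'b': 1, 'd': 1}
prune_parents_sketch(parents, has, missing)
```

Only d, the one node newer than anything the recipient has, survives in the missing set.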
1340 1340 # This is a function generating function used to set up an environment
1341 1341 # for the inner function to execute in.
1342 1342 def manifest_and_file_collector(changedfileset):
1343 1343 # This is an information gathering function that gathers
1344 1344 # information from each changeset node that goes out as part of
1345 1345 # the changegroup. The information gathered is a list of which
1346 1346 # manifest nodes are potentially required (the recipient may
1347 1347 # already have them) and total list of all files which were
1348 1348 # changed in any changeset in the changegroup.
1349 1349 #
1350 1350 # We also remember the first changenode we saw any manifest
1351 1351 # referenced by so we can later determine which changenode 'owns'
1352 1352 # the manifest.
1353 1353 def collect_manifests_and_files(clnode):
1354 1354 c = cl.read(clnode)
1355 1355 for f in c[3]:
1356 1356 # This is to make sure we only have one instance of each
1357 1357 # filename string for each filename.
1358 1358 changedfileset.setdefault(f, f)
1359 1359 msng_mnfst_set.setdefault(c[0], clnode)
1360 1360 return collect_manifests_and_files
1361 1361
1362 1362 # Figure out which manifest nodes (of the ones we think might be part
1363 1363 # of the changegroup) the recipient must know about and remove them
1364 1364 # from the changegroup.
1365 1365 def prune_manifests():
1366 1366 has_mnfst_set = {}
1367 1367 for n in msng_mnfst_set:
1368 1368 # If a 'missing' manifest thinks it belongs to a changenode
1369 1369 # the recipient is assumed to have, obviously the recipient
1370 1370 # must have that manifest.
1371 1371 linknode = cl.node(mnfst.linkrev(n))
1372 1372 if linknode in has_cl_set:
1373 1373 has_mnfst_set[n] = 1
1374 1374 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1375 1375
1376 1376 # Use the information collected in collect_manifests_and_files to say
1377 1377 # which changenode any manifestnode belongs to.
1378 1378 def lookup_manifest_link(mnfstnode):
1379 1379 return msng_mnfst_set[mnfstnode]
1380 1380
1381 1381 # A function generating function that sets up the initial environment
1382 1382 # for the inner function.
1383 1383 def filenode_collector(changedfiles):
1384 1384 next_rev = [0]
1385 1385 # This gathers information from each manifestnode included in the
1386 1386 # changegroup about which filenodes the manifest node references
1387 1387 # so we can include those in the changegroup too.
1388 1388 #
1389 1389 # It also remembers which changenode each filenode belongs to. It
1390 1390 # does this by assuming that a filenode belongs to the changenode
1391 1391 # that the first manifest referencing it belongs to.
1392 1392 def collect_msng_filenodes(mnfstnode):
1393 1393 r = mnfst.rev(mnfstnode)
1394 1394 if r == next_rev[0]:
1395 1395 # If the rev we looked at last immediately precedes this one,
1396 1396 # we only need to apply the delta.
1397 1397 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1398 1398 # For each line in the delta
1399 1399 for dline in delta.splitlines():
1400 1400 # get the filename and filenode for that line
1401 1401 f, fnode = dline.split('\0')
1402 1402 fnode = bin(fnode[:40])
1403 1403 f = changedfiles.get(f, None)
1404 1404 # And if the file is in the list of files we care
1405 1405 # about.
1406 1406 if f is not None:
1407 1407 # Get the changenode this manifest belongs to
1408 1408 clnode = msng_mnfst_set[mnfstnode]
1409 1409 # Create the set of filenodes for the file if
1410 1410 # there isn't one already.
1411 1411 ndset = msng_filenode_set.setdefault(f, {})
1412 1412 # And set the filenode's changelog node to the
1413 1413 # manifest's if it hasn't been set already.
1414 1414 ndset.setdefault(fnode, clnode)
1415 1415 else:
1416 1416 # Otherwise we need a full manifest.
1417 1417 m = mnfst.read(mnfstnode)
1418 1418 # For every file we care about.
1419 1419 for f in changedfiles:
1420 1420 fnode = m.get(f, None)
1421 1421 # If it's in the manifest
1422 1422 if fnode is not None:
1423 1423 # See comments above.
1424 1424 clnode = msng_mnfst_set[mnfstnode]
1425 1425 ndset = msng_filenode_set.setdefault(f, {})
1426 1426 ndset.setdefault(fnode, clnode)
1427 1427 # Remember the revision we hope to see next.
1428 1428 next_rev[0] = r + 1
1429 1429 return collect_msng_filenodes
1430 1430
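collect_msng_filenodes above parses manifest delta lines of the form "filename\0&lt;40 hex characters of the filenode&gt;". A self-contained sketch of that parse (the sample filename and node are fabricated for illustration; the code above names binascii's unhexlify `bin`):

```python
from binascii import unhexlify as bin

# One manifest line: filename, NUL, then the filenode as 40 hex digits.
line = "foo/bar.c\0" + "ab" * 20

f, fnode = line.split('\0')
fnode = bin(fnode[:40])  # 40 hex chars -> 20-byte binary node
```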
1431 1431 # We have a list of filenodes we think we need for a file, let's remove
1432 1432 # all those we know the recipient must have.
1433 1433 def prune_filenodes(f, filerevlog):
1434 1434 msngset = msng_filenode_set[f]
1435 1435 hasset = {}
1436 1436 # If a 'missing' filenode thinks it belongs to a changenode we
1437 1437 # assume the recipient must have, then the recipient must have
1438 1438 # that filenode.
1439 1439 for n in msngset:
1440 1440 clnode = cl.node(filerevlog.linkrev(n))
1441 1441 if clnode in has_cl_set:
1442 1442 hasset[n] = 1
1443 1443 prune_parents(filerevlog, hasset, msngset)
1444 1444
1445 1445 # A function generating function that sets up a context for the
1446 1446 # inner function.
1447 1447 def lookup_filenode_link_func(fname):
1448 1448 msngset = msng_filenode_set[fname]
1449 1449 # Lookup the changenode the filenode belongs to.
1450 1450 def lookup_filenode_link(fnode):
1451 1451 return msngset[fnode]
1452 1452 return lookup_filenode_link
1453 1453
1454 1454 # Now that we have all these utility functions to help out and
1455 1455 # logically divide up the task, generate the group.
1456 1456 def gengroup():
1457 1457 # The set of changed files starts empty.
1458 1458 changedfiles = {}
1459 1459 # Create a changenode group generator that will call our functions
1460 1460 # back to lookup the owning changenode and collect information.
1461 1461 group = cl.group(msng_cl_lst, identity,
1462 1462 manifest_and_file_collector(changedfiles))
1463 1463 for chnk in group:
1464 1464 yield chnk
1465 1465
1466 1466 # The list of manifests has been collected by the generator
1467 1467 # calling our functions back.
1468 1468 prune_manifests()
1469 1469 msng_mnfst_lst = msng_mnfst_set.keys()
1470 1470 # Sort the manifestnodes by revision number.
1471 1471 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1472 1472 # Create a generator for the manifestnodes that calls our lookup
1473 1473 # and data collection functions back.
1474 1474 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1475 1475 filenode_collector(changedfiles))
1476 1476 for chnk in group:
1477 1477 yield chnk
1478 1478
1479 1479 # These are no longer needed, dereference and toss the memory for
1480 1480 # them.
1481 1481 msng_mnfst_lst = None
1482 1482 msng_mnfst_set.clear()
1483 1483
1484 1484 changedfiles = changedfiles.keys()
1485 1485 changedfiles.sort()
1486 1486 # Go through all our files in order sorted by name.
1487 1487 for fname in changedfiles:
1488 1488 filerevlog = self.file(fname)
1489 1489 # Toss out the filenodes that the recipient isn't really
1490 1490 # missing.
1491 1491 if msng_filenode_set.has_key(fname):
1492 1492 prune_filenodes(fname, filerevlog)
1493 1493 msng_filenode_lst = msng_filenode_set[fname].keys()
1494 1494 else:
1495 1495 msng_filenode_lst = []
1496 1496 # If any filenodes are left, generate the group for them,
1497 1497 # otherwise don't bother.
1498 1498 if len(msng_filenode_lst) > 0:
1499 1499 yield changegroup.genchunk(fname)
1500 1500 # Sort the filenodes by their revision #
1501 1501 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1502 1502 # Create a group generator and only pass in a changenode
1503 1503 # lookup function as we need to collect no information
1504 1504 # from filenodes.
1505 1505 group = filerevlog.group(msng_filenode_lst,
1506 1506 lookup_filenode_link_func(fname))
1507 1507 for chnk in group:
1508 1508 yield chnk
1509 1509 if msng_filenode_set.has_key(fname):
1510 1510 # Don't need this anymore, toss it to free memory.
1511 1511 del msng_filenode_set[fname]
1512 1512 # Signal that no more groups are left.
1513 1513 yield changegroup.closechunk()
1514 1514
1515 1515 if msng_cl_lst:
1516 1516 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1517 1517
1518 1518 return util.chunkbuffer(gengroup())
1519 1519
1520 1520 def changegroup(self, basenodes, source):
1521 1521 """Generate a changegroup of all nodes that we have that a recipient
1522 1522 doesn't.
1523 1523
1524 1524 This is much easier than the previous function as we can assume that
1525 1525 the recipient has any changenode we aren't sending them."""
1526 1526
1527 1527 self.hook('preoutgoing', throw=True, source=source)
1528 1528
1529 1529 cl = self.changelog
1530 1530 nodes = cl.nodesbetween(basenodes, None)[0]
1531 1531 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1532 1532
1533 1533 def identity(x):
1534 1534 return x
1535 1535
1536 1536 def gennodelst(revlog):
1537 1537 for r in xrange(0, revlog.count()):
1538 1538 n = revlog.node(r)
1539 1539 if revlog.linkrev(n) in revset:
1540 1540 yield n
1541 1541
1542 1542 def changed_file_collector(changedfileset):
1543 1543 def collect_changed_files(clnode):
1544 1544 c = cl.read(clnode)
1545 1545 for fname in c[3]:
1546 1546 changedfileset[fname] = 1
1547 1547 return collect_changed_files
1548 1548
1549 1549 def lookuprevlink_func(revlog):
1550 1550 def lookuprevlink(n):
1551 1551 return cl.node(revlog.linkrev(n))
1552 1552 return lookuprevlink
1553 1553
1554 1554 def gengroup():
1555 1555 # construct a list of all changed files
1556 1556 changedfiles = {}
1557 1557
1558 1558 for chnk in cl.group(nodes, identity,
1559 1559 changed_file_collector(changedfiles)):
1560 1560 yield chnk
1561 1561 changedfiles = changedfiles.keys()
1562 1562 changedfiles.sort()
1563 1563
1564 1564 mnfst = self.manifest
1565 1565 nodeiter = gennodelst(mnfst)
1566 1566 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1567 1567 yield chnk
1568 1568
1569 1569 for fname in changedfiles:
1570 1570 filerevlog = self.file(fname)
1571 1571 nodeiter = gennodelst(filerevlog)
1572 1572 nodeiter = list(nodeiter)
1573 1573 if nodeiter:
1574 1574 yield changegroup.genchunk(fname)
1575 1575 lookup = lookuprevlink_func(filerevlog)
1576 1576 for chnk in filerevlog.group(nodeiter, lookup):
1577 1577 yield chnk
1578 1578
1579 1579 yield changegroup.closechunk()
1580 1580
1581 1581 if nodes:
1582 1582 self.hook('outgoing', node=hex(nodes[0]), source=source)
1583 1583
1584 1584 return util.chunkbuffer(gengroup())
1585 1585
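Both gengroup generators above emit a framed byte stream: a changelog group, a manifest group, then per-file groups each announced by a filename chunk, with a zero-length chunk closing each group. A hedged sketch of that framing, mirroring changegroup.genchunk/closechunk in spirit with simplified stand-ins (each chunk is prefixed by a big-endian length that includes the 4-byte header itself):

```python
import struct

def genchunk(data):
    # Length prefix counts the 4-byte header plus the payload.
    return struct.pack('>l', len(data) + 4) + data

def closechunk():
    # A zero-length chunk terminates the current group.
    return struct.pack('>l', 0)

stream = genchunk(b'changelog-data') + closechunk()
```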
1586 1586 def addchangegroup(self, source, srctype):
1587 1587 """add changegroup to repo.
1588 1588 returns number of heads modified or added + 1."""
1589 1589
1590 1590 def csmap(x):
1591 1591 self.ui.debug(_("add changeset %s\n") % short(x))
1592 1592 return cl.count()
1593 1593
1594 1594 def revmap(x):
1595 1595 return cl.rev(x)
1596 1596
1597 1597 if not source:
1598 1598 return 0
1599 1599
1600 1600 self.hook('prechangegroup', throw=True, source=srctype)
1601 1601
1602 1602 changesets = files = revisions = 0
1603 1603
1604 1604 tr = self.transaction()
1605 1605
1606 1606 # write changelog data to temp files so concurrent readers will not see
1607 1607 # inconsistent view
1608 1608 cl = None
1609 1609 try:
1610 1610 cl = appendfile.appendchangelog(self.opener, self.changelog.version)
1611 1611
1612 1612 oldheads = len(cl.heads())
1613 1613
1614 1614 # pull off the changeset group
1615 1615 self.ui.status(_("adding changesets\n"))
1616 1616 cor = cl.count() - 1
1617 1617 chunkiter = changegroup.chunkiter(source)
1618 1618 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1619 1619 raise util.Abort(_("received changelog group is empty"))
1620 1620 cnr = cl.count() - 1
1621 1621 changesets = cnr - cor
1622 1622
1623 1623 # pull off the manifest group
1624 1624 self.ui.status(_("adding manifests\n"))
1625 1625 chunkiter = changegroup.chunkiter(source)
1626 1626 # no need to check for empty manifest group here:
1627 1627 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1628 1628 # no new manifest will be created and the manifest group will
1629 1629 # be empty during the pull
1630 1630 self.manifest.addgroup(chunkiter, revmap, tr)
1631 1631
1632 1632 # process the files
1633 1633 self.ui.status(_("adding file changes\n"))
1634 1634 while True:
1635 1635 f = changegroup.getchunk(source)
1636 1636 if not f:
1637 1637 break
1638 1638 self.ui.debug(_("adding %s revisions\n") % f)
1639 1639 fl = self.file(f)
1640 1640 o = fl.count()
1641 1641 chunkiter = changegroup.chunkiter(source)
1642 1642 if fl.addgroup(chunkiter, revmap, tr) is None:
1643 1643 raise util.Abort(_("received file revlog group is empty"))
1644 1644 revisions += fl.count() - o
1645 1645 files += 1
1646 1646
1647 1647 cl.writedata()
1648 1648 finally:
1649 1649 if cl:
1650 1650 cl.cleanup()
1651 1651
1652 1652 # make changelog see real files again
1653 1653 self.changelog = changelog.changelog(self.opener, self.changelog.version)
1654 1654 self.changelog.checkinlinesize(tr)
1655 1655
1656 1656 newheads = len(self.changelog.heads())
1657 1657 heads = ""
1658 1658 if oldheads and newheads != oldheads:
1659 1659 heads = _(" (%+d heads)") % (newheads - oldheads)
1660 1660
1661 1661 self.ui.status(_("added %d changesets"
1662 1662 " with %d changes to %d files%s\n")
1663 1663 % (changesets, revisions, files, heads))
1664 1664
1665 1665 if changesets > 0:
1666 1666 self.hook('pretxnchangegroup', throw=True,
1667 1667 node=hex(self.changelog.node(cor+1)), source=srctype)
1668 1668
1669 1669 tr.close()
1670 1670
1671 1671 if changesets > 0:
1672 1672 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1673 1673 source=srctype)
1674 1674
1675 1675 for i in range(cor + 1, cnr + 1):
1676 1676 self.hook("incoming", node=hex(self.changelog.node(i)),
1677 1677 source=srctype)
1678 1678
1679 1679 return newheads - oldheads + 1
1680 1680
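addchangegroup above routes changelog writes through appendfile so concurrent readers never observe a half-written log. The underlying idea is the classic write-to-temp-then-publish pattern; a generic sketch under that assumption (paths and names are illustrative, not the appendfile API):

```python
import os
import tempfile

def publish_atomically(path, data):
    # Write the new contents to a temp file in the same directory, then
    # rename into place: on POSIX, readers see the old file or the new
    # one, never a partially written mixture.
    d = os.path.dirname(path) or '.'
    fd, tmp = tempfile.mkstemp(dir=d)
    with os.fdopen(fd, 'wb') as f:
        f.write(data)
    os.rename(tmp, path)
```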
1681 1681 def update(self, node, allow=False, force=False, choose=None,
1682 1682 moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
1683 1683 pl = self.dirstate.parents()
1684 1684 if not force and pl[1] != nullid:
1685 1685 raise util.Abort(_("outstanding uncommitted merges"))
1686 1686
1687 1687 err = False
1688 1688
1689 1689 p1, p2 = pl[0], node
1690 1690 pa = self.changelog.ancestor(p1, p2)
1691 1691 m1n = self.changelog.read(p1)[0]
1692 1692 m2n = self.changelog.read(p2)[0]
1693 1693 man = self.manifest.ancestor(m1n, m2n)
1694 1694 m1 = self.manifest.read(m1n)
1695 1695 mf1 = self.manifest.readflags(m1n)
1696 1696 m2 = self.manifest.read(m2n).copy()
1697 1697 mf2 = self.manifest.readflags(m2n)
1698 1698 ma = self.manifest.read(man)
1699 1699 mfa = self.manifest.readflags(man)
1700 1700
1701 1701 modified, added, removed, deleted, unknown = self.changes()
1702 1702
1703 1703 # is this a jump, or a merge? i.e. is there a linear path
1704 1704 # from p1 to p2?
1705 1705 linear_path = (pa == p1 or pa == p2)
1706 1706
1707 1707 if allow and linear_path:
1708 1708 raise util.Abort(_("there is nothing to merge, just use "
1709 1709 "'hg update' or look at 'hg heads'"))
1710 1710 if allow and not forcemerge:
1711 1711 if modified or added or removed:
1712 1712 raise util.Abort(_("outstanding uncommitted changes"))
1713 1713
1714 1714 if not forcemerge and not force:
1715 1715 for f in unknown:
1716 1716 if f in m2:
1717 1717 t1 = self.wread(f)
1718 1718 t2 = self.file(f).read(m2[f])
1719 1719 if cmp(t1, t2) != 0:
1720 1720 raise util.Abort(_("'%s' already exists in the working"
1721 1721 " dir and differs from remote") % f)
1722 1722
1723 1723 # resolve the manifest to determine which files
1724 1724 # we care about merging
1725 1725 self.ui.note(_("resolving manifests\n"))
1726 1726 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1727 1727 (force, allow, moddirstate, linear_path))
1728 1728 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1729 1729 (short(man), short(m1n), short(m2n)))
1730 1730
1731 1731 merge = {}
1732 1732 get = {}
1733 1733 remove = []
1734 1734
1735 1735 # construct a working dir manifest
1736 1736 mw = m1.copy()
1737 1737 mfw = mf1.copy()
1738 1738 umap = dict.fromkeys(unknown)
1739 1739
1740 1740 for f in added + modified + unknown:
1741 1741 mw[f] = ""
1742 1742 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1743 1743
1744 1744 if moddirstate and not wlock:
1745 1745 wlock = self.wlock()
1746 1746
1747 1747 for f in deleted + removed:
1748 1748 if f in mw:
1749 1749 del mw[f]
1750 1750
1751 1751 # If we're jumping between revisions (as opposed to merging),
1752 1752 # and if neither the working directory nor the target rev has
1753 1753 # the file, then we need to remove it from the dirstate, to
1754 1754 # prevent the dirstate from listing the file when it is no
1755 1755 # longer in the manifest.
1756 1756 if moddirstate and linear_path and f not in m2:
1757 1757 self.dirstate.forget((f,))
1758 1758
1759 1759 # Compare manifests
1760 1760 for f, n in mw.iteritems():
1761 1761 if choose and not choose(f):
1762 1762 continue
1763 1763 if f in m2:
1764 1764 s = 0
1765 1765
1766 1766 # is the wfile new since m1, and match m2?
1767 1767 if f not in m1:
1768 1768 t1 = self.wread(f)
1769 1769 t2 = self.file(f).read(m2[f])
1770 1770 if cmp(t1, t2) == 0:
1771 1771 n = m2[f]
1772 1772 del t1, t2
1773 1773
1774 1774 # are files different?
1775 1775 if n != m2[f]:
1776 1776 a = ma.get(f, nullid)
1777 1777 # are both different from the ancestor?
1778 1778 if n != a and m2[f] != a:
1779 1779 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1780 1780 # merge executable bits
1781 1781 # "if we changed or they changed, change in merge"
1782 1782 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1783 1783 mode = ((a^b) | (a^c)) ^ a
1784 1784 merge[f] = (m1.get(f, nullid), m2[f], mode)
1785 1785 s = 1
1786 1786 # are we clobbering?
1787 1787 # is remote's version newer?
1788 1788 # or are we going back in time?
1789 1789 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1790 1790 self.ui.debug(_(" remote %s is newer, get\n") % f)
1791 1791 get[f] = m2[f]
1792 1792 s = 1
1793 1793 elif f in umap or f in added:
1794 1794 # this unknown file is the same as the checkout
1795 1795 # we need to reset the dirstate if the file was added
1796 1796 get[f] = m2[f]
1797 1797
1798 1798 if not s and mfw[f] != mf2[f]:
1799 1799 if force:
1800 1800 self.ui.debug(_(" updating permissions for %s\n") % f)
1801 1801 util.set_exec(self.wjoin(f), mf2[f])
1802 1802 else:
1803 1803 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1804 1804 mode = ((a^b) | (a^c)) ^ a
1805 1805 if mode != b:
1806 1806 self.ui.debug(_(" updating permissions for %s\n")
1807 1807 % f)
1808 1808 util.set_exec(self.wjoin(f), mode)
1809 1809 del m2[f]
1810 1810 elif f in ma:
1811 1811 if n != ma[f]:
1812 1812 r = _("d")
1813 1813 if not force and (linear_path or allow):
1814 1814 r = self.ui.prompt(
1815 1815 (_(" local changed %s which remote deleted\n") % f) +
1816 1816 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1817 1817 if r == _("d"):
1818 1818 remove.append(f)
1819 1819 else:
1820 1820 self.ui.debug(_("other deleted %s\n") % f)
1821 1821 remove.append(f) # other deleted it
1822 1822 else:
1823 1823 # file is created on branch or in working directory
1824 1824 if force and f not in umap:
1825 1825 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1826 1826 remove.append(f)
1827 1827 elif n == m1.get(f, nullid): # same as parent
1828 1828 if p2 == pa: # going backwards?
1829 1829 self.ui.debug(_("remote deleted %s\n") % f)
1830 1830 remove.append(f)
1831 1831 else:
1832 1832 self.ui.debug(_("local modified %s, keeping\n") % f)
1833 1833 else:
1834 1834 self.ui.debug(_("working dir created %s, keeping\n") % f)
1835 1835
1836 1836 for f, n in m2.iteritems():
1837 1837 if choose and not choose(f):
1838 1838 continue
1839 1839 if f[0] == "/":
1840 1840 continue
1841 1841 if f in ma and n != ma[f]:
1842 1842 r = _("k")
1843 1843 if not force and (linear_path or allow):
1844 1844 r = self.ui.prompt(
1845 1845 (_("remote changed %s which local deleted\n") % f) +
1846 1846 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1847 1847 if r == _("k"):
1848 1848 get[f] = n
1849 1849 elif f not in ma:
1850 1850 self.ui.debug(_("remote created %s\n") % f)
1851 1851 get[f] = n
1852 1852 else:
1853 1853 if force or p2 == pa: # going backwards?
1854 1854 self.ui.debug(_("local deleted %s, recreating\n") % f)
1855 1855 get[f] = n
1856 1856 else:
1857 1857 self.ui.debug(_("local deleted %s\n") % f)
1858 1858
1859 1859 del mw, m1, m2, ma
1860 1860
1861 1861 if force:
1862 1862 for f in merge:
1863 1863 get[f] = merge[f][1]
1864 1864 merge = {}
1865 1865
1866 1866 if linear_path or force:
1867 1867 # we don't need to do any magic, just jump to the new rev
1868 1868 branch_merge = False
1869 1869 p1, p2 = p2, nullid
1870 1870 else:
1871 1871 if not allow:
1872 1872 self.ui.status(_("this update spans a branch"
1873 1873 " affecting the following files:\n"))
1874 1874 fl = merge.keys() + get.keys()
1875 1875 fl.sort()
1876 1876 for f in fl:
1877 1877 cf = ""
1878 1878 if f in merge:
1879 1879 cf = _(" (resolve)")
1880 1880 self.ui.status(" %s%s\n" % (f, cf))
1881 1881 self.ui.warn(_("aborting update spanning branches!\n"))
1882 1882 self.ui.status(_("(use 'hg merge' to merge across branches"
1883 1883 " or 'hg update -C' to lose changes)\n"))
1884 1884 return 1
1885 1885 branch_merge = True
1886 1886
1887 1887 xp1 = hex(p1)
1888 1888 xp2 = hex(p2)
1889 1889 if p2 == nullid: xxp2 = ''
1890 1890 else: xxp2 = xp2
1891 1891
1892 1892 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1893 1893
1894 1894 # get the files we don't need to change
1895 1895 files = get.keys()
1896 1896 files.sort()
1897 1897 for f in files:
1898 1898 if f[0] == "/":
1899 1899 continue
1900 1900 self.ui.note(_("getting %s\n") % f)
1901 1901 t = self.file(f).read(get[f])
1902 1902 self.wwrite(f, t)
1903 1903 util.set_exec(self.wjoin(f), mf2[f])
1904 1904 if moddirstate:
1905 1905 if branch_merge:
1906 1906 self.dirstate.update([f], 'n', st_mtime=-1)
1907 1907 else:
1908 1908 self.dirstate.update([f], 'n')
1909 1909
1910 1910 # merge the tricky bits
1911 1911 failedmerge = []
1912 1912 files = merge.keys()
1913 1913 files.sort()
1914 1914 for f in files:
1915 1915 self.ui.status(_("merging %s\n") % f)
1916 1916 my, other, flag = merge[f]
1917 1917 ret = self.merge3(f, my, other, xp1, xp2)
1918 1918 if ret:
1919 1919 err = True
1920 1920 failedmerge.append(f)
1921 1921 util.set_exec(self.wjoin(f), flag)
1922 1922 if moddirstate:
1923 1923 if branch_merge:
1924 1924 # We've done a branch merge, mark this file as merged
1925 1925 # so that we properly record the merger later
1926 1926 self.dirstate.update([f], 'm')
1927 1927 else:
1928 1928 # We've update-merged a locally modified file, so
1929 1929 # we set the dirstate to emulate a normal checkout
1930 1930 # of that file some time in the past. Thus our
1931 1931 # merge will appear as a normal local file
1932 1932 # modification.
1933 1933 f_len = len(self.file(f).read(other))
1934 1934 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1935 1935
1936 1936 remove.sort()
1937 1937 for f in remove:
1938 1938 self.ui.note(_("removing %s\n") % f)
1939 1939 util.audit_path(f)
1940 1940 try:
1941 1941 util.unlink(self.wjoin(f))
1942 1942 except OSError, inst:
1943 1943 if inst.errno != errno.ENOENT:
1944 1944 self.ui.warn(_("update failed to remove %s: %s!\n") %
1945 1945 (f, inst.strerror))
1946 1946 if moddirstate:
1947 1947 if branch_merge:
1948 1948 self.dirstate.update(remove, 'r')
1949 1949 else:
1950 1950 self.dirstate.forget(remove)
1951 1951
1952 1952 if moddirstate:
1953 1953 self.dirstate.setparents(p1, p2)
1954 1954
1955 1955 if show_stats:
1956 1956 stats = ((len(get), _("updated")),
1957 1957 (len(merge) - len(failedmerge), _("merged")),
1958 1958 (len(remove), _("removed")),
1959 1959 (len(failedmerge), _("unresolved")))
1960 1960 note = ", ".join([_("%d files %s") % s for s in stats])
1961 1961 self.ui.status("%s\n" % note)
1962 1962 if moddirstate:
1963 1963 if branch_merge:
1964 1964 if failedmerge:
1965 1965 self.ui.status(_("There are unresolved merges,"
1966 1966 " you can redo the full merge using:\n"
1967 1967 " hg update -C %s\n"
1968 1968 " hg merge %s\n"
1969 1969 % (self.changelog.rev(p1),
1970 1970 self.changelog.rev(p2))))
1971 1971 else:
1972 1972 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1973 1973 elif failedmerge:
1974 1974 self.ui.status(_("There are unresolved merges with"
1975 1975 " locally modified files.\n"))
1976 1976
1977 1977 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1978 1978 return err
1979 1979
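The executable-bit merge rule used twice in update() above, `mode = ((a^b) | (a^c)) ^ a`, encodes "if we changed or they changed, the change wins" for a single bit. A worked check of that rule as a standalone function:

```python
def merge_exec_bit(a, b, c):
    # a: ancestor's exec bit, b: ours (working dir), c: theirs (remote).
    # Any side that differs from the ancestor flips the bit; since these
    # are single bits, two conflicting changes both flip to the same value.
    return ((a ^ b) | (a ^ c)) ^ a
```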
1980 1980 def merge3(self, fn, my, other, p1, p2):
1981 1981 """perform a 3-way merge in the working directory"""
1982 1982
1983 1983 def temp(prefix, node):
1984 1984 pre = "%s~%s." % (os.path.basename(fn), prefix)
1985 1985 (fd, name) = tempfile.mkstemp(prefix=pre)
1986 1986 f = os.fdopen(fd, "wb")
1987 1987 self.wwrite(fn, fl.read(node), f)
1988 1988 f.close()
1989 1989 return name
1990 1990
1991 1991 fl = self.file(fn)
1992 1992 base = fl.ancestor(my, other)
1993 1993 a = self.wjoin(fn)
1994 1994 b = temp("base", base)
1995 1995 c = temp("other", other)
1996 1996
1997 1997 self.ui.note(_("resolving %s\n") % fn)
1998 1998 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1999 1999 (fn, short(my), short(other), short(base)))
2000 2000
2001 2001 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2002 2002 or "hgmerge")
2003 2003 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2004 2004 environ={'HG_FILE': fn,
2005 2005 'HG_MY_NODE': p1,
2006 2006 'HG_OTHER_NODE': p2,
2007 2007 'HG_FILE_MY_NODE': hex(my),
2008 2008 'HG_FILE_OTHER_NODE': hex(other),
2009 2009 'HG_FILE_BASE_NODE': hex(base)})
2010 2010 if r:
2011 2011 self.ui.warn(_("merging %s failed!\n") % fn)
2012 2012
2013 2013 os.unlink(b)
2014 2014 os.unlink(c)
2015 2015 return r
2016 2016
2017 2017 def verify(self):
2018 2018 filelinkrevs = {}
2019 2019 filenodes = {}
2020 2020 changesets = revisions = files = 0
2021 2021 errors = [0]
2022 2022 warnings = [0]
2023 2023 neededmanifests = {}
2024 2024
2025 2025 def err(msg):
2026 2026 self.ui.warn(msg + "\n")
2027 2027 errors[0] += 1
2028 2028
2029 2029 def warn(msg):
2030 2030 self.ui.warn(msg + "\n")
2031 2031 warnings[0] += 1
2032 2032
2033 2033 def checksize(obj, name):
2034 2034 d = obj.checksize()
2035 2035 if d[0]:
2036 2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2037 2037 if d[1]:
2038 2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2039 2039
2040 2040 def checkversion(obj, name):
2041 2041 if obj.version != revlog.REVLOGV0:
2042 2042 if not revlogv1:
2043 2043 warn(_("warning: `%s' uses revlog format 1") % name)
2044 2044 elif revlogv1:
2045 2045 warn(_("warning: `%s' uses revlog format 0") % name)
2046 2046
2047 2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2048 2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2049 2049 self.ui.status(_("repository uses revlog format %d\n") %
2050 2050 (revlogv1 and 1 or 0))
2051 2051
2052 2052 seen = {}
2053 2053 self.ui.status(_("checking changesets\n"))
2054 2054 checksize(self.changelog, "changelog")
2055 2055
2056 2056 for i in range(self.changelog.count()):
2057 2057 changesets += 1
2058 2058 n = self.changelog.node(i)
2059 2059 l = self.changelog.linkrev(n)
2060 2060 if l != i:
2061 2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2062 2062 if n in seen:
2063 2063 err(_("duplicate changeset at revision %d") % i)
2064 2064 seen[n] = 1
2065 2065
2066 2066 for p in self.changelog.parents(n):
2067 2067 if p not in self.changelog.nodemap:
2068 2068 err(_("changeset %s has unknown parent %s") %
2069 2069 (short(n), short(p)))
2070 2070 try:
2071 2071 changes = self.changelog.read(n)
2072 2072 except KeyboardInterrupt:
2073 2073 self.ui.warn(_("interrupted"))
2074 2074 raise
2075 2075 except Exception, inst:
2076 2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2077 2077 continue
2078 2078
2079 2079 neededmanifests[changes[0]] = n
2080 2080
2081 2081 for f in changes[3]:
2082 2082 filelinkrevs.setdefault(f, []).append(i)
2083 2083
2084 2084 seen = {}
2085 2085 self.ui.status(_("checking manifests\n"))
2086 2086 checkversion(self.manifest, "manifest")
2087 2087 checksize(self.manifest, "manifest")
2088 2088
2089 2089 for i in range(self.manifest.count()):
2090 2090 n = self.manifest.node(i)
2091 2091 l = self.manifest.linkrev(n)
2092 2092
2093 2093 if l < 0 or l >= self.changelog.count():
2094 2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2095 2095
2096 2096 if n in neededmanifests:
2097 2097 del neededmanifests[n]
2098 2098
2099 2099 if n in seen:
2100 2100 err(_("duplicate manifest at revision %d") % i)
2101 2101
2102 2102 seen[n] = 1
2103 2103
2104 2104 for p in self.manifest.parents(n):
2105 2105 if p not in self.manifest.nodemap:
2106 2106 err(_("manifest %s has unknown parent %s") %
2107 2107 (short(n), short(p)))
2108 2108
2109 2109 try:
2110 2110 delta = mdiff.patchtext(self.manifest.delta(n))
2111 2111 except KeyboardInterrupt:
2112 2112 self.ui.warn(_("interrupted"))
2113 2113 raise
2114 2114 except Exception, inst:
2115 2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2116 2116 continue
2117 2117
2118 2118 try:
2119 2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2120 2120 for f, fn in ff:
2121 2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2122 2122 except (ValueError, TypeError), inst:
2123 2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2124 2124
2125 2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2126 2126
2127 2127 for m, c in neededmanifests.items():
2128 2128 err(_("Changeset %s refers to unknown manifest %s") %
2129 2129 (short(m), short(c)))
2130 2130 del neededmanifests
2131 2131
2132 2132 for f in filenodes:
2133 2133 if f not in filelinkrevs:
2134 2134 err(_("file %s in manifest but not in changesets") % f)
2135 2135
2136 2136 for f in filelinkrevs:
2137 2137 if f not in filenodes:
2138 2138 err(_("file %s in changeset but not in manifest") % f)
2139 2139
2140 2140 self.ui.status(_("checking files\n"))
2141 2141 ff = filenodes.keys()
2142 2142 ff.sort()
2143 2143 for f in ff:
2144 2144 if f == "/dev/null":
2145 2145 continue
2146 2146 files += 1
2147 2147 if not f:
2148 2148 err(_("file without name in manifest %s") % short(n))
2149 2149 continue
2150 2150 fl = self.file(f)
2151 2151 checkversion(fl, f)
2152 2152 checksize(fl, f)
2153 2153
2154 2154 nodes = {nullid: 1}
2155 2155 seen = {}
2156 2156 for i in range(fl.count()):
2157 2157 revisions += 1
2158 2158 n = fl.node(i)
2159 2159
2160 2160 if n in seen:
2161 2161 err(_("%s: duplicate revision %d") % (f, i))
2162 2162 if n not in filenodes[f]:
2163 2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2164 2164 else:
2165 2165 del filenodes[f][n]
2166 2166
2167 2167 flr = fl.linkrev(n)
2168 2168 if flr not in filelinkrevs.get(f, []):
2169 2169 err(_("%s:%s points to unexpected changeset %d")
2170 2170 % (f, short(n), flr))
2171 2171 else:
2172 2172 filelinkrevs[f].remove(flr)
2173 2173
2174 2174 # verify contents
2175 2175 try:
2176 2176 t = fl.read(n)
2177 2177 except KeyboardInterrupt:
2178 2178 self.ui.warn(_("interrupted"))
2179 2179 raise
2180 2180 except Exception, inst:
2181 2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2182 2182
2183 2183 # verify parents
2184 2184 (p1, p2) = fl.parents(n)
2185 2185 if p1 not in nodes:
2186 2186 err(_("file %s:%s unknown parent 1 %s") %
2187 2187 (f, short(n), short(p1)))
2188 2188 if p2 not in nodes:
2189 2189 err(_("file %s:%s unknown parent 2 %s") %
 2190 2190                     (f, short(n), short(p2)))
2191 2191 nodes[n] = 1
2192 2192
2193 2193 # cross-check
2194 2194 for node in filenodes[f]:
2195 2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2196 2196
2197 2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2198 2198 (files, changesets, revisions))
2199 2199
2200 2200 if warnings[0]:
2201 2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2202 2202 if errors[0]:
2203 2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2204 2204 return 1
2205 2205
2206 2206 def stream_in(self, remote):
2207 fp = remote.stream_out()
2208 resp = int(fp.readline())
2209 if resp != 0:
2210 raise util.Abort(_('operation forbidden by server'))
2207 2211 self.ui.status(_('streaming all changes\n'))
2208 fp = remote.stream_out()
2209 2212 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2210 2213 self.ui.status(_('%d files to transfer, %s of data\n') %
2211 2214 (total_files, util.bytecount(total_bytes)))
2212 2215 start = time.time()
2213 2216 for i in xrange(total_files):
2214 2217 name, size = fp.readline().split('\0', 1)
2215 2218 size = int(size)
2216 2219 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2217 2220 ofp = self.opener(name, 'w')
2218 2221 for chunk in util.filechunkiter(fp, limit=size):
2219 2222 ofp.write(chunk)
2220 2223 ofp.close()
2221 2224 elapsed = time.time() - start
2222 2225 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2223 2226 (util.bytecount(total_bytes), elapsed,
2224 2227 util.bytecount(total_bytes / elapsed)))
2225 2228 self.reload()
2226 2229 return len(self.heads()) + 1
2227 2230
2228 2231 def clone(self, remote, heads=[], stream=False):
2229 2232 '''clone remote repository.
2230 2233
2231 2234 keyword arguments:
2232 2235 heads: list of revs to clone (forces use of pull)
2233 pull: force use of pull, even if remote can stream'''
2236 stream: use streaming clone if possible'''
2234 2237
2235 # now, all clients that can stream can read repo formats
2236 # supported by all servers that can stream.
2238 # now, all clients that can request uncompressed clones can
2239 # read repo formats supported by all servers that can serve
2240 # them.
2237 2241
2238 2242 # if revlog format changes, client will have to check version
2239 # and format flags on "stream" capability, and stream only if
2240 # compatible.
2243 # and format flags on "stream" capability, and use
2244 # uncompressed only if compatible.
2241 2245
2242 2246 if stream and not heads and remote.capable('stream'):
2243 2247 return self.stream_in(remote)
2244 2248 return self.pull(remote, heads)
2245 2249
2246 2250 # used to avoid circular references so destructors work
2247 2251 def aftertrans(base):
2248 2252 p = base
2249 2253 def a():
2250 2254 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2251 2255 util.rename(os.path.join(p, "journal.dirstate"),
2252 2256 os.path.join(p, "undo.dirstate"))
2253 2257 return a
2254 2258
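The new `stream_in` handshake above first reads a status line from `remote.stream_out()` before the old "files/bytes" header: `0` means streaming is permitted, anything else aborts with "operation forbidden by server". A minimal client-side sketch of that two-line handshake (the helper name and the captured response are hypothetical, not part of the patch):

```python
import io

def read_stream_header(fp):
    """Parse the stream_in handshake from a file-like object.

    First line: "0" means the server allows streaming; any other
    value means "operation forbidden by server". Second line:
    "<total_files> <total_bytes>", separated by a single space.
    """
    if int(fp.readline()) != 0:
        raise ValueError('operation forbidden by server')
    total_files, total_bytes = map(int, fp.readline().split(' ', 1))
    return total_files, total_bytes

# hypothetical captured response: permitted, 2 files, 1234 bytes total
resp = io.StringIO('0\n2 1234\n')
print(read_stream_header(resp))  # -> (2, 1234)
```

After this header, the client loops over `total_files` entries exactly as before; the only wire change is the leading status line.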
@@ -1,171 +1,173 b''
1 1 # sshserver.py - ssh protocol server support for mercurial
2 2 #
3 3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from demandload import demandload
9 9 from i18n import gettext as _
10 10 from node import *
11 11 demandload(globals(), "os streamclone sys tempfile util")
12 12
13 13 class sshserver(object):
14 14 def __init__(self, ui, repo):
15 15 self.ui = ui
16 16 self.repo = repo
17 17 self.lock = None
18 18 self.fin = sys.stdin
19 19 self.fout = sys.stdout
20 20
21 21 sys.stdout = sys.stderr
22 22
23 23 # Prevent insertion/deletion of CRs
24 24 util.set_binary(self.fin)
25 25 util.set_binary(self.fout)
26 26
27 27 def getarg(self):
28 28 argline = self.fin.readline()[:-1]
29 29 arg, l = argline.split()
30 30 val = self.fin.read(int(l))
31 31 return arg, val
32 32
33 33 def respond(self, v):
34 34 self.fout.write("%d\n" % len(v))
35 35 self.fout.write(v)
36 36 self.fout.flush()
37 37
38 38 def serve_forever(self):
39 39 while self.serve_one(): pass
40 40 sys.exit(0)
41 41
42 42 def serve_one(self):
43 43 cmd = self.fin.readline()[:-1]
44 44 if cmd:
45 45 impl = getattr(self, 'do_' + cmd, None)
46 46 if impl: impl()
47 47 else: self.respond("")
48 48 return cmd != ''
49 49
50 50 def do_heads(self):
51 51 h = self.repo.heads()
52 52 self.respond(" ".join(map(hex, h)) + "\n")
53 53
54 54 def do_hello(self):
55 55 '''the hello command returns a set of lines describing various
56 56 interesting things about the server, in an RFC822-like format.
57 57 Currently the only one defined is "capabilities", which
58 58 consists of a line in the form:
59 59
60 60 capabilities: space separated list of tokens
61 61 '''
62 62
63 r = "capabilities: unbundle stream=%d\n" % (self.repo.revlogversion,)
64 self.respond(r)
63 caps = ['unbundle']
64 if self.ui.configbool('server', 'stream'):
65 caps.append('stream=%d' % self.repo.revlogversion)
66 self.respond("capabilities: %s\n" % (' '.join(caps),))
65 67
66 68 def do_lock(self):
67 69 '''DEPRECATED - allowing remote client to lock repo is not safe'''
68 70
69 71 self.lock = self.repo.lock()
70 72 self.respond("")
71 73
72 74 def do_unlock(self):
73 75 '''DEPRECATED'''
74 76
75 77 if self.lock:
76 78 self.lock.release()
77 79 self.lock = None
78 80 self.respond("")
79 81
80 82 def do_branches(self):
81 83 arg, nodes = self.getarg()
82 84 nodes = map(bin, nodes.split(" "))
83 85 r = []
84 86 for b in self.repo.branches(nodes):
85 87 r.append(" ".join(map(hex, b)) + "\n")
86 88 self.respond("".join(r))
87 89
88 90 def do_between(self):
89 91 arg, pairs = self.getarg()
90 92 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
91 93 r = []
92 94 for b in self.repo.between(pairs):
93 95 r.append(" ".join(map(hex, b)) + "\n")
94 96 self.respond("".join(r))
95 97
96 98 def do_changegroup(self):
97 99 nodes = []
98 100 arg, roots = self.getarg()
99 101 nodes = map(bin, roots.split(" "))
100 102
101 103 cg = self.repo.changegroup(nodes, 'serve')
102 104 while True:
103 105 d = cg.read(4096)
104 106 if not d:
105 107 break
106 108 self.fout.write(d)
107 109
108 110 self.fout.flush()
109 111
110 112 def do_addchangegroup(self):
111 113 '''DEPRECATED'''
112 114
113 115 if not self.lock:
114 116 self.respond("not locked")
115 117 return
116 118
117 119 self.respond("")
118 120 r = self.repo.addchangegroup(self.fin, 'serve')
119 121 self.respond(str(r))
120 122
121 123 def do_unbundle(self):
122 124 their_heads = self.getarg()[1].split()
123 125
124 126 def check_heads():
125 127 heads = map(hex, self.repo.heads())
126 128 return their_heads == [hex('force')] or their_heads == heads
127 129
128 130 # fail early if possible
129 131 if not check_heads():
130 132 self.respond(_('unsynced changes'))
131 133 return
132 134
133 135 self.respond('')
134 136
135 137 # write bundle data to temporary file because it can be big
136 138
137 139 try:
138 140 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
139 141 fp = os.fdopen(fd, 'wb+')
140 142
141 143 count = int(self.fin.readline())
142 144 while count:
143 145 fp.write(self.fin.read(count))
144 146 count = int(self.fin.readline())
145 147
146 148 was_locked = self.lock is not None
147 149 if not was_locked:
148 150 self.lock = self.repo.lock()
149 151 try:
150 152 if not check_heads():
151 153 # someone else committed/pushed/unbundled while we
152 154 # were transferring data
153 155 self.respond(_('unsynced changes'))
154 156 return
155 157 self.respond('')
156 158
157 159 # push can proceed
158 160
159 161 fp.seek(0)
160 162 r = self.repo.addchangegroup(fp, 'serve')
161 163 self.respond(str(r))
162 164 finally:
163 165 if not was_locked:
164 166 self.lock.release()
165 167 self.lock = None
166 168 finally:
167 169 fp.close()
168 170 os.unlink(tempname)
169 171
170 172 def do_stream_out(self):
171 173 streamclone.stream_out(self.repo, self.fout)
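With the `do_hello` change above, the server only appends `stream=<revlogversion>` to the capabilities line when `server.stream` is set, so a client must parse the token list rather than assume `stream` is present. A rough sketch of how such a capabilities line could be parsed (helper name is illustrative, not Mercurial's actual client code):

```python
def parse_capabilities(hello):
    """Parse the RFC822-like hello response into a dict mapping
    capability name to its value (None when the token has no '=')."""
    caps = {}
    for line in hello.splitlines():
        if line.startswith('capabilities:'):
            for tok in line.split(':', 1)[1].split():
                if '=' in tok:
                    name, value = tok.split('=', 1)
                    caps[name] = value
                else:
                    caps[tok] = None
    return caps

# server with streaming enabled for revlog format 0
caps = parse_capabilities('capabilities: unbundle stream=0\n')
print('stream' in caps)  # -> True
```

A client would then stream only if `'stream' in caps` and it can read the advertised revlog version, falling back to pull otherwise.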
@@ -1,82 +1,89 b''
1 1 # streamclone.py - streaming clone server support for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from demandload import demandload
9 9 from i18n import gettext as _
10 10 demandload(globals(), "os stat util")
11 11
12 12 # if server supports streaming clone, it advertises "stream"
13 13 # capability with value that is version+flags of repo it is serving.
14 14 # client only streams if it can read that repo format.
15 15
16 16 def walkrepo(root):
17 17 '''iterate over metadata files in repository.
18 18 walk in natural (sorted) order.
19 19 yields 2-tuples: name of .d or .i file, size of file.'''
20 20
21 21 strip_count = len(root) + len(os.sep)
22 22 def walk(path, recurse):
23 23 ents = os.listdir(path)
24 24 ents.sort()
25 25 for e in ents:
26 26 pe = os.path.join(path, e)
27 27 st = os.lstat(pe)
28 28 if stat.S_ISDIR(st.st_mode):
29 29 if recurse:
30 30 for x in walk(pe, True):
31 31 yield x
32 32 else:
33 33 if not stat.S_ISREG(st.st_mode) or len(e) < 2:
34 34 continue
35 35 sfx = e[-2:]
36 36 if sfx in ('.d', '.i'):
37 37 yield pe[strip_count:], st.st_size
38 38 # write file data first
39 39 for x in walk(os.path.join(root, 'data'), True):
40 40 yield x
41 41 # write manifest before changelog
42 42 meta = list(walk(root, False))
43 43 meta.sort(reverse=True)
44 44 for x in meta:
45 45 yield x
46 46
47 47 # stream file format is simple.
48 48 #
49 49 # server writes out line that says how many files, how many total
50 50 # bytes. separator is ascii space, byte counts are strings.
51 51 #
52 52 # then for each file:
53 53 #
54 54 # server writes out line that says file name, how many bytes in
55 55 # file. separator is ascii nul, byte count is string.
56 56 #
57 57 # server writes out raw file data.
58 58
59 59 def stream_out(repo, fileobj):
60 60 '''stream out all metadata files in repository.
61 61 writes to file-like object, must support write() and optional flush().'''
62
63 if not repo.ui.configbool('server', 'stream'):
64 fileobj.write('1\n')
65 return
66
67 fileobj.write('0\n')
68
62 69 # get consistent snapshot of repo. lock during scan so lock not
63 70 # needed while we stream, and commits can happen.
64 71 lock = repo.lock()
65 72 repo.ui.debug('scanning\n')
66 73 entries = []
67 74 total_bytes = 0
68 75 for name, size in walkrepo(repo.path):
69 76 entries.append((name, size))
70 77 total_bytes += size
71 78 lock.release()
72 79
73 80 repo.ui.debug('%d files, %d bytes to transfer\n' %
74 81 (len(entries), total_bytes))
75 82 fileobj.write('%d %d\n' % (len(entries), total_bytes))
76 83 for name, size in entries:
77 84 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
78 85 fileobj.write('%s\0%d\n' % (name, size))
79 86 for chunk in util.filechunkiter(repo.opener(name), limit=size):
80 87 fileobj.write(chunk)
81 88 flush = getattr(fileobj, 'flush', None)
82 89 if flush: flush()
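The comments in streamclone.py above spell out the wire framing that `stream_out` emits after its new `0`/`1` status line: one "<count> <total-bytes>" header line, then per file a name and size separated by an ASCII NUL, followed by the raw file data. A small generator sketch of that framing (names and in-memory data are illustrative; the real code streams from disk with `util.filechunkiter`):

```python
def frame_stream(entries):
    """Yield the stream-clone framing for (name, data) pairs:
    a '<count> <total>' header line, then for each file a
    '<name>\\0<size>\\n' line followed by the raw data."""
    total = sum(len(data) for _, data in entries)
    yield '%d %d\n' % (len(entries), total)
    for name, data in entries:
        yield '%s\0%d\n' % (name, len(data))
        yield data

out = ''.join(frame_stream([('data/foo.i', 'xyz')]))
```

Since byte counts are decimal strings and separators are a single space or NUL, the format can be consumed line-by-line with a plain `readline()` on the receiving side.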
@@ -1,25 +1,25 b''
1 1 #!/bin/sh
2 2
3 mkdir test
3 hg init test
4 4 cd test
5 5 echo foo>foo
6 hg init
7 hg addremove
8 hg commit -m 1
9 hg verify
10 hg serve -p 20059 -d --pid-file=hg.pid
11 cat hg.pid >> $DAEMON_PIDS
6 hg commit -A -d '0 0' -m 1
7 hg --config server.stream=True serve -p 20059 -d --pid-file=hg1.pid
8 cat hg1.pid >> $DAEMON_PIDS
9 hg serve -p 20060 -d --pid-file=hg2.pid
10 cat hg2.pid >> $DAEMON_PIDS
12 11 cd ..
13 12
14 13 echo % clone via stream
15 http_proxy= hg clone --stream http://localhost:20059/ copy 2>&1 | \
14 http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
16 15 sed -e 's/[0-9][0-9.]*/XXX/g'
17 16 cd copy
18 17 hg verify
19 18
20 cd ..
19 echo % try to clone via stream, should use pull instead
20 http_proxy= hg clone --uncompressed http://localhost:20060/ copy2
21 21
22 22 echo % clone via pull
23 23 http_proxy= hg clone http://localhost:20059/ copy-pull
24 24 cd copy-pull
25 25 hg verify
@@ -1,41 +1,41 b''
1 1 #!/bin/sh
2 2
3 3 hg init a
4 4 cd a
5 5 echo a > a
6 6 hg ci -Ama -d '1123456789 0'
7 hg serve -p 20059 -d --pid-file=hg.pid
7 hg --config server.stream=True serve -p 20059 -d --pid-file=hg.pid
8 8 cat hg.pid >> $DAEMON_PIDS
9 9
10 10 cd ..
11 11 ("$TESTDIR/tinyproxy.py" 20060 localhost >proxy.log 2>&1 </dev/null &
12 12 echo $! > proxy.pid)
13 13 cat proxy.pid >> $DAEMON_PIDS
14 14 sleep 2
15 15
16 16 echo %% url for proxy, stream
17 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --stream http://localhost:20059/ b | \
17 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --uncompressed http://localhost:20059/ b | \
18 18 sed -e 's/[0-9][0-9.]*/XXX/g'
19 19 cd b
20 20 hg verify
21 21 cd ..
22 22
23 23 echo %% url for proxy, pull
24 24 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone http://localhost:20059/ b-pull
25 25 cd b-pull
26 26 hg verify
27 27 cd ..
28 28
29 29 echo %% host:port for proxy
30 30 http_proxy=localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ c
31 31
32 32 echo %% proxy url with user name and password
33 33 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ d
34 34
35 35 echo %% url with user name and password
36 36 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://user:passwd@localhost:20059/ e
37 37
38 38 echo %% bad host:port for proxy
39 39 http_proxy=localhost:20061 hg clone --config http_proxy.always=True http://localhost:20059/ f
40 40
41 41 exit 0
@@ -1,29 +1,30 b''
1 (the addremove command is deprecated; use add and remove --after instead)
2 1 adding foo
3 checking changesets
4 checking manifests
5 crosschecking files in changesets and manifests
6 checking files
7 1 files, 1 changesets, 1 total revisions
8 2 % clone via stream
9 3 streaming all changes
10 4 XXX files to transfer, XXX bytes of data
11 5 transferred XXX bytes in XXX seconds (XXX KB/sec)
12 6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
13 7 checking changesets
14 8 checking manifests
15 9 crosschecking files in changesets and manifests
16 10 checking files
17 11 1 files, 1 changesets, 1 total revisions
12 % try to clone via stream, should use pull instead
13 requesting all changes
14 adding changesets
15 adding manifests
16 adding file changes
17 added 1 changesets with 1 changes to 1 files
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
18 19 % clone via pull
19 20 requesting all changes
20 21 adding changesets
21 22 adding manifests
22 23 adding file changes
23 24 added 1 changesets with 1 changes to 1 files
24 25 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
25 26 checking changesets
26 27 checking manifests
27 28 crosschecking files in changesets and manifests
28 29 checking files
29 30 1 files, 1 changesets, 1 total revisions
@@ -1,90 +1,92 b''
1 1 #!/bin/sh
2 2
3 3 # This test tries to exercise the ssh functionality with a dummy script
4 4
5 5 cat <<'EOF' > dummyssh
6 6 #!/bin/sh
7 7 # this attempts to deal with relative pathnames
8 8 cd `dirname $0`
9 9
10 10 # check for proper args
11 11 if [ $1 != "user@dummy" ] ; then
12 12 exit -1
13 13 fi
14 14
15 15 # check that we're in the right directory
16 16 if [ ! -x dummyssh ] ; then
17 17 exit -1
18 18 fi
19 19
20 20 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
21 21 $2
22 22 EOF
23 23 chmod +x dummyssh
24 24
25 25 echo "# creating 'remote'"
26 26 hg init remote
27 27 cd remote
28 28 echo this > foo
29 29 hg ci -A -m "init" -d "1000000 0" foo
30 echo '[server]' > .hg/hgrc
31 echo 'stream = True' >> .hg/hgrc
30 32
31 33 cd ..
32 34
33 35 echo "# clone remote via stream"
34 hg clone -e ./dummyssh --stream ssh://user@dummy/remote local-stream 2>&1 | \
36 hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
35 37 sed -e 's/[0-9][0-9.]*/XXX/g'
36 38 cd local-stream
37 39 hg verify
38 40 cd ..
39 41
40 42 echo "# clone remote via pull"
41 43 hg clone -e ./dummyssh ssh://user@dummy/remote local
42 44
43 45 echo "# verify"
44 46 cd local
45 47 hg verify
46 48
47 49 echo "# empty default pull"
48 50 hg paths
49 51 hg pull -e ../dummyssh
50 52
51 53 echo "# local change"
52 54 echo bleah > foo
53 55 hg ci -m "add" -d "1000000 0"
54 56
55 57 echo "# updating rc"
56 58 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
57 59 echo "[ui]" >> .hg/hgrc
58 60 echo "ssh = ../dummyssh" >> .hg/hgrc
59 61
60 62 echo "# find outgoing"
61 63 hg out ssh://user@dummy/remote
62 64
63 65 echo "# find incoming on the remote side"
64 66 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
65 67
66 68 echo "# push"
67 69 hg push
68 70
69 71 cd ../remote
70 72
71 73 echo "# check remote tip"
72 74 hg tip
73 75 hg verify
74 76 hg cat foo
75 77
76 78 echo z > z
77 79 hg ci -A -m z -d '1000001 0' z
78 80
79 81 cd ../local
80 82 echo r > r
81 83 hg ci -A -m z -d '1000002 0' r
82 84
83 85 echo "# push should fail"
84 86 hg push
85 87
86 88 echo "# push should succeed"
87 89 hg push -f
88 90
89 91 cd ..
90 92 cat dummylog