clone: disable stream support on server side by default....
Vadim Gelfer
r2621:5a5852a4 default
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, exactly one of these files is used, depending
  on whether the HOME environment variable is defined.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

    [spam]
    eggs=ham
    green=
       eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, use as name of program to run that
    acts like sendmail (takes "-f" option for sender, list of
    recipients on command line, message on stdin). Normally, setting
    this to "sendmail" or "/usr/sbin/sendmail" is enough to use
    sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give useful
  additional information. For each hook below, the environment
  variables it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull, push
    (outbound) and bundle commands, but is not effective at
    preventing them, since the repository files can simply be copied
    instead. Source of operation is in $HG_SOURCE. If "serve", the
    operation is happening on behalf of a remote ssh or http
    repository. If "push", "pull" or "bundle", the operation is
    happening on behalf of a repository on the same system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail.
  pretxncommit;;
    Run after a changeset has been created but the transaction not yet
    committed. Changeset is visible to hook program. This lets you
    validate commit message and changes. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the transaction to
    be rolled back. ID of changeset is in $HG_NODE. Parent changeset
    IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
    failed (e.g. because conflicts not resolved), $HG_ERROR=1.

  Note: In earlier releases, the names of hook environment variables
  did not have a "HG_" prefix. The old unprefixed names are no longer
  provided in the environment.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  A Python hook must return a "true" value to succeed. Returning a
  "false" value or raising an exception is treated as failure of the
  hook.
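
  As an illustrative sketch only (the module name "myhooks", the
  function name, and the tag-naming policy below are invented for this
  example, not part of Mercurial), a "pretag" hook referenced from an
  hgrc as "pretag.notmp = python:myhooks.forbid_tmp_tags" could be
  written as:

```python
# Hypothetical pretag hook. Mercurial calls Python hooks in-process
# with the keyword arguments described above ("ui", "repo",
# "hooktype"), plus this hook's environment variables lowercased and
# without the "HG_" prefix -- here "node", "tag" and "local".
def forbid_tmp_tags(ui=None, repo=None, hooktype=None,
                    node=None, tag=None, local=None, **kwargs):
    # Refuse any tag whose name starts with "tmp"; returning a
    # false value marks the hook as failed and aborts the tag.
    if tag is not None and tag.startswith("tmp"):
        return False
    # A true return value lets the tag proceed.
    return True
```

  The same return-value convention applies to the other "pre*" hooks.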

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.
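
  Proxy example (the host names and credentials below are made-up
  placeholders):

    [http_proxy]
    host = myproxy:8000
    no = localhost,hg.internal.example.com
    user = proxyuser
    passwd = proxypass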

smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Optional. Host name of mail server. Default: "mail".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. Hostname the sender can use to identify itself to the
    MTA.
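
  SMTP example (all values below are illustrative placeholders):

    [smtp]
    host = smtp.example.com
    port = 25
    tls = False
    username = hguser
    password = hgpass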

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    By default, this is set to the repository from which the current
    repository was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.
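
  Paths example (the names and locations below are made up); with
  these entries, "hg pull backup" pulls from the path named "backup":

    [paths]
    default = http://hg.example.com/main
    default-push = ssh://hg.example.com/main
    backup = /mnt/backup/main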

server::
  Controls generic server settings.
  stream;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.
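
  For example, a server on a fast LAN could re-enable uncompressed
  streaming clones like this:

    [server]
    stream = True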

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  ignore;;
    A file to read per-user ignore patterns from. This file should be
    in the same format as a repository-wide .hgignore file. This
    option supports hook syntax, so if you want to specify multiple
    ignore files, you can do so by setting something like
    "ignore.other = ~/.hgignore2". For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Whether to allow prompting the user. True or False. Default is
    True.
  logtemplate;;
    Template string for commands that print changesets.
  style;;
    Name of style to use for command output.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is
    False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is
    "hg".
  ssh;;
    Command to use for SSH connections. Default is "ssh".
  timeout;;
    The timeout used when a lock is held (in seconds); a negative
    value means no timeout. Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname,
    unless username is set to an empty string, which forces the
    username to be specified manually.
  verbose;;
    Increase the amount of output printed. True or False. Default is
    False.
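
  An example per-user "[ui]" section (the values are illustrative):

    [ui]
    username = Fred Widget <fred@example.com>
    editor = vi
    merge = hgmerge
    verbose = True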
368
380
369
381
370 web::
382 web::
371 Web interface configuration.
383 Web interface configuration.
372 accesslog;;
384 accesslog;;
373 Where to output the access log. Default is stdout.
385 Where to output the access log. Default is stdout.
374 address;;
386 address;;
375 Interface address to bind to. Default is all.
387 Interface address to bind to. Default is all.
376 allow_archive;;
388 allow_archive;;
377 List of archive format (bz2, gz, zip) allowed for downloading.
389 List of archive format (bz2, gz, zip) allowed for downloading.
378 Default is empty.
390 Default is empty.
379 allowbz2;;
391 allowbz2;;
380 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
392 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
381 Default is false.
393 Default is false.
382 allowgz;;
394 allowgz;;
383 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
395 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
384 Default is false.
396 Default is false.
385 allowpull;;
397 allowpull;;
386 Whether to allow pulling from the repository. Default is true.
398 Whether to allow pulling from the repository. Default is true.
387 allow_push;;
399 allow_push;;
388 Whether to allow pushing to the repository. If empty or not set,
400 Whether to allow pushing to the repository. If empty or not set,
389 push is not allowed. If the special value "*", any remote user
401 push is not allowed. If the special value "*", any remote user
390 can push, including unauthenticated users. Otherwise, the remote
402 can push, including unauthenticated users. Otherwise, the remote
391 user must have been authenticated, and the authenticated user name
403 user must have been authenticated, and the authenticated user name
392 must be present in this list (separated by whitespace or ",").
404 must be present in this list (separated by whitespace or ",").
393 The contents of the allow_push list are examined after the
405 The contents of the allow_push list are examined after the
394 deny_push list.
406 deny_push list.
395 allowzip;;
407 allowzip;;
396 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
408 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
397 Default is false. This feature creates temporary files.
409 Default is false. This feature creates temporary files.
398 baseurl;;
410 baseurl;;
399 Base URL to use when publishing URLs in other locations, so
411 Base URL to use when publishing URLs in other locations, so
400 third-party tools like email notification hooks can construct URLs.
412 third-party tools like email notification hooks can construct URLs.
401 Example: "http://hgserver/repos/"
413 Example: "http://hgserver/repos/"
402 contact;;
414 contact;;
403 Name or email address of the person in charge of the repository.
415 Name or email address of the person in charge of the repository.
404 Default is "unknown".
416 Default is "unknown".
405 deny_push;;
417 deny_push;;
406 Whether to deny pushing to the repository. If empty or not set,
418 Whether to deny pushing to the repository. If empty or not set,
407 push is not denied. If set to the special value "*", all remote users
419 push is not denied. If set to the special value "*", all remote users
408 are denied push. Otherwise, unauthenticated users are all denied,
420 are denied push. Otherwise, unauthenticated users are all denied,
409 and any authenticated user name present in this list (separated by
421 and any authenticated user name present in this list (separated by
410 whitespace or ",") is also denied. The contents of the deny_push
422 whitespace or ",") is also denied. The contents of the deny_push
411 list are examined before the allow_push list.
423 list are examined before the allow_push list.
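The interaction of deny_push and allow_push described above can be sketched as a small predicate. This is an illustration of the documented rules only, not Mercurial's actual implementation; the function and parameter names are hypothetical:

```python
import re

def may_push(user, deny_push, allow_push):
    """Decide whether `user` (None if unauthenticated) may push.

    deny_push and allow_push are raw config values: None/"" if unset,
    "*" for everyone, or a list separated by whitespace or ",".
    Per the documentation, the deny list is examined first.
    """
    def parse(value):
        return [u for u in re.split(r'[\s,]+', value) if u] if value else []

    # deny_push is checked first
    if deny_push == "*":
        return False
    if deny_push and (user is None or user in parse(deny_push)):
        return False
    # then allow_push; empty or unset means push is not allowed
    if allow_push == "*":
        return True
    return bool(allow_push) and user is not None and user in parse(allow_push)
```

Note that a user listed in both lists is denied, since the deny list wins by being examined first.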
412 description;;
424 description;;
413 Textual description of the repository's purpose or contents.
425 Textual description of the repository's purpose or contents.
414 Default is "unknown".
426 Default is "unknown".
415 errorlog;;
427 errorlog;;
416 Where to output the error log. Default is stderr.
428 Where to output the error log. Default is stderr.
417 ipv6;;
429 ipv6;;
418 Whether to use IPv6. Default is false.
430 Whether to use IPv6. Default is false.
419 name;;
431 name;;
420 Repository name to use in the web interface. Default is current
432 Repository name to use in the web interface. Default is current
421 working directory.
433 working directory.
422 maxchanges;;
434 maxchanges;;
423 Maximum number of changes to list on the changelog. Default is 10.
435 Maximum number of changes to list on the changelog. Default is 10.
424 maxfiles;;
436 maxfiles;;
425 Maximum number of files to list per changeset. Default is 10.
437 Maximum number of files to list per changeset. Default is 10.
426 port;;
438 port;;
427 Port to listen on. Default is 8000.
439 Port to listen on. Default is 8000.
428 push_ssl;;
440 push_ssl;;
429 Whether to require that inbound pushes be transported over SSL to
441 Whether to require that inbound pushes be transported over SSL to
430 prevent password sniffing. Default is true.
442 prevent password sniffing. Default is true.
431 style;;
443 style;;
432 Which template map style to use.
444 Which template map style to use.
433 templates;;
445 templates;;
434 Where to find the HTML templates. Default is install path.
446 Where to find the HTML templates. Default is install path.
435
447
436
448
437 AUTHOR
449 AUTHOR
438 ------
450 ------
439 Bryan O'Sullivan <bos@serpentine.com>.
451 Bryan O'Sullivan <bos@serpentine.com>.
440
452
441 Mercurial was written by Matt Mackall <mpm@selenic.com>.
453 Mercurial was written by Matt Mackall <mpm@selenic.com>.
442
454
443 SEE ALSO
455 SEE ALSO
444 --------
456 --------
445 hg(1), hgignore(5)
457 hg(1), hgignore(5)
446
458
447 COPYING
459 COPYING
448 -------
460 -------
449 This manual page is copyright 2005 Bryan O'Sullivan.
461 This manual page is copyright 2005 Bryan O'Sullivan.
450 Mercurial is copyright 2005, 2006 Matt Mackall.
462 Mercurial is copyright 2005, 2006 Matt Mackall.
451 Free use of this software is granted under the terms of the GNU General
463 Free use of this software is granted under the terms of the GNU General
452 Public License (GPL).
464 Public License (GPL).
@@ -1,208 +1,209 b''
1 # hg.py - repository classes for mercurial
1 # hg.py - repository classes for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from repo import *
9 from repo import *
10 from demandload import *
10 from demandload import *
11 from i18n import gettext as _
11 from i18n import gettext as _
12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
13 demandload(globals(), "errno lock os shutil util")
13 demandload(globals(), "errno lock os shutil util")
14
14
15 def bundle(ui, path):
15 def bundle(ui, path):
16 if path.startswith('bundle://'):
16 if path.startswith('bundle://'):
17 path = path[9:]
17 path = path[9:]
18 else:
18 else:
19 path = path[7:]
19 path = path[7:]
20 s = path.split("+", 1)
20 s = path.split("+", 1)
21 if len(s) == 1:
21 if len(s) == 1:
22 repopath, bundlename = "", s[0]
22 repopath, bundlename = "", s[0]
23 else:
23 else:
24 repopath, bundlename = s
24 repopath, bundlename = s
25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
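The bundle() helper above strips the URL scheme and splits an optional repository path from the bundle file name on the first "+". That parsing, restated as a standalone sketch (the function name is hypothetical):

```python
def parse_bundle_url(path):
    """Split a bundle URL into (repopath, bundlename).

    Accepts "bundle://repopath+bundlename" or "bundle:bundlename";
    when there is no "+", the repository path defaults to "".
    """
    if path.startswith('bundle://'):
        path = path[9:]      # strip "bundle://"
    elif path.startswith('bundle:'):
        path = path[7:]      # strip "bundle:"
    s = path.split("+", 1)   # only the first "+" separates the two parts
    if len(s) == 1:
        return "", s[0]
    return s[0], s[1]
```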
26
26
27 def hg(ui, path):
27 def hg(ui, path):
28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
30
30
31 def local_(ui, path, create=0):
31 def local_(ui, path, create=0):
32 if path.startswith('file:'):
32 if path.startswith('file:'):
33 path = path[5:]
33 path = path[5:]
34 return localrepo.localrepository(ui, path, create)
34 return localrepo.localrepository(ui, path, create)
35
35
36 def ssh_(ui, path, create=0):
36 def ssh_(ui, path, create=0):
37 return sshrepo.sshrepository(ui, path, create)
37 return sshrepo.sshrepository(ui, path, create)
38
38
39 def old_http(ui, path):
39 def old_http(ui, path):
40 ui.warn(_("old-http:// syntax is deprecated, "
40 ui.warn(_("old-http:// syntax is deprecated, "
41 "please use static-http:// instead\n"))
41 "please use static-http:// instead\n"))
42 return statichttprepo.statichttprepository(
42 return statichttprepo.statichttprepository(
43 ui, path.replace("old-http://", "http://"))
43 ui, path.replace("old-http://", "http://"))
44
44
45 def static_http(ui, path):
45 def static_http(ui, path):
46 return statichttprepo.statichttprepository(
46 return statichttprepo.statichttprepository(
47 ui, path.replace("static-http://", "http://"))
47 ui, path.replace("static-http://", "http://"))
48
48
49 schemes = {
49 schemes = {
50 'bundle': bundle,
50 'bundle': bundle,
51 'file': local_,
51 'file': local_,
52 'hg': hg,
52 'hg': hg,
53 'http': lambda ui, path: httprepo.httprepository(ui, path),
53 'http': lambda ui, path: httprepo.httprepository(ui, path),
54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
55 'old-http': old_http,
55 'old-http': old_http,
56 'ssh': ssh_,
56 'ssh': ssh_,
57 'static-http': static_http,
57 'static-http': static_http,
58 }
58 }
59
59
60 def repository(ui, path=None, create=0):
60 def repository(ui, path=None, create=0):
61 scheme = None
61 scheme = None
62 if path:
62 if path:
63 c = path.find(':')
63 c = path.find(':')
64 if c > 0:
64 if c > 0:
65 scheme = schemes.get(path[:c])
65 scheme = schemes.get(path[:c])
66 else:
66 else:
67 path = ''
67 path = ''
68 ctor = scheme or schemes['file']
68 ctor = scheme or schemes['file']
69 if create:
69 if create:
70 try:
70 try:
71 return ctor(ui, path, create)
71 return ctor(ui, path, create)
72 except TypeError:
72 except TypeError:
73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
74 scheme)
74 scheme)
75 return ctor(ui, path)
75 return ctor(ui, path)
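repository() above dispatches on the text before the first ":" via the schemes table, falling back to local "file" access for unknown schemes and plain paths. The same pattern in miniature, with placeholder handlers standing in for the real repository constructors:

```python
def dispatch(path, schemes, default='file'):
    """Return (handler, path) for a URL-ish path.

    Anything before the first ":" is treated as a scheme name; unknown
    schemes and plain local paths fall through to the default handler.
    Requiring find(':') > 0 also keeps one-letter Windows drive prefixes
    from shadowing real schemes only when they happen to be registered.
    """
    scheme = None
    if path:
        c = path.find(':')
        if c > 0:
            scheme = schemes.get(path[:c])
    else:
        path = ''
    return (scheme or schemes[default]), path

handlers = {
    'file': lambda p: ('local', p),   # placeholder handlers
    'http': lambda p: ('http', p),
}
```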
76
76
77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
78 stream=False):
78 stream=False):
79 """Make a copy of an existing repository.
79 """Make a copy of an existing repository.
80
80
81 Create a copy of an existing repository in a new directory. The
81 Create a copy of an existing repository in a new directory. The
82 source and destination are URLs, as passed to the repository
82 source and destination are URLs, as passed to the repository
83 function. Returns a pair of repository objects, the source and
83 function. Returns a pair of repository objects, the source and
84 newly created destination.
84 newly created destination.
85
85
86 The location of the source is added to the new repository's
86 The location of the source is added to the new repository's
87 .hg/hgrc file, as the default to be used for future pulls and
87 .hg/hgrc file, as the default to be used for future pulls and
88 pushes.
88 pushes.
89
89
90 If an exception is raised, the partly cloned/updated destination
90 If an exception is raised, the partly cloned/updated destination
91 repository will be deleted.
91 repository will be deleted.
92
92
93 Keyword arguments:
93 Keyword arguments:
94
94
95 dest: URL of destination repository to create (defaults to base
95 dest: URL of destination repository to create (defaults to base
96 name of source repository)
96 name of source repository)
97
97
98 pull: always pull from source repository, even in local case
98 pull: always pull from source repository, even in local case
99
99
100 stream: stream from repository (fast over LAN, slow over WAN)
100 stream: stream raw data uncompressed from repository (fast over
101 LAN, slow over WAN)
101
102
102 rev: revision to clone up to (implies pull=True)
103 rev: revision to clone up to (implies pull=True)
103
104
104 update: update working directory after clone completes, if
105 update: update working directory after clone completes, if
105 destination is local repository
106 destination is local repository
106 """
107 """
107 if dest is None:
108 if dest is None:
108 dest = os.path.basename(os.path.normpath(source))
109 dest = os.path.basename(os.path.normpath(source))
109
110
110 if os.path.exists(dest):
111 if os.path.exists(dest):
111 raise util.Abort(_("destination '%s' already exists") % dest)
112 raise util.Abort(_("destination '%s' already exists") % dest)
112
113
113 class DirCleanup(object):
114 class DirCleanup(object):
114 def __init__(self, dir_):
115 def __init__(self, dir_):
115 self.rmtree = shutil.rmtree
116 self.rmtree = shutil.rmtree
116 self.dir_ = dir_
117 self.dir_ = dir_
117 def close(self):
118 def close(self):
118 self.dir_ = None
119 self.dir_ = None
119 def __del__(self):
120 def __del__(self):
120 if self.dir_:
121 if self.dir_:
121 self.rmtree(self.dir_, True)
122 self.rmtree(self.dir_, True)
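DirCleanup above is a disarmable destructor: the partly cloned destination is removed when the object is collected, unless close() is reached on the success path. A minimal re-creation of the pattern (variable names are illustrative; calling __del__ directly just makes the demonstration deterministic):

```python
import os
import shutil
import tempfile

class DirCleanup(object):
    """Remove dir_ on collection unless close() disarms the guard."""
    def __init__(self, dir_):
        # bind rmtree locally: module globals may already be torn down
        # when __del__ runs during interpreter shutdown
        self.rmtree = shutil.rmtree
        self.dir_ = dir_
    def close(self):
        self.dir_ = None
    def __del__(self):
        if self.dir_:
            self.rmtree(self.dir_, True)

# failure path: guard never disarmed, directory is removed
d = tempfile.mkdtemp()
guard = DirCleanup(d)
guard.__del__()
removed_early = not os.path.exists(d)

# success path: close() disarms the guard, directory survives
d2 = tempfile.mkdtemp()
guard2 = DirCleanup(d2)
guard2.close()
guard2.__del__()
kept = os.path.exists(d2)
shutil.rmtree(d2)  # tidy up after the demonstration
```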
122
123
123 src_repo = repository(ui, source)
124 src_repo = repository(ui, source)
124
125
125 dest_repo = None
126 dest_repo = None
126 try:
127 try:
127 dest_repo = repository(ui, dest)
128 dest_repo = repository(ui, dest)
128 raise util.Abort(_("destination '%s' already exists.") % dest)
129 raise util.Abort(_("destination '%s' already exists.") % dest)

129 except RepoError:
130 except RepoError:
130 dest_repo = repository(ui, dest, create=True)
131 dest_repo = repository(ui, dest, create=True)
131
132
132 dest_path = None
133 dest_path = None
133 dir_cleanup = None
134 dir_cleanup = None
134 if dest_repo.local():
135 if dest_repo.local():
135 dest_path = os.path.realpath(dest)
136 dest_path = os.path.realpath(dest)
136 dir_cleanup = DirCleanup(dest_path)
137 dir_cleanup = DirCleanup(dest_path)
137
138
138 abspath = source
139 abspath = source
139 copy = False
140 copy = False
140 if src_repo.local() and dest_repo.local():
141 if src_repo.local() and dest_repo.local():
141 abspath = os.path.abspath(source)
142 abspath = os.path.abspath(source)
142 copy = not pull and not rev
143 copy = not pull and not rev
143
144
144 src_lock, dest_lock = None, None
145 src_lock, dest_lock = None, None
145 if copy:
146 if copy:
146 try:
147 try:
147 # we use a lock here because if we race with commit, we
148 # we use a lock here because if we race with commit, we
148 # can end up with extra data in the cloned revlogs that's
149 # can end up with extra data in the cloned revlogs that's
149 # not pointed to by changesets, thus causing verify to
150 # not pointed to by changesets, thus causing verify to
150 # fail
151 # fail
151 src_lock = src_repo.lock()
152 src_lock = src_repo.lock()
152 except lock.LockException:
153 except lock.LockException:
153 copy = False
154 copy = False
154
155
155 if copy:
156 if copy:
156 # we lock here to avoid premature writing to the target
157 # we lock here to avoid premature writing to the target
157 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158
159
159 # we need to remove the (empty) data dir in dest so copyfiles
160 # we need to remove the (empty) data dir in dest so copyfiles
160 # can do its work
161 # can do its work
161 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 for f in files.split():
164 for f in files.split():
164 src = os.path.join(source, ".hg", f)
165 src = os.path.join(source, ".hg", f)
165 dst = os.path.join(dest_path, ".hg", f)
166 dst = os.path.join(dest_path, ".hg", f)
166 try:
167 try:
167 util.copyfiles(src, dst)
168 util.copyfiles(src, dst)
168 except OSError, inst:
169 except OSError, inst:
169 if inst.errno != errno.ENOENT:
170 if inst.errno != errno.ENOENT:
170 raise
171 raise
171
172
172 # we need to re-init the repo after manually copying the data
173 # we need to re-init the repo after manually copying the data
173 # into it
174 # into it
174 dest_repo = repository(ui, dest)
175 dest_repo = repository(ui, dest)
175
176
176 else:
177 else:
177 revs = None
178 revs = None
178 if rev:
179 if rev:
179 if not src_repo.local():
180 if not src_repo.local():
180 raise util.Abort(_("clone by revision not supported yet "
181 raise util.Abort(_("clone by revision not supported yet "
181 "for remote repositories"))
182 "for remote repositories"))
182 revs = [src_repo.lookup(r) for r in rev]
183 revs = [src_repo.lookup(r) for r in rev]
183
184
184 if dest_repo.local():
185 if dest_repo.local():
185 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 elif src_repo.local():
187 elif src_repo.local():
187 src_repo.push(dest_repo, revs=revs)
188 src_repo.push(dest_repo, revs=revs)
188 else:
189 else:
189 raise util.Abort(_("clone from remote to remote not supported"))
190 raise util.Abort(_("clone from remote to remote not supported"))
190
191
191 if src_lock:
192 if src_lock:
192 src_lock.release()
193 src_lock.release()
193
194
194 if dest_repo.local():
195 if dest_repo.local():
195 fp = dest_repo.opener("hgrc", "w", text=True)
196 fp = dest_repo.opener("hgrc", "w", text=True)
196 fp.write("[paths]\n")
197 fp.write("[paths]\n")
197 fp.write("default = %s\n" % abspath)
198 fp.write("default = %s\n" % abspath)
198 fp.close()
199 fp.close()
199
200
200 if dest_lock:
201 if dest_lock:
201 dest_lock.release()
202 dest_lock.release()
202
203
203 if update:
204 if update:
204 dest_repo.update(dest_repo.changelog.tip())
205 dest_repo.update(dest_repo.changelog.tip())
205 if dir_cleanup:
206 if dir_cleanup:
206 dir_cleanup.close()
207 dir_cleanup.close()
207
208
208 return src_repo, dest_repo
209 return src_repo, dest_repo
@@ -1,957 +1,960 b''
1 # hgweb/hgweb_mod.py - Web interface for a repository.
1 # hgweb/hgweb_mod.py - Web interface for a repository.
2 #
2 #
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 #
5 #
6 # This software may be used and distributed according to the terms
6 # This software may be used and distributed according to the terms
7 # of the GNU General Public License, incorporated herein by reference.
7 # of the GNU General Public License, incorporated herein by reference.
8
8
9 import os
9 import os
10 import os.path
10 import os.path
11 import mimetypes
11 import mimetypes
12 from mercurial.demandload import demandload
12 from mercurial.demandload import demandload
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
15 demandload(globals(), "mercurial:templater")
15 demandload(globals(), "mercurial:templater")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
17 from mercurial.node import *
17 from mercurial.node import *
18 from mercurial.i18n import gettext as _
18 from mercurial.i18n import gettext as _
19
19
20 def _up(p):
20 def _up(p):
21 if p[0] != "/":
21 if p[0] != "/":
22 p = "/" + p
22 p = "/" + p
23 if p[-1] == "/":
23 if p[-1] == "/":
24 p = p[:-1]
24 p = p[:-1]
25 up = os.path.dirname(p)
25 up = os.path.dirname(p)
26 if up == "/":
26 if up == "/":
27 return "/"
27 return "/"
28 return up + "/"
28 return up + "/"
29
29
30 class hgweb(object):
30 class hgweb(object):
31 def __init__(self, repo, name=None):
31 def __init__(self, repo, name=None):
32 if type(repo) == type(""):
32 if type(repo) == type(""):
33 self.repo = hg.repository(ui.ui(), repo)
33 self.repo = hg.repository(ui.ui(), repo)
34 else:
34 else:
35 self.repo = repo
35 self.repo = repo
36
36
37 self.mtime = -1
37 self.mtime = -1
38 self.reponame = name
38 self.reponame = name
39 self.archives = 'zip', 'gz', 'bz2'
39 self.archives = 'zip', 'gz', 'bz2'
40 self.templatepath = self.repo.ui.config("web", "templates",
40 self.templatepath = self.repo.ui.config("web", "templates",
41 templater.templatepath())
41 templater.templatepath())
42
42
43 def refresh(self):
43 def refresh(self):
44 mtime = get_mtime(self.repo.root)
44 mtime = get_mtime(self.repo.root)
45 if mtime != self.mtime:
45 if mtime != self.mtime:
46 self.mtime = mtime
46 self.mtime = mtime
47 self.repo = hg.repository(self.repo.ui, self.repo.root)
47 self.repo = hg.repository(self.repo.ui, self.repo.root)
48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
51
51
52 def archivelist(self, nodeid):
52 def archivelist(self, nodeid):
53 allowed = self.repo.ui.configlist("web", "allow_archive")
53 allowed = self.repo.ui.configlist("web", "allow_archive")
54 for i in self.archives:
54 for i in self.archives:
55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
56 yield {"type" : i, "node" : nodeid, "url": ""}
56 yield {"type" : i, "node" : nodeid, "url": ""}
57
57
58 def listfiles(self, files, mf):
58 def listfiles(self, files, mf):
59 for f in files[:self.maxfiles]:
59 for f in files[:self.maxfiles]:
60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
61 if len(files) > self.maxfiles:
61 if len(files) > self.maxfiles:
62 yield self.t("fileellipses")
62 yield self.t("fileellipses")
63
63
64 def listfilediffs(self, files, changeset):
64 def listfilediffs(self, files, changeset):
65 for f in files[:self.maxfiles]:
65 for f in files[:self.maxfiles]:
66 yield self.t("filedifflink", node=hex(changeset), file=f)
66 yield self.t("filedifflink", node=hex(changeset), file=f)
67 if len(files) > self.maxfiles:
67 if len(files) > self.maxfiles:
68 yield self.t("fileellipses")
68 yield self.t("fileellipses")
69
69
70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
71 if not rev:
71 if not rev:
72 rev = lambda x: ""
72 rev = lambda x: ""
73 siblings = [s for s in siblings if s != nullid]
73 siblings = [s for s in siblings if s != nullid]
74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
75 return
75 return
76 for s in siblings:
76 for s in siblings:
77 yield dict(node=hex(s), rev=rev(s), **args)
77 yield dict(node=hex(s), rev=rev(s), **args)
78
78
79 def renamelink(self, fl, node):
79 def renamelink(self, fl, node):
80 r = fl.renamed(node)
80 r = fl.renamed(node)
81 if r:
81 if r:
82 return [dict(file=r[0], node=hex(r[1]))]
82 return [dict(file=r[0], node=hex(r[1]))]
83 return []
83 return []
84
84
85 def showtag(self, t1, node=nullid, **args):
85 def showtag(self, t1, node=nullid, **args):
86 for t in self.repo.nodetags(node):
86 for t in self.repo.nodetags(node):
87 yield self.t(t1, tag=t, **args)
87 yield self.t(t1, tag=t, **args)
88
88
89 def diff(self, node1, node2, files):
89 def diff(self, node1, node2, files):
90 def filterfiles(filters, files):
90 def filterfiles(filters, files):
91 l = [x for x in files if x in filters]
91 l = [x for x in files if x in filters]
92
92
93 for t in filters:
93 for t in filters:
94 if t and t[-1] != os.sep:
94 if t and t[-1] != os.sep:
95 t += os.sep
95 t += os.sep
96 l += [x for x in files if x.startswith(t)]
96 l += [x for x in files if x.startswith(t)]
97 return l
97 return l
98
98
99 parity = [0]
99 parity = [0]
100 def diffblock(diff, f, fn):
100 def diffblock(diff, f, fn):
101 yield self.t("diffblock",
101 yield self.t("diffblock",
102 lines=prettyprintlines(diff),
102 lines=prettyprintlines(diff),
103 parity=parity[0],
103 parity=parity[0],
104 file=f,
104 file=f,
105 filenode=hex(fn or nullid))
105 filenode=hex(fn or nullid))
106 parity[0] = 1 - parity[0]
106 parity[0] = 1 - parity[0]
107
107
108 def prettyprintlines(diff):
108 def prettyprintlines(diff):
109 for l in diff.splitlines(1):
109 for l in diff.splitlines(1):
110 if l.startswith('+'):
110 if l.startswith('+'):
111 yield self.t("difflineplus", line=l)
111 yield self.t("difflineplus", line=l)
112 elif l.startswith('-'):
112 elif l.startswith('-'):
113 yield self.t("difflineminus", line=l)
113 yield self.t("difflineminus", line=l)
114 elif l.startswith('@'):
114 elif l.startswith('@'):
115 yield self.t("difflineat", line=l)
115 yield self.t("difflineat", line=l)
116 else:
116 else:
117 yield self.t("diffline", line=l)
117 yield self.t("diffline", line=l)
118
118
119 r = self.repo
119 r = self.repo
120 cl = r.changelog
120 cl = r.changelog
121 mf = r.manifest
121 mf = r.manifest
122 change1 = cl.read(node1)
122 change1 = cl.read(node1)
123 change2 = cl.read(node2)
123 change2 = cl.read(node2)
124 mmap1 = mf.read(change1[0])
124 mmap1 = mf.read(change1[0])
125 mmap2 = mf.read(change2[0])
125 mmap2 = mf.read(change2[0])
126 date1 = util.datestr(change1[2])
126 date1 = util.datestr(change1[2])
127 date2 = util.datestr(change2[2])
127 date2 = util.datestr(change2[2])
128
128
129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
130 if files:
130 if files:
131 modified, added, removed = map(lambda x: filterfiles(files, x),
131 modified, added, removed = map(lambda x: filterfiles(files, x),
132 (modified, added, removed))
132 (modified, added, removed))
133
133
134 diffopts = self.repo.ui.diffopts()
134 diffopts = self.repo.ui.diffopts()
135 showfunc = diffopts['showfunc']
135 showfunc = diffopts['showfunc']
136 ignorews = diffopts['ignorews']
136 ignorews = diffopts['ignorews']
137 ignorewsamount = diffopts['ignorewsamount']
137 ignorewsamount = diffopts['ignorewsamount']
138 ignoreblanklines = diffopts['ignoreblanklines']
138 ignoreblanklines = diffopts['ignoreblanklines']
139 for f in modified:
139 for f in modified:
140 to = r.file(f).read(mmap1[f])
140 to = r.file(f).read(mmap1[f])
141 tn = r.file(f).read(mmap2[f])
141 tn = r.file(f).read(mmap2[f])
142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
143 showfunc=showfunc, ignorews=ignorews,
143 showfunc=showfunc, ignorews=ignorews,
144 ignorewsamount=ignorewsamount,
144 ignorewsamount=ignorewsamount,
145 ignoreblanklines=ignoreblanklines), f, tn)
145 ignoreblanklines=ignoreblanklines), f, tn)
146 for f in added:
146 for f in added:
147 to = None
147 to = None
148 tn = r.file(f).read(mmap2[f])
148 tn = r.file(f).read(mmap2[f])
149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
150 showfunc=showfunc, ignorews=ignorews,
150 showfunc=showfunc, ignorews=ignorews,
151 ignorewsamount=ignorewsamount,
151 ignorewsamount=ignorewsamount,
152 ignoreblanklines=ignoreblanklines), f, tn)
152 ignoreblanklines=ignoreblanklines), f, tn)
153 for f in removed:
153 for f in removed:
154 to = r.file(f).read(mmap1[f])
154 to = r.file(f).read(mmap1[f])
155 tn = None
155 tn = None
156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
157 showfunc=showfunc, ignorews=ignorews,
157 showfunc=showfunc, ignorews=ignorews,
158 ignorewsamount=ignorewsamount,
158 ignorewsamount=ignorewsamount,
159 ignoreblanklines=ignoreblanklines), f, tn)
159 ignoreblanklines=ignoreblanklines), f, tn)
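diffblock() above alternates row parity across calls by mutating a one-element list captured by the closure — the usual Python 2 idiom for writable closed-over state before `nonlocal` existed. The idiom in isolation (hypothetical names):

```python
def make_striper():
    parity = [0]                   # mutable cell: the closure cannot rebind
    def next_parity():             # a bare `parity` name, only mutate it
        p = parity[0]
        parity[0] = 1 - parity[0]  # toggle for the next row
        return p
    return next_parity

row = make_striper()
```

Each call to row() returns 0, 1, 0, 1, ..., which the templates map to alternating row styles.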
160
160
161 def changelog(self, pos):
161 def changelog(self, pos):
162 def changenav(**map):
162 def changenav(**map):
163 def seq(factor, maxchanges=None):
163 def seq(factor, maxchanges=None):
164 if maxchanges:
164 if maxchanges:
165 yield maxchanges
165 yield maxchanges
166 if maxchanges >= 20 and maxchanges <= 40:
166 if maxchanges >= 20 and maxchanges <= 40:
167 yield 50
167 yield 50
168 else:
168 else:
169 yield 1 * factor
169 yield 1 * factor
170 yield 3 * factor
170 yield 3 * factor
171 for f in seq(factor * 10):
171 for f in seq(factor * 10):
172 yield f
172 yield f
173
173
174 l = []
174 l = []
175 last = 0
175 last = 0
176 for f in seq(1, self.maxchanges):
176 for f in seq(1, self.maxchanges):
177 if f < self.maxchanges or f <= last:
177 if f < self.maxchanges or f <= last:
178 continue
178 continue
179 if f > count:
179 if f > count:
180 break
180 break
181 last = f
181 last = f
182 r = "%d" % f
182 r = "%d" % f
183 if pos + f < count:
183 if pos + f < count:
184 l.append(("+" + r, pos + f))
184 l.append(("+" + r, pos + f))
185 if pos - f >= 0:
185 if pos - f >= 0:
186 l.insert(0, ("-" + r, pos - f))
186 l.insert(0, ("-" + r, pos - f))
187
187
188 yield {"rev": 0, "label": "(0)"}
188 yield {"rev": 0, "label": "(0)"}
189
189
190 for label, rev in l:
190 for label, rev in l:
191 yield {"label": label, "rev": rev}
191 yield {"label": label, "rev": rev}
192
192
193 yield {"label": "tip", "rev": "tip"}
193 yield {"label": "tip", "rev": "tip"}
194
194
195 def changelist(**map):
195 def changelist(**map):
196 parity = (start - end) & 1
196 parity = (start - end) & 1
197 cl = self.repo.changelog
197 cl = self.repo.changelog
198 l = [] # build a list in forward order for efficiency
198 l = [] # build a list in forward order for efficiency
199 for i in range(start, end):
199 for i in range(start, end):
200 n = cl.node(i)
200 n = cl.node(i)
201 changes = cl.read(n)
201 changes = cl.read(n)
202 hn = hex(n)
202 hn = hex(n)
203
203
204 l.insert(0, {"parity": parity,
204 l.insert(0, {"parity": parity,
205 "author": changes[1],
205 "author": changes[1],
206 "parent": self.siblings(cl.parents(n), cl.rev,
206 "parent": self.siblings(cl.parents(n), cl.rev,
207 cl.rev(n) - 1),
207 cl.rev(n) - 1),
208 "child": self.siblings(cl.children(n), cl.rev,
208 "child": self.siblings(cl.children(n), cl.rev,
209 cl.rev(n) + 1),
209 cl.rev(n) + 1),
210 "changelogtag": self.showtag("changelogtag",n),
210 "changelogtag": self.showtag("changelogtag",n),
211 "manifest": hex(changes[0]),
211 "manifest": hex(changes[0]),
212 "desc": changes[4],
212 "desc": changes[4],
213 "date": changes[2],
213 "date": changes[2],
214 "files": self.listfilediffs(changes[3], n),
214 "files": self.listfilediffs(changes[3], n),
215 "rev": i,
215 "rev": i,
216 "node": hn})
216 "node": hn})
217 parity = 1 - parity
217 parity = 1 - parity
218
218
219 for e in l:
219 for e in l:
220 yield e
220 yield e
221
221
222 cl = self.repo.changelog
222 cl = self.repo.changelog
223 mf = cl.read(cl.tip())[0]
223 mf = cl.read(cl.tip())[0]
224 count = cl.count()
224 count = cl.count()
225 start = max(0, pos - self.maxchanges + 1)
225 start = max(0, pos - self.maxchanges + 1)
226 end = min(count, start + self.maxchanges)
226 end = min(count, start + self.maxchanges)
227 pos = end - 1
227 pos = end - 1
228
228
229 yield self.t('changelog',
229 yield self.t('changelog',
230 changenav=changenav,
230 changenav=changenav,
231 manifest=hex(mf),
231 manifest=hex(mf),
232 rev=pos, changesets=count, entries=changelist,
232 rev=pos, changesets=count, entries=changelist,
233 archives=self.archivelist("tip"))
233 archives=self.archivelist("tip"))
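The changenav()/seq() machinery above generates the changelog navigation distances: 1·10^k and 3·10^k (plus a 50 step for page sizes between 20 and 40), filtered to ascending unique values no larger than the repository size. A sketch of just the distance sequence, with the same arithmetic under a hypothetical name:

```python
def nav_distances(maxchanges, count):
    """Yield the ascending jump distances used for changelog navigation."""
    def seq(factor, maxchanges=None):
        if maxchanges:
            yield maxchanges
            if 20 <= maxchanges <= 40:
                yield 50
        else:
            yield 1 * factor
            yield 3 * factor
        for f in seq(factor * 10):   # lazy recursion; caller breaks it off
            yield f

    last = 0
    for f in seq(1, maxchanges):
        if f < maxchanges or f <= last:
            continue                 # drop too-small and duplicate steps
        if f > count:
            break                    # nothing beyond the repository size
        last = f
        yield f
```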
234
234
    def search(self, query):

        def changelist(**map):
            cl = self.repo.changelog
            count = 0
            qw = query.lower().split()

            def revgen():
                for i in range(cl.count() - 1, 0, -100):
                    l = []
                    for j in range(max(0, i - 100), i):
                        n = cl.node(j)
                        changes = cl.read(n)
                        l.append((n, j, changes))
                    l.reverse()
                    for e in l:
                        yield e

            for n, i, changes in revgen():
                miss = 0
                for q in qw:
                    if not (q in changes[1].lower() or
                            q in changes[4].lower() or
                            q in " ".join(changes[3][:20]).lower()):
                        miss = 1
                        break
                if miss:
                    continue

                count += 1
                hn = hex(n)

                yield self.t('searchentry',
                             parity=count & 1,
                             author=changes[1],
                             parent=self.siblings(cl.parents(n), cl.rev),
                             child=self.siblings(cl.children(n), cl.rev),
                             changelogtag=self.showtag("changelogtag", n),
                             manifest=hex(changes[0]),
                             desc=changes[4],
                             date=changes[2],
                             files=self.listfilediffs(changes[3], n),
                             rev=i,
                             node=hn)

                if count >= self.maxchanges:
                    break

        cl = self.repo.changelog
        mf = cl.read(cl.tip())[0]

        yield self.t('search',
                     query=query,
                     manifest=hex(mf),
                     entries=changelist)

    def changeset(self, nodeid):
        cl = self.repo.changelog
        n = self.repo.lookup(nodeid)
        nodeid = hex(n)
        changes = cl.read(n)
        p1 = cl.parents(n)[0]

        files = []
        mf = self.repo.manifest.read(changes[0])
        for f in changes[3]:
            files.append(self.t("filenodelink",
                                filenode=hex(mf.get(f, nullid)), file=f))

        def diff(**map):
            yield self.diff(p1, n, None)

        yield self.t('changeset',
                     diff=diff,
                     rev=cl.rev(n),
                     node=nodeid,
                     parent=self.siblings(cl.parents(n), cl.rev),
                     child=self.siblings(cl.children(n), cl.rev),
                     changesettag=self.showtag("changesettag", n),
                     manifest=hex(changes[0]),
                     author=changes[1],
                     desc=changes[4],
                     date=changes[2],
                     files=files,
                     archives=self.archivelist(nodeid))

    def filelog(self, f, filenode):
        cl = self.repo.changelog
        fl = self.repo.file(f)
        filenode = hex(fl.lookup(filenode))
        count = fl.count()

        def entries(**map):
            l = []
            parity = (count - 1) & 1

            for i in range(count):
                n = fl.node(i)
                lr = fl.linkrev(n)
                cn = cl.node(lr)
                cs = cl.read(cl.node(lr))

                l.insert(0, {"parity": parity,
                             "filenode": hex(n),
                             "filerev": i,
                             "file": f,
                             "node": hex(cn),
                             "author": cs[1],
                             "date": cs[2],
                             "rename": self.renamelink(fl, n),
                             "parent": self.siblings(fl.parents(n),
                                                     fl.rev, file=f),
                             "child": self.siblings(fl.children(n),
                                                    fl.rev, file=f),
                             "desc": cs[4]})
                parity = 1 - parity

            for e in l:
                yield e

        yield self.t("filelog", file=f, filenode=filenode, entries=entries)

    def filerevision(self, f, node):
        fl = self.repo.file(f)
        n = fl.lookup(node)
        node = hex(n)
        text = fl.read(n)
        changerev = fl.linkrev(n)
        cl = self.repo.changelog
        cn = cl.node(changerev)
        cs = cl.read(cn)
        mfn = cs[0]

        mt = mimetypes.guess_type(f)[0]
        rawtext = text
        if util.binary(text):
            mt = mt or 'application/octet-stream'
            text = "(binary:%s)" % mt
        mt = mt or 'text/plain'

        def lines():
            for l, t in enumerate(text.splitlines(1)):
                yield {"line": t,
                       "linenumber": "% 6d" % (l + 1),
                       "parity": l & 1}

        yield self.t("filerevision",
                     file=f,
                     filenode=node,
                     path=_up(f),
                     text=lines(),
                     raw=rawtext,
                     mimetype=mt,
                     rev=changerev,
                     node=hex(cn),
                     manifest=hex(mfn),
                     author=cs[1],
                     date=cs[2],
                     parent=self.siblings(fl.parents(n), fl.rev, file=f),
                     child=self.siblings(fl.children(n), fl.rev, file=f),
                     rename=self.renamelink(fl, n),
                     permissions=self.repo.manifest.readflags(mfn)[f])

    def fileannotate(self, f, node):
        bcache = {}
        ncache = {}
        fl = self.repo.file(f)
        n = fl.lookup(node)
        node = hex(n)
        changerev = fl.linkrev(n)

        cl = self.repo.changelog
        cn = cl.node(changerev)
        cs = cl.read(cn)
        mfn = cs[0]

        def annotate(**map):
            parity = 1
            last = None
            for r, l in fl.annotate(n):
                try:
                    cnode = ncache[r]
                except KeyError:
                    cnode = ncache[r] = self.repo.changelog.node(r)

                try:
                    name = bcache[r]
                except KeyError:
                    cl = self.repo.changelog.read(cnode)
                    bcache[r] = name = self.repo.ui.shortuser(cl[1])

                if last != cnode:
                    parity = 1 - parity
                    last = cnode

                yield {"parity": parity,
                       "node": hex(cnode),
                       "rev": r,
                       "author": name,
                       "file": f,
                       "line": l}

        yield self.t("fileannotate",
                     file=f,
                     filenode=node,
                     annotate=annotate,
                     path=_up(f),
                     rev=changerev,
                     node=hex(cn),
                     manifest=hex(mfn),
                     author=cs[1],
                     date=cs[2],
                     rename=self.renamelink(fl, n),
                     parent=self.siblings(fl.parents(n), fl.rev, file=f),
                     child=self.siblings(fl.children(n), fl.rev, file=f),
                     permissions=self.repo.manifest.readflags(mfn)[f])

    def manifest(self, mnode, path):
        man = self.repo.manifest
        mn = man.lookup(mnode)
        mnode = hex(mn)
        mf = man.read(mn)
        rev = man.rev(mn)
        changerev = man.linkrev(mn)
        node = self.repo.changelog.node(changerev)
        mff = man.readflags(mn)

        files = {}

        p = path[1:]
        if p and p[-1] != "/":
            p += "/"
        l = len(p)

        for f, n in mf.items():
            if f[:l] != p:
                continue
            remain = f[l:]
            if "/" in remain:
                short = remain[:remain.index("/") + 1] # bleah
                files[short] = (f, None)
            else:
                short = os.path.basename(remain)
                files[short] = (f, n)

        def filelist(**map):
            parity = 0
            fl = files.keys()
            fl.sort()
            for f in fl:
                full, fnode = files[f]
                if not fnode:
                    continue

                yield {"file": full,
                       "manifest": mnode,
                       "filenode": hex(fnode),
                       "parity": parity,
                       "basename": f,
                       "permissions": mff[full]}
                parity = 1 - parity

        def dirlist(**map):
            parity = 0
            fl = files.keys()
            fl.sort()
            for f in fl:
                full, fnode = files[f]
                if fnode:
                    continue

                yield {"parity": parity,
                       "path": os.path.join(path, f),
                       "manifest": mnode,
                       "basename": f[:-1]}
                parity = 1 - parity

        yield self.t("manifest",
                     manifest=mnode,
                     rev=rev,
                     node=hex(node),
                     path=path,
                     up=_up(path),
                     fentries=filelist,
                     dentries=dirlist,
                     archives=self.archivelist(hex(node)))

    def tags(self):
        cl = self.repo.changelog
        mf = cl.read(cl.tip())[0]

        i = self.repo.tagslist()
        i.reverse()

        def entries(notip=False, **map):
            parity = 0
            for k, n in i:
                if notip and k == "tip":
                    continue
                yield {"parity": parity,
                       "tag": k,
                       "tagmanifest": hex(cl.read(n)[0]),
                       "date": cl.read(n)[2],
                       "node": hex(n)}
                parity = 1 - parity

        yield self.t("tags",
                     manifest=hex(mf),
                     entries=lambda **x: entries(False, **x),
                     entriesnotip=lambda **x: entries(True, **x))

    def summary(self):
        cl = self.repo.changelog
        mf = cl.read(cl.tip())[0]

        i = self.repo.tagslist()
        i.reverse()

        def tagentries(**map):
            parity = 0
            count = 0
            for k, n in i:
                if k == "tip": # skip tip
                    continue

                count += 1
                if count > 10: # limit to 10 tags
                    break

                c = cl.read(n)
                m = c[0]
                t = c[2]

                yield self.t("tagentry",
                             parity=parity,
                             tag=k,
                             node=hex(n),
                             date=t,
                             tagmanifest=hex(m))
                parity = 1 - parity

        def changelist(**map):
            parity = 0
            cl = self.repo.changelog
            l = [] # build a list in forward order for efficiency
            for i in range(start, end):
                n = cl.node(i)
                changes = cl.read(n)
                hn = hex(n)
                t = changes[2]

                l.insert(0, self.t(
                    'shortlogentry',
                    parity=parity,
                    author=changes[1],
                    manifest=hex(changes[0]),
                    desc=changes[4],
                    date=t,
                    rev=i,
                    node=hn))
                parity = 1 - parity

            yield l

        cl = self.repo.changelog
        mf = cl.read(cl.tip())[0]
        count = cl.count()
        start = max(0, count - self.maxchanges)
        end = min(count, start + self.maxchanges)

        yield self.t("summary",
                     desc=self.repo.ui.config("web", "description", "unknown"),
                     owner=(self.repo.ui.config("ui", "username") or # preferred
                            self.repo.ui.config("web", "contact") or # deprecated
                            self.repo.ui.config("web", "author", "unknown")), # also
                     lastchange=(0, 0), # FIXME
                     manifest=hex(mf),
                     tags=tagentries,
                     shortlog=changelist)

    def filediff(self, file, changeset):
        cl = self.repo.changelog
        n = self.repo.lookup(changeset)
        changeset = hex(n)
        p1 = cl.parents(n)[0]
        cs = cl.read(n)
        mf = self.repo.manifest.read(cs[0])

        def diff(**map):
            yield self.diff(p1, n, [file])

        yield self.t("filediff",
                     file=file,
                     filenode=hex(mf.get(file, nullid)),
                     node=changeset,
                     rev=self.repo.changelog.rev(n),
                     parent=self.siblings(cl.parents(n), cl.rev),
                     child=self.siblings(cl.children(n), cl.rev),
                     diff=diff)

    archive_specs = {
        'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
        'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
        'zip': ('application/zip', 'zip', '.zip', None),
        }

    def archive(self, req, cnode, type_):
        reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
        name = "%s-%s" % (reponame, short(cnode))
        mimetype, artype, extension, encoding = self.archive_specs[type_]
        headers = [('Content-type', mimetype),
                   ('Content-disposition', 'attachment; filename=%s%s' %
                    (name, extension))]
        if encoding:
            headers.append(('Content-encoding', encoding))
        req.header(headers)
        archival.archive(self.repo, req.out, cnode, artype, prefix=name)

    # add tags to things
    # tags -> list of changesets corresponding to tags
    # find tag, changeset, file

    def cleanpath(self, path):
        p = util.normpath(path)
        if p[:2] == "..":
            raise Exception("suspicious path")
        return p

    def run(self):
        if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
            raise RuntimeError("This function is only intended to be called while running as a CGI script.")
        import mercurial.hgweb.wsgicgi as wsgicgi
        from request import wsgiapplication
        def make_web_app():
            return self
        wsgicgi.launch(wsgiapplication(make_web_app))

    def run_wsgi(self, req):
        def header(**map):
            header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
            msg = mimetools.Message(header_file, 0)
            req.header(msg.items())
            yield header_file.read()

        def rawfileheader(**map):
            req.header([('Content-type', map['mimetype']),
                        ('Content-disposition', 'filename=%s' % map['file']),
                        ('Content-length', str(len(map['raw'])))])
            yield ''

        def footer(**map):
            yield self.t("footer",
                         motd=self.repo.ui.config("web", "motd", ""),
                         **map)

        def expand_form(form):
            shortcuts = {
                'cl': [('cmd', ['changelog']), ('rev', None)],
                'cs': [('cmd', ['changeset']), ('node', None)],
                'f': [('cmd', ['file']), ('filenode', None)],
                'fl': [('cmd', ['filelog']), ('filenode', None)],
                'fd': [('cmd', ['filediff']), ('node', None)],
                'fa': [('cmd', ['annotate']), ('filenode', None)],
                'mf': [('cmd', ['manifest']), ('manifest', None)],
                'ca': [('cmd', ['archive']), ('node', None)],
                'tags': [('cmd', ['tags'])],
                'tip': [('cmd', ['changeset']), ('node', ['tip'])],
                'static': [('cmd', ['static']), ('file', None)]
                }

            for k in shortcuts.iterkeys():
                if form.has_key(k):
                    for name, value in shortcuts[k]:
                        if value is None:
                            value = form[k]
                        form[name] = value
                    del form[k]

        self.refresh()

        expand_form(req.form)

        m = os.path.join(self.templatepath, "map")
        style = self.repo.ui.config("web", "style", "")
        if req.form.has_key('style'):
            style = req.form['style'][0]
        if style:
            b = os.path.basename("map-" + style)
            p = os.path.join(self.templatepath, b)
            if os.path.isfile(p):
                m = p

        port = req.env["SERVER_PORT"]
        port = port != "80" and (":" + port) or ""
        uri = req.env["REQUEST_URI"]
        if "?" in uri:
            uri = uri.split("?")[0]
        url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
        if not self.reponame:
            self.reponame = (self.repo.ui.config("web", "name")
                             or uri.strip('/') or self.repo.root)

        self.t = templater.templater(m, templater.common_filters,
                                     defaults={"url": url,
                                               "repo": self.reponame,
                                               "header": header,
                                               "footer": footer,
                                               "rawfileheader": rawfileheader,
                                               })

        if not req.form.has_key('cmd'):
            req.form['cmd'] = [self.t.cache['default'],]

        cmd = req.form['cmd'][0]

        method = getattr(self, 'do_' + cmd, None)
        if method:
            method(req)
        else:
            req.write(self.t("error"))

    def do_changelog(self, req):
        hi = self.repo.changelog.count() - 1
        if req.form.has_key('rev'):
            hi = req.form['rev'][0]
            try:
                hi = self.repo.changelog.rev(self.repo.lookup(hi))
            except hg.RepoError:
                req.write(self.search(hi)) # XXX redirect to 404 page?
                return

        req.write(self.changelog(hi))

    def do_changeset(self, req):
        req.write(self.changeset(req.form['node'][0]))

    def do_manifest(self, req):
        req.write(self.manifest(req.form['manifest'][0],
                                self.cleanpath(req.form['path'][0])))

    def do_tags(self, req):
        req.write(self.tags())

    def do_summary(self, req):
        req.write(self.summary())

    def do_filediff(self, req):
        req.write(self.filediff(self.cleanpath(req.form['file'][0]),
                                req.form['node'][0]))

    def do_file(self, req):
        req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
                                    req.form['filenode'][0]))

    def do_annotate(self, req):
        req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
                                    req.form['filenode'][0]))

    def do_filelog(self, req):
        req.write(self.filelog(self.cleanpath(req.form['file'][0]),
                               req.form['filenode'][0]))

    def do_heads(self, req):
        resp = " ".join(map(hex, self.repo.heads())) + "\n"
        req.httphdr("application/mercurial-0.1", length=len(resp))
        req.write(resp)

    def do_branches(self, req):
        nodes = []
        if req.form.has_key('nodes'):
            nodes = map(bin, req.form['nodes'][0].split(" "))
        resp = cStringIO.StringIO()
        for b in self.repo.branches(nodes):
            resp.write(" ".join(map(hex, b)) + "\n")
        resp = resp.getvalue()
        req.httphdr("application/mercurial-0.1", length=len(resp))
        req.write(resp)

    def do_between(self, req):
        nodes = []
        if req.form.has_key('pairs'):
            pairs = [map(bin, p.split("-"))
                     for p in req.form['pairs'][0].split(" ")]
        resp = cStringIO.StringIO()
        for b in self.repo.between(pairs):
            resp.write(" ".join(map(hex, b)) + "\n")
        resp = resp.getvalue()
        req.httphdr("application/mercurial-0.1", length=len(resp))
        req.write(resp)

    def do_changegroup(self, req):
        req.httphdr("application/mercurial-0.1")
        nodes = []
        if not self.allowpull:
            return

        if req.form.has_key('roots'):
            nodes = map(bin, req.form['roots'][0].split(" "))

        z = zlib.compressobj()
        f = self.repo.changegroup(nodes, 'serve')
        while 1:
            chunk = f.read(4096)
            if not chunk:
                break
            req.write(z.compress(chunk))

        req.write(z.flush())

    def do_archive(self, req):
        changeset = self.repo.lookup(req.form['node'][0])
        type_ = req.form['type'][0]
        allowed = self.repo.ui.configlist("web", "allow_archive")
        if (type_ in self.archives and (type_ in allowed or
            self.repo.ui.configbool("web", "allow" + type_, False))):
            self.archive(req, changeset, type_)
            return

        req.write(self.t("error"))

    def do_static(self, req):
        fname = req.form['file'][0]
        static = self.repo.ui.config("web", "static",
                                     os.path.join(self.templatepath,
                                                  "static"))
        req.write(staticfile(static, fname, req)
                  or self.t("error", error="%r not found" % fname))

    def do_capabilities(self, req):
        caps = ['unbundle']
        if self.repo.ui.configbool('server', 'stream'):
            caps.append('stream=%d' % self.repo.revlogversion)
        resp = ' '.join(caps)
        req.httphdr("application/mercurial-0.1", length=len(resp))
        req.write(resp)

    def check_perm(self, req, op, default):
        '''check permission for operation based on user auth.
        return true if op allowed, else false.
        default is policy to use if no config given.'''

        user = req.env.get('REMOTE_USER')

        deny = self.repo.ui.configlist('web', 'deny_' + op)
        if deny and (not user or deny == ['*'] or user in deny):
            return False

        allow = self.repo.ui.configlist('web', 'allow_' + op)
        return (allow and (allow == ['*'] or user in allow)) or default

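The deny-then-allow logic of `check_perm` can be exercised on its own; a standalone sketch (a plain function instead of a method, with lists standing in for the `web.deny_*` / `web.allow_*` config values):

```python
def check_perm(user, deny, allow, default):
    # deny wins: an unauthenticated user, a '*' wildcard, or an explicit match
    if deny and (not user or deny == ['*'] or user in deny):
        return False
    # otherwise the allow list (or the caller-supplied default) decides
    return bool(allow and (allow == ['*'] or user in allow)) or default

# push passes default=False, so an empty config denies it
assert check_perm('alice', [], [], False) is False
assert check_perm('alice', [], ['alice'], False) is True
assert check_perm('alice', ['*'], ['alice'], False) is False
assert check_perm(None, ['bob'], ['*'], False) is False
```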
    def do_unbundle(self, req):
        def bail(response, headers={}):
            length = int(req.env['CONTENT_LENGTH'])
            for s in util.filechunkiter(req, limit=length):
                # drain incoming bundle, else client will not see
                # response when run outside cgi script
                pass
            req.httphdr("application/mercurial-0.1", headers=headers)
            req.write('0\n')
            req.write(response)

        # require ssl by default, auth info cannot be sniffed and
        # replayed
        ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
        if ssl_req and not req.env.get('HTTPS'):
            bail(_('ssl required\n'))
            return

        # do not allow push unless explicitly allowed
        if not self.check_perm(req, 'push', False):
            bail(_('push not authorized\n'),
                 headers={'status': '401 Unauthorized'})
            return

        req.httphdr("application/mercurial-0.1")

        their_heads = req.form['heads'][0].split(' ')

        def check_heads():
            heads = map(hex, self.repo.heads())
            return their_heads == [hex('force')] or their_heads == heads

        # fail early if possible
        if not check_heads():
            bail(_('unsynced changes\n'))
            return

        # do not lock repo until all changegroup data is
        # streamed. save to temporary file.

        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        try:
            length = int(req.env['CONTENT_LENGTH'])
            for s in util.filechunkiter(req, limit=length):
                fp.write(s)

            lock = self.repo.lock()
            try:
                if not check_heads():
                    req.write('0\n')
                    req.write(_('unsynced changes\n'))
                    return

                fp.seek(0)

                # send addchangegroup output to client

                old_stdout = sys.stdout
                sys.stdout = cStringIO.StringIO()

                try:
                    ret = self.repo.addchangegroup(fp, 'serve')
                finally:
                    val = sys.stdout.getvalue()
                    sys.stdout = old_stdout
                req.write('%d\n' % ret)
                req.write(val)
            finally:
                lock.release()
        finally:
            fp.close()
            os.unlink(tempname)

    def do_stream_out(self, req):
        req.httphdr("application/mercurial-0.1")
        streamclone.stream_out(self.repo, req)
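`do_unbundle` guards against a race by comparing the client's view of the repository heads before and after taking the lock; a client may send the hex encoding of the literal string `force` to push unconditionally. A standalone sketch of that check (hex strings are illustrative placeholders for real node hashes):

```python
import binascii

def heads_match(their_heads, current_heads):
    # the client may send hex('force') to bypass the heads comparison
    force = binascii.hexlify(b'force').decode('ascii')
    return their_heads == [force] or their_heads == current_heads

assert heads_match(['666f726365'], ['aa', 'bb']) is True   # 'force', hex-encoded
assert heads_match(['aa', 'bb'], ['aa', 'bb']) is True
assert heads_match(['aa'], ['aa', 'bb']) is False          # out of sync
```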
@@ -1,2254 +1,2258 b''
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from node import *
from i18n import gettext as _
from demandload import *
import repo
demandload(globals(), "appendfile changegroup")
demandload(globals(), "changelog dirstate filelog manifest context")
demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
demandload(globals(), "os revlog time util")

class localrepository(repo.repository):
    capabilities = ()

    def __del__(self):
        self.transhandle = None
    def __init__(self, parentui, path=None, create=0):
        repo.repository.__init__(self)
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp:
                    raise repo.RepoError(_("no repo found"))
            path = p
        self.path = os.path.join(path, ".hg")

        if not create and not os.path.isdir(self.path):
            raise repo.RepoError(_("repository %s not found") % path)

        self.root = os.path.abspath(path)
        self.origroot = path
        self.ui = ui.ui(parentui=parentui)
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)

        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
        except IOError:
            pass

        v = self.ui.revlogopts
        self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
        self.revlogv1 = self.revlogversion != revlog.REVLOGV0
        fl = v.get('flags', None)
        flags = 0
        if fl != None:
            for x in fl.split():
                flags |= revlog.flagstr(x)
        elif self.revlogv1:
            flags = revlog.REVLOG_DEFAULT_FLAGS

        v = self.revlogversion | flags
        self.manifest = manifest.manifest(self.opener, v)
        self.changelog = changelog.changelog(self.opener, v)

        # the changelog might not have the inline index flag
        # on. If the format of the changelog is the same as found in
        # .hgrc, apply any flags found in the .hgrc as well.
        # Otherwise, just version from the changelog
        v = self.changelog.version
        if v == self.revlogversion:
            v |= flags
        self.revlogversion = v

        self.tagscache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None
        self.transhandle = None

        if create:
            if not os.path.exists(path):
                os.mkdir(path)
            os.mkdir(self.path)
            os.mkdir(self.join("data"))

        self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)

    def hook(self, name, throw=False, **args):
        def callhook(hname, funcname):
            '''call python hook. hook is callable object, looked up as
            name in python module. if callable returns "true", hook
            fails, else passes. if hook raises exception, treated as
            hook failure. exception propagates if throw is "true".

            reason for "true" meaning "hook failed" is so that
            unmodified commands (e.g. mercurial.commands.update) can
            be run as hooks without wrappers to convert return values.'''

            self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
            d = funcname.rfind('.')
            if d == -1:
                raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
                                 % (hname, funcname))
            modname = funcname[:d]
            try:
                obj = __import__(modname)
            except ImportError:
                try:
                    # extensions are loaded with hgext_ prefix
                    obj = __import__("hgext_%s" % modname)
                except ImportError:
                    raise util.Abort(_('%s hook is invalid '
                                       '(import of "%s" failed)') %
                                     (hname, modname))
            try:
                for p in funcname.split('.')[1:]:
                    obj = getattr(obj, p)
            except AttributeError, err:
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not defined)') %
                                 (hname, funcname))
            if not callable(obj):
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not callable)') %
                                 (hname, funcname))
            try:
                r = obj(ui=self.ui, repo=self, hooktype=name, **args)
            except (KeyboardInterrupt, util.SignalInterrupt):
                raise
            except Exception, exc:
                if isinstance(exc, util.Abort):
                    self.ui.warn(_('error: %s hook failed: %s\n') %
                                 (hname, exc.args[0] % exc.args[1:]))
                else:
                    self.ui.warn(_('error: %s hook raised an exception: '
                                   '%s\n') % (hname, exc))
                if throw:
                    raise
                self.ui.print_exc()
                return True
            if r:
                if throw:
                    raise util.Abort(_('%s hook failed') % hname)
                self.ui.warn(_('warning: %s hook failed\n') % hname)
            return r

        def runhook(name, cmd):
            self.ui.note(_("running hook %s: %s\n") % (name, cmd))
            env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
            r = util.system(cmd, environ=env, cwd=self.root)
            if r:
                desc, r = util.explain_exit(r)
                if throw:
                    raise util.Abort(_('%s hook %s') % (name, desc))
                self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
            return r

        r = False
        hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
                 if hname.split(".", 1)[0] == name and cmd]
        hooks.sort()
        for hname, cmd in hooks:
            if cmd.startswith('python:'):
                r = callhook(hname, cmd[7:].strip()) or r
            else:
                r = runhook(hname, cmd) or r
        return r

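The final loop of `hook` selects configured hooks by their dotted-name prefix, skips empty commands, runs them in sorted order, and dispatches on the `python:` marker. The selection step can be sketched standalone (the sample config items below are made up for illustration):

```python
def select_hooks(name, configitems):
    # keep entries whose name (before any '.suffix') matches; drop empty cmds
    hooks = [(hname, cmd) for hname, cmd in configitems
             if hname.split(".", 1)[0] == name and cmd]
    hooks.sort()
    return hooks

items = [('commit', 'echo committed'),
         ('commit.notify', 'python:notify.hook'),
         ('update', 'echo updated'),
         ('commit.empty', '')]
assert select_hooks('commit', items) == [
    ('commit', 'echo committed'), ('commit.notify', 'python:notify.hook')]
assert select_hooks('update', items) == [('update', 'echo updated')]
```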
    tag_disallowed = ':\r\n'

    def tag(self, name, node, local=False, message=None, user=None, date=None):
        '''tag a revision with a symbolic name.

        if local is True, the tag is stored in a per-repository file.
        otherwise, it is stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tag in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        for c in self.tag_disallowed:
            if c in name:
                raise util.Abort(_('%r cannot be used in a tag name') % c)

        self.hook('pretag', throw=True, node=node, tag=name, local=local)

        if local:
            self.opener('localtags', 'a').write('%s %s\n' % (node, name))
            self.hook('tag', node=node, tag=name, local=local)
            return

        for x in self.changes():
            if '.hgtags' in x:
                raise util.Abort(_('working copy of .hgtags is changed '
                                   '(please commit .hgtags manually)'))

        self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
        if self.dirstate.state('.hgtags') == '?':
            self.add(['.hgtags'])

        if not message:
            message = _('Added tag %s for changeset %s') % (name, node)

        self.commit(['.hgtags'], message, user, date)
        self.hook('tag', node=node, tag=name, local=local)

    def tags(self):
        '''return a mapping of tag to node'''
        if not self.tagscache:
            self.tagscache = {}

            def parsetag(line, context):
                if not line:
                    return
                s = line.split(" ", 1)
                if len(s) != 2:
                    self.ui.warn(_("%s: cannot parse entry\n") % context)
                    return
                node, key = s
                key = key.strip()
                try:
                    bin_n = bin(node)
                except TypeError:
                    self.ui.warn(_("%s: node '%s' is not well formed\n") %
                                 (context, node))
                    return
                if bin_n not in self.changelog.nodemap:
                    self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
                                 (context, key))
                    return
                self.tagscache[key] = bin_n

            # read the tags file from each head, ending with the tip,
            # and add each tag found to the map, with "newer" ones
            # taking precedence
            heads = self.heads()
            heads.reverse()
            fl = self.file(".hgtags")
            for node in heads:
                change = self.changelog.read(node)
                rev = self.changelog.rev(node)
                fn, ff = self.manifest.find(change[0], '.hgtags')
                if fn is None: continue
                count = 0
                for l in fl.read(fn).splitlines():
                    count += 1
                    parsetag(l, _(".hgtags (rev %d:%s), line %d") %
                             (rev, short(node), count))
            try:
                f = self.opener("localtags")
                count = 0
                for l in f:
                    count += 1
                    parsetag(l, _("localtags, line %d") % count)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

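`tags` walks the heads oldest-first (after the `reverse()`) so that entries from newer heads overwrite older ones, then applies `localtags` and finally the `tip` pseudo-tag. The precedence order can be sketched with plain dicts (the node strings here are placeholders, not real hashes):

```python
def merge_tags(per_head_tags, localtags, tipnode):
    # per_head_tags is ordered with the newest head last, so later wins
    cache = {}
    for tags in per_head_tags:
        cache.update(tags)
    cache.update(localtags)   # local tags shadow versioned .hgtags entries
    cache['tip'] = tipnode    # 'tip' always points at the tip changeset
    return cache

merged = merge_tags([{'v1': 'aaa'}, {'v1': 'bbb', 'v2': 'ccc'}],
                    {'wip': 'ddd'}, 'eee')
assert merged == {'v1': 'bbb', 'v2': 'ccc', 'wip': 'ddd', 'tip': 'eee'}
```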
    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def lookup(self, key):
        try:
            return self.tags()[key]
        except KeyError:
            try:
                return self.changelog.lookup(key)
            except:
                raise repo.RepoError(_("unknown revision '%s'") % key)

    def dev(self):
        return os.lstat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.opener, f, self.revlogversion)

    def changectx(self, changeid):
        return context.changectx(self, changeid)

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        tr = self.transhandle
        if tr != None and tr.running():
            return tr.nest()

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        tr = transaction.transaction(self.ui.warn, self.opener,
                                     self.join("journal"),
                                     aftertrans(self.path))
        self.transhandle = tr
        return tr

    def recover(self):
        l = self.lock()
        if os.path.exists(self.join("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.opener, self.join("journal"))
            self.reload()
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def rollback(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.join("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.opener, self.join("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.reload()
            self.wreload()
        else:
            self.ui.warn(_("no rollback information available\n"))

    def wreload(self):
        self.dirstate.read()

    def reload(self):
        self.changelog.load()
        self.manifest.load()
        self.tagscache = None
        self.nodetagscache = None

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
                desc=None):
        try:
            l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
        except lock.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %s\n") %
                         (desc, inst.args[0]))
            # default to 600 seconds timeout
            l = lock.lock(self.join(lockname),
                          int(self.ui.config("ui", "timeout") or 600),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock("lock", wait, acquirefn=self.reload,
                            desc=_('repository %s') % self.origroot)

    def wlock(self, wait=1):
        return self.do_lock("wlock", wait, self.dirstate.write,
                            self.wreload,
                            desc=_('working directory of %s') % self.origroot)

    def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
        "determine whether a new filenode is needed"
        fp1 = manifest1.get(filename, nullid)
        fp2 = manifest2.get(filename, nullid)

        if fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = filelog.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and text == filelog.read(fp1):
            return (fp1, None, None)

        return (None, fp1, fp2)

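`checkfilemerge` first normalizes the two parent filenodes: when one parent is an ancestor of the other, only the descendant survives. That reduction can be sketched independently (the `ancestor` function is injected, and `NULLID` stands in for Mercurial's `nullid` sentinel):

```python
NULLID = b'\0' * 20  # stand-in for mercurial's nullid sentinel

def reduce_parents(fp1, fp2, ancestor):
    # drop a parent that is an ancestor of the other parent
    if fp2 != NULLID:
        fpa = ancestor(fp1, fp2)
        if fpa == fp1:
            fp1, fp2 = fp2, NULLID
        elif fpa == fp2:
            fp2 = NULLID
    return fp1, fp2

# linear history: b descends from a, so only b survives
a, b = b'a' * 20, b'b' * 20
assert reduce_parents(a, b, lambda x, y: a) == (b, NULLID)
# true merge: distinct common ancestor c, both parents kept
c = b'c' * 20
assert reduce_parents(a, b, lambda x, y: c) == (a, b)
```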
    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        orig_parent = self.dirstate.parents()[0] or nullid
        p1 = p1 or self.dirstate.parents()[0] or nullid
        p2 = p2 or self.dirstate.parents()[1] or nullid
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])
        changed = []

        if orig_parent == p1:
            update_dirstate = 1
        else:
            update_dirstate = 0

        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        tr = self.transaction()
        mm = m1.copy()
        mfm = mf1.copy()
        linkrev = self.changelog.count()
        for f in files:
            try:
                t = self.wread(f)
                tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
                r = self.file(f)
                mfm[f] = tm

                (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    mm[f] = entry
                    continue

                mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
                changed.append(f)
                if update_dirstate:
                    self.dirstate.update([f], "n")
            except IOError:
                try:
                    del mm[f]
                    del mfm[f]
                    if update_dirstate:
                        self.dirstate.forget([f])
                except:
                    # deleted from p2?
                    pass

        mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
        user = user or self.ui.username()
        n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
        tr.close()
        if update_dirstate:
            self.dirstate.setparents(n, nullid)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, lock=None, wlock=None,
               force_editor=False):
        commit = []
        remove = []
        changed = []

        if files:
            for f in files:
                s = self.dirstate.state(f)
                if s in 'nmai':
                    commit.append(f)
                elif s == 'r':
                    remove.append(f)
                else:
                    self.ui.warn(_("%s not tracked!\n") % f)
        else:
            modified, added, removed, deleted, unknown = self.changes(match=match)
            commit = modified + added
            remove = removed

        p1, p2 = self.dirstate.parents()
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])

        if not commit and not remove and not force and p2 == nullid:
            self.ui.status(_("nothing changed\n"))
            return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        if not lock:
            lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
                t = self.wread(f)
            except IOError:
                self.ui.warn(_("trouble committing %s!\n") % f)
                raise

            r = self.file(f)

            meta = {}
            cp = self.dirstate.copied(f)
            if cp:
                meta["copy"] = cp
                meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
                self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
                fp1, fp2 = nullid, nullid
            else:
                entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    new[f] = entry
                    continue

            new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
            # remember what we've added so that we can later calculate
            # the files to pull from a set of changesets
            changed.append(f)

        # update manifest
        m1 = m1.copy()
        m1.update(new)
        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
                               (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        user = user or self.ui.username()
        if not text or force_editor:
            edittext = []
            if text:
                edittext.append(text)
            edittext.append("")
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            text = self.ui.edit("\n".join(edittext), user)
            os.chdir(olddir)

        lines = [line.rstrip() for line in text.rstrip().splitlines()]
        while lines and not lines[0]:
            del lines[0]
        if not lines:
            return None
        text = '\n'.join(lines)
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        self.dirstate.setparents(n)
        self.dirstate.update(new, "n")
        self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

    def walk(self, node=None, files=[], match=util.always, badmatch=None):
        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                fdict.pop(fn, None)
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                if badmatch and badmatch(fn):
                    if match(fn):
                        yield 'b', fn
                else:
                    self.ui.warn(_('%s: No such file in rev %s\n') % (
                        util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
                yield src, fn

    def changes(self, node1=None, node2=None, files=[], match=util.always,
                wlock=None, show_ignored=None):
        """return changes between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            t2 = self.file(fn).read(mf.get(fn, nullid))
            return cmp(t1, t2)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = dict(self.manifest.read(change[0]))
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        modified, added, removed, deleted, unknown, ignored = [],[],[],[],[],[]
        compareworking = False
        if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
            compareworking = True

        if not compareworking:
            # read the manifest from node1 before the manifest from node2,
            # so that we'll hit the manifest cache if we're going through
            # all the revisions in parent->child order.
            mf1 = mfmatches(node1)

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            lookup, modified, added, removed, deleted, unknown, ignored = (
                self.dirstate.changes(files, match, show_ignored))

            # are we comparing working dir against its parent?
            if compareworking:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        elif wlock is not None:
                            self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            deleted, unknown, ignored = [], [], []
            mf2 = mfmatches(node2)

        if not compareworking:
            # flush lists from dirstate before comparing manifests
            modified, added = [], []

            # make sure to sort the files so we talk to the disk in a
            # reasonable order
            mf2keys = mf2.keys()
            mf2keys.sort()
            for fn in mf2keys:
                if mf1.has_key(fn):
                    if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
                        modified.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown, ignored:
            l.sort()
        if show_ignored is None:
            return (modified, added, removed, deleted, unknown)
        else:
            return (modified, added, removed, deleted, unknown, ignored)

    def add(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        mf = self.manifest.readflags(mn)
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn("%s not removed!\n" % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), mf[f])
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [ h for h in heads ]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head.  The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

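    # Example (illustrative, not in the original source): for each
    # top/bottom pair, the i == f / f *= 2 sampling above records nodes
    # at exponentially growing distances from `top`, which is what lets
    # the discovery code bisect a branch range in O(log n) round trips.
    # A standalone sketch of the index pattern for a 10-deep chain:
    #
    #     >>> def sample(depth):
    #     ...     l, i, f = [], 0, 1
    #     ...     while i < depth:
    #     ...         if i == f:
    #     ...             l.append(i)
    #     ...             f *= 2
    #     ...         i += 1
    #     ...     return l
    #     >>> sample(10)
    #     [1, 2, 4, 8]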
973 def findincoming(self, remote, base=None, heads=None, force=False):
973 def findincoming(self, remote, base=None, heads=None, force=False):
974 """Return list of roots of the subsets of missing nodes from remote
974 """Return list of roots of the subsets of missing nodes from remote
975
975
976 If base dict is specified, assume that these nodes and their parents
976 If base dict is specified, assume that these nodes and their parents
977 exist on the remote side and that no child of a node of base exists
977 exist on the remote side and that no child of a node of base exists
978 in both remote and self.
978 in both remote and self.
979 Furthermore base will be updated to include the nodes that exists
979 Furthermore base will be updated to include the nodes that exists
980 in self and remote but no children exists in self and remote.
980 in self and remote but no children exists in self and remote.
981 If a list of heads is specified, return only nodes which are heads
981 If a list of heads is specified, return only nodes which are heads
982 or ancestors of these heads.
982 or ancestors of these heads.
983
983
984 All the ancestors of base are in self and in remote.
984 All the ancestors of base are in self and in remote.
985 All the descendants of the list returned are missing in self.
985 All the descendants of the list returned are missing in self.
986 (and so we know that the rest of the nodes are missing in remote, see
986 (and so we know that the rest of the nodes are missing in remote, see
987 outgoing)
987 outgoing)
988 """
988 """
989 m = self.changelog.nodemap
989 m = self.changelog.nodemap
990 search = []
990 search = []
991 fetch = {}
991 fetch = {}
992 seen = {}
992 seen = {}
993 seenbranch = {}
993 seenbranch = {}
994 if base == None:
994 if base == None:
995 base = {}
995 base = {}
996
996
997 if not heads:
997 if not heads:
998 heads = remote.heads()
998 heads = remote.heads()
999
999
1000 if self.changelog.tip() == nullid:
1000 if self.changelog.tip() == nullid:
1001 base[nullid] = 1
1001 base[nullid] = 1
1002 if heads != [nullid]:
1002 if heads != [nullid]:
1003 return [nullid]
1003 return [nullid]
1004 return []
1004 return []
1005
1005
1006 # assume we're closer to the tip than the root
1006 # assume we're closer to the tip than the root
1007 # and start by examining the heads
1007 # and start by examining the heads
1008 self.ui.status(_("searching for changes\n"))
1008 self.ui.status(_("searching for changes\n"))
1009
1009
1010 unknown = []
1010 unknown = []
1011 for h in heads:
1011 for h in heads:
1012 if h not in m:
1012 if h not in m:
1013 unknown.append(h)
1013 unknown.append(h)
1014 else:
1014 else:
1015 base[h] = 1
1015 base[h] = 1
1016
1016
1017 if not unknown:
1017 if not unknown:
1018 return []
1018 return []
1019
1019
1020 req = dict.fromkeys(unknown)
1020 req = dict.fromkeys(unknown)
1021 reqcnt = 0
1021 reqcnt = 0
1022
1022
        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                        for p in n[2:4]:
                            if p in m:
                                base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base is None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads[p1] = True
                if p2 in heads:
                    updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset

    def pull(self, remote, heads=None, force=False):
        l = self.lock()

        fetch = self.findincoming(remote, force=force)
        if fetch == [nullid]:
            self.ui.status(_("requesting all changes\n"))

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 0

        if heads is None:
            cg = remote.changegroup(fetch, 'pull')
        else:
            cg = remote.changegroupsubset(fetch, heads, 'pull')
        return self.addchangegroup(cg, 'pull')

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync?"
                             " use push -f to force)\n"))
            return None, 1

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # FIXME we don't properly detect creation of new heads
            # in the push -r case, assume the user knows what he's doing
            if not revs and len(remote_heads) < len(heads) \
               and remote_heads != [nullid]:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return None, 1

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads

    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            return remote.addchangegroup(cg, 'push')
        return ret[1]

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push.  once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force:
                remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupsubset(self, bases, heads, source):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to."""

        self.hook('preoutgoing', throw=True, source=source)

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)

        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup.  The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and the total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file; let's remove
        # all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generator function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if fname in msng_filenode_set:
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield changegroup.genchunk(fname)
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1501 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1502 # Create a group generator and only pass in a changenode
1502 # Create a group generator and only pass in a changenode
1503 # lookup function as we need to collect no information
1503 # lookup function as we need to collect no information
1504 # from filenodes.
1504 # from filenodes.
1505 group = filerevlog.group(msng_filenode_lst,
1505 group = filerevlog.group(msng_filenode_lst,
1506 lookup_filenode_link_func(fname))
1506 lookup_filenode_link_func(fname))
1507 for chnk in group:
1507 for chnk in group:
1508 yield chnk
1508 yield chnk
1509 if msng_filenode_set.has_key(fname):
1509 if msng_filenode_set.has_key(fname):
1510 # Don't need this anymore, toss it to free memory.
1510 # Don't need this anymore, toss it to free memory.
1511 del msng_filenode_set[fname]
1511 del msng_filenode_set[fname]
1512 # Signal that no more groups are left.
1512 # Signal that no more groups are left.
1513 yield changegroup.closechunk()
1513 yield changegroup.closechunk()
1514
1514
1515 if msng_cl_lst:
1515 if msng_cl_lst:
1516 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1516 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1517
1517
1518 return util.chunkbuffer(gengroup())
1518 return util.chunkbuffer(gengroup())
1519
1519
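    # A note on the stream produced by gengroup() above: it emits, in
    # order, the changelog group, the manifest group, then for each
    # changed file a chunk naming the file followed by that file's
    # delta group, and finally a closing chunk to mark the end of the
    # changegroup.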
    def changegroup(self, basenodes, source):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])

        def identity(x):
            return x

        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.genchunk(fname)
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield changegroup.closechunk()

        if nodes:
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source, srctype):
        """add changegroup to repo.
        returns number of heads modified or added + 1."""

        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return cl.count()

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype)

        changesets = files = revisions = 0

        tr = self.transaction()

        # write changelog data to temp files so concurrent readers will not see
        # inconsistent view
        cl = None
        try:
            cl = appendfile.appendchangelog(self.opener, self.changelog.version)

            oldheads = len(cl.heads())

            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            cor = cl.count() - 1
            chunkiter = changegroup.chunkiter(source)
            if cl.addgroup(chunkiter, csmap, tr, 1) is None:
                raise util.Abort(_("received changelog group is empty"))
            cnr = cl.count() - 1
            changesets = cnr - cor

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            chunkiter = changegroup.chunkiter(source)
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(chunkiter, revmap, tr)

            # process the files
            self.ui.status(_("adding file changes\n"))
            while 1:
                f = changegroup.getchunk(source)
                if not f:
                    break
                self.ui.debug(_("adding %s revisions\n") % f)
                fl = self.file(f)
                o = fl.count()
                chunkiter = changegroup.chunkiter(source)
                if fl.addgroup(chunkiter, revmap, tr) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += fl.count() - o
                files += 1

            cl.writedata()
        finally:
            if cl:
                cl.cleanup()

        # make changelog see real files again
        self.changelog = changelog.changelog(self.opener, self.changelog.version)
        self.changelog.checkinlinesize(tr)

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads != oldheads:
            heads = _(" (%+d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        if changesets > 0:
            self.hook('pretxnchangegroup', throw=True,
                      node=hex(self.changelog.node(cor+1)), source=srctype)

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
                      source=srctype)

            for i in range(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)),
                          source=srctype)

        return newheads - oldheads + 1

    def update(self, node, allow=False, force=False, choose=None,
               moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
        pl = self.dirstate.parents()
        if not force and pl[1] != nullid:
            raise util.Abort(_("outstanding uncommitted merges"))

        err = False

        p1, p2 = pl[0], node
        pa = self.changelog.ancestor(p1, p2)
        m1n = self.changelog.read(p1)[0]
        m2n = self.changelog.read(p2)[0]
        man = self.manifest.ancestor(m1n, m2n)
        m1 = self.manifest.read(m1n)
        mf1 = self.manifest.readflags(m1n)
        m2 = self.manifest.read(m2n).copy()
        mf2 = self.manifest.readflags(m2n)
        ma = self.manifest.read(man)
        mfa = self.manifest.readflags(man)

        modified, added, removed, deleted, unknown = self.changes()

        # is this a jump, or a merge? i.e. is there a linear path
        # from p1 to p2?
        linear_path = (pa == p1 or pa == p2)

        if allow and linear_path:
            raise util.Abort(_("there is nothing to merge, just use "
                               "'hg update' or look at 'hg heads'"))
        if allow and not forcemerge:
            if modified or added or removed:
                raise util.Abort(_("outstanding uncommitted changes"))

        if not forcemerge and not force:
            for f in unknown:
                if f in m2:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) != 0:
                        raise util.Abort(_("'%s' already exists in the working"
                                           " dir and differs from remote") % f)

        # resolve the manifest to determine which files
        # we care about merging
        self.ui.note(_("resolving manifests\n"))
        self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
                      (force, allow, moddirstate, linear_path))
        self.ui.debug(_(" ancestor %s local %s remote %s\n") %
                      (short(man), short(m1n), short(m2n)))

        merge = {}
        get = {}
        remove = []

        # construct a working dir manifest
        mw = m1.copy()
        mfw = mf1.copy()
        umap = dict.fromkeys(unknown)

        for f in added + modified + unknown:
            mw[f] = ""
            mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))

        if moddirstate and not wlock:
            wlock = self.wlock()

        for f in deleted + removed:
            if f in mw:
                del mw[f]

            # If we're jumping between revisions (as opposed to merging),
            # and if neither the working directory nor the target rev has
            # the file, then we need to remove it from the dirstate, to
            # prevent the dirstate from listing the file when it is no
            # longer in the manifest.
            if moddirstate and linear_path and f not in m2:
                self.dirstate.forget((f,))

        # Compare manifests
        for f, n in mw.iteritems():
            if choose and not choose(f):
                continue
            if f in m2:
                s = 0

                # is the wfile new since m1, and match m2?
                if f not in m1:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) == 0:
                        n = m2[f]
                    del t1, t2

                # are files different?
                if n != m2[f]:
                    a = ma.get(f, nullid)
                    # are both different from the ancestor?
                    if n != a and m2[f] != a:
                        self.ui.debug(_(" %s versions differ, resolve\n") % f)
                        # merge executable bits
                        # "if we changed or they changed, change in merge"
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        merge[f] = (m1.get(f, nullid), m2[f], mode)
                        s = 1
                    # are we clobbering?
                    # is remote's version newer?
                    # or are we going back in time?
                    elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
                        self.ui.debug(_(" remote %s is newer, get\n") % f)
                        get[f] = m2[f]
                        s = 1
                elif f in umap or f in added:
                    # this unknown file is the same as the checkout
                    # we need to reset the dirstate if the file was added
                    get[f] = m2[f]

                if not s and mfw[f] != mf2[f]:
                    if force:
                        self.ui.debug(_(" updating permissions for %s\n") % f)
                        util.set_exec(self.wjoin(f), mf2[f])
                    else:
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        if mode != b:
                            self.ui.debug(_(" updating permissions for %s\n")
                                          % f)
                            util.set_exec(self.wjoin(f), mode)
                del m2[f]
            elif f in ma:
                if n != ma[f]:
                    r = _("d")
                    if not force and (linear_path or allow):
                        r = self.ui.prompt(
                            (_(" local changed %s which remote deleted\n") % f) +
                            _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                    if r == _("d"):
                        remove.append(f)
                else:
                    self.ui.debug(_("other deleted %s\n") % f)
                    remove.append(f) # other deleted it
            else:
                # file is created on branch or in working directory
                if force and f not in umap:
                    self.ui.debug(_("remote deleted %s, clobbering\n") % f)
                    remove.append(f)
                elif n == m1.get(f, nullid): # same as parent
                    if p2 == pa: # going backwards?
                        self.ui.debug(_("remote deleted %s\n") % f)
                        remove.append(f)
                    else:
                        self.ui.debug(_("local modified %s, keeping\n") % f)
                else:
                    self.ui.debug(_("working dir created %s, keeping\n") % f)

        for f, n in m2.iteritems():
            if choose and not choose(f):
                continue
            if f[0] == "/":
                continue
            if f in ma and n != ma[f]:
                r = _("k")
                if not force and (linear_path or allow):
                    r = self.ui.prompt(
                        (_("remote changed %s which local deleted\n") % f) +
                        _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                if r == _("k"):
                    get[f] = n
            elif f not in ma:
                self.ui.debug(_("remote created %s\n") % f)
                get[f] = n
            else:
                if force or p2 == pa: # going backwards?
                    self.ui.debug(_("local deleted %s, recreating\n") % f)
                    get[f] = n
                else:
                    self.ui.debug(_("local deleted %s\n") % f)

        del mw, m1, m2, ma

        if force:
            for f in merge:
                get[f] = merge[f][1]
            merge = {}

        if linear_path or force:
            # we don't need to do any magic, just jump to the new rev
            branch_merge = False
            p1, p2 = p2, nullid
        else:
            if not allow:
                self.ui.status(_("this update spans a branch"
                                 " affecting the following files:\n"))
                fl = merge.keys() + get.keys()
                fl.sort()
                for f in fl:
                    cf = ""
                    if f in merge:
                        cf = _(" (resolve)")
                    self.ui.status(" %s%s\n" % (f, cf))
                self.ui.warn(_("aborting update spanning branches!\n"))
                self.ui.status(_("(use 'hg merge' to merge across branches"
                                 " or 'hg update -C' to lose changes)\n"))
                return 1
            branch_merge = True

        xp1 = hex(p1)
        xp2 = hex(p2)
        if p2 == nullid: xxp2 = ''
        else: xxp2 = xp2

        self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)

        # get the files we don't need to change
        files = get.keys()
        files.sort()
        for f in files:
            if f[0] == "/":
                continue
            self.ui.note(_("getting %s\n") % f)
            t = self.file(f).read(get[f])
            self.wwrite(f, t)
            util.set_exec(self.wjoin(f), mf2[f])
            if moddirstate:
                if branch_merge:
                    self.dirstate.update([f], 'n', st_mtime=-1)
                else:
                    self.dirstate.update([f], 'n')

        # merge the tricky bits
        failedmerge = []
        files = merge.keys()
        files.sort()
        for f in files:
            self.ui.status(_("merging %s\n") % f)
            my, other, flag = merge[f]
            ret = self.merge3(f, my, other, xp1, xp2)
            if ret:
                err = True
                failedmerge.append(f)
            util.set_exec(self.wjoin(f), flag)
            if moddirstate:
                if branch_merge:
                    # We've done a branch merge, mark this file as merged
                    # so that we properly record the merger later
                    self.dirstate.update([f], 'm')
                else:
                    # We've update-merged a locally modified file, so
                    # we set the dirstate to emulate a normal checkout
                    # of that file some time in the past. Thus our
                    # merge will appear as a normal local file
                    # modification.
                    f_len = len(self.file(f).read(other))
                    self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)

        remove.sort()
        for f in remove:
            self.ui.note(_("removing %s\n") % f)
            util.audit_path(f)
            try:
                util.unlink(self.wjoin(f))
            except OSError, inst:
                if inst.errno != errno.ENOENT:
                    self.ui.warn(_("update failed to remove %s: %s!\n") %
                                 (f, inst.strerror))
        if moddirstate:
            if branch_merge:
                self.dirstate.update(remove, 'r')
            else:
                self.dirstate.forget(remove)

        if moddirstate:
            self.dirstate.setparents(p1, p2)

        if show_stats:
            stats = ((len(get), _("updated")),
                     (len(merge) - len(failedmerge), _("merged")),
                     (len(remove), _("removed")),
                     (len(failedmerge), _("unresolved")))
            note = ", ".join([_("%d files %s") % s for s in stats])
            self.ui.status("%s\n" % note)
        if moddirstate:
            if branch_merge:
                if failedmerge:
                    self.ui.status(_("There are unresolved merges,"
                                     " you can redo the full merge using:\n"
                                     "  hg update -C %s\n"
                                     "  hg merge %s\n"
                                     % (self.changelog.rev(p1),
                                        self.changelog.rev(p2))))
                else:
                    self.ui.status(_("(branch merge, don't forget to commit)\n"))
            elif failedmerge:
                self.ui.status(_("There are unresolved merges with"
                                 " locally modified files.\n"))

        self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
        return err

    def merge3(self, fn, my, other, p1, p2):
        """perform a 3-way merge in the working directory"""

        def temp(prefix, node):
            pre = "%s~%s." % (os.path.basename(fn), prefix)
            (fd, name) = tempfile.mkstemp(prefix=pre)
            f = os.fdopen(fd, "wb")
            self.wwrite(fn, fl.read(node), f)
            f.close()
            return name

        fl = self.file(fn)
        base = fl.ancestor(my, other)
        a = self.wjoin(fn)
        b = temp("base", base)
        c = temp("other", other)

        self.ui.note(_("resolving %s\n") % fn)
        self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
                      (fn, short(my), short(other), short(base)))

        cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
               or "hgmerge")
        r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
                        environ={'HG_FILE': fn,
                                 'HG_MY_NODE': p1,
                                 'HG_OTHER_NODE': p2,
                                 'HG_FILE_MY_NODE': hex(my),
                                 'HG_FILE_OTHER_NODE': hex(other),
                                 'HG_FILE_BASE_NODE': hex(base)})
        if r:
            self.ui.warn(_("merging %s failed!\n") % fn)

        os.unlink(b)
        os.unlink(c)
        return r

2017 def verify(self):
2017 def verify(self):
2018 filelinkrevs = {}
2018 filelinkrevs = {}
2019 filenodes = {}
2019 filenodes = {}
2020 changesets = revisions = files = 0
2020 changesets = revisions = files = 0
2021 errors = [0]
2021 errors = [0]
2022 warnings = [0]
2022 warnings = [0]
2023 neededmanifests = {}
2023 neededmanifests = {}
2024
2024
2025 def err(msg):
2025 def err(msg):
2026 self.ui.warn(msg + "\n")
2026 self.ui.warn(msg + "\n")
2027 errors[0] += 1
2027 errors[0] += 1
2028
2028
2029 def warn(msg):
2029 def warn(msg):
2030 self.ui.warn(msg + "\n")
2030 self.ui.warn(msg + "\n")
2031 warnings[0] += 1
2031 warnings[0] += 1
2032
2032
2033 def checksize(obj, name):
2033 def checksize(obj, name):
2034 d = obj.checksize()
2034 d = obj.checksize()
2035 if d[0]:
2035 if d[0]:
2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2037 if d[1]:
2037 if d[1]:
2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2039
2039
2040 def checkversion(obj, name):
2040 def checkversion(obj, name):
2041 if obj.version != revlog.REVLOGV0:
2041 if obj.version != revlog.REVLOGV0:
2042 if not revlogv1:
2042 if not revlogv1:
2043 warn(_("warning: `%s' uses revlog format 1") % name)
2043 warn(_("warning: `%s' uses revlog format 1") % name)
2044 elif revlogv1:
2044 elif revlogv1:
2045 warn(_("warning: `%s' uses revlog format 0") % name)
2045 warn(_("warning: `%s' uses revlog format 0") % name)
2046
2046
2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2049 self.ui.status(_("repository uses revlog format %d\n") %
2049 self.ui.status(_("repository uses revlog format %d\n") %
2050 (revlogv1 and 1 or 0))
2050 (revlogv1 and 1 or 0))
2051
2051
2052 seen = {}
2052 seen = {}
2053 self.ui.status(_("checking changesets\n"))
2053 self.ui.status(_("checking changesets\n"))
2054 checksize(self.changelog, "changelog")
2054 checksize(self.changelog, "changelog")
2055
2055
2056 for i in range(self.changelog.count()):
2056 for i in range(self.changelog.count()):
2057 changesets += 1
2057 changesets += 1
2058 n = self.changelog.node(i)
2058 n = self.changelog.node(i)
2059 l = self.changelog.linkrev(n)
2059 l = self.changelog.linkrev(n)
2060 if l != i:
2060 if l != i:
2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2062 if n in seen:
2062 if n in seen:
2063 err(_("duplicate changeset at revision %d") % i)
2063 err(_("duplicate changeset at revision %d") % i)
2064 seen[n] = 1
2064 seen[n] = 1
2065
2065
2066 for p in self.changelog.parents(n):
2066 for p in self.changelog.parents(n):
2067 if p not in self.changelog.nodemap:
2067 if p not in self.changelog.nodemap:
2068 err(_("changeset %s has unknown parent %s") %
2068 err(_("changeset %s has unknown parent %s") %
2069 (short(n), short(p)))
2069 (short(n), short(p)))
2070 try:
2070 try:
2071 changes = self.changelog.read(n)
2071 changes = self.changelog.read(n)
2072 except KeyboardInterrupt:
2072 except KeyboardInterrupt:
2073 self.ui.warn(_("interrupted"))
2073 self.ui.warn(_("interrupted"))
2074 raise
2074 raise
2075 except Exception, inst:
2075 except Exception, inst:
2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2077 continue
2077 continue
2078
2078
2079 neededmanifests[changes[0]] = n
2079 neededmanifests[changes[0]] = n
2080
2080
2081 for f in changes[3]:
2081 for f in changes[3]:
2082 filelinkrevs.setdefault(f, []).append(i)
2082 filelinkrevs.setdefault(f, []).append(i)
2083
2083
2084 seen = {}
2084 seen = {}
2085 self.ui.status(_("checking manifests\n"))
2085 self.ui.status(_("checking manifests\n"))
2086 checkversion(self.manifest, "manifest")
2086 checkversion(self.manifest, "manifest")
2087 checksize(self.manifest, "manifest")
2087 checksize(self.manifest, "manifest")
2088
2088
2089 for i in range(self.manifest.count()):
2089 for i in range(self.manifest.count()):
2090 n = self.manifest.node(i)
2090 n = self.manifest.node(i)
2091 l = self.manifest.linkrev(n)
2091 l = self.manifest.linkrev(n)
2092
2092
2093 if l < 0 or l >= self.changelog.count():
2093 if l < 0 or l >= self.changelog.count():
2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2095
2095
2096 if n in neededmanifests:
2096 if n in neededmanifests:
2097 del neededmanifests[n]
2097 del neededmanifests[n]
2098
2098
2099 if n in seen:
2099 if n in seen:
2100 err(_("duplicate manifest at revision %d") % i)
2100 err(_("duplicate manifest at revision %d") % i)
2101
2101
2102 seen[n] = 1
2102 seen[n] = 1
2103
2103
2104 for p in self.manifest.parents(n):
2104 for p in self.manifest.parents(n):
2105 if p not in self.manifest.nodemap:
2105 if p not in self.manifest.nodemap:
2106 err(_("manifest %s has unknown parent %s") %
2106 err(_("manifest %s has unknown parent %s") %
2107 (short(n), short(p)))
2107 (short(n), short(p)))
2108
2108
2109 try:
2109 try:
2110 delta = mdiff.patchtext(self.manifest.delta(n))
2110 delta = mdiff.patchtext(self.manifest.delta(n))
2111 except KeyboardInterrupt:
2111 except KeyboardInterrupt:
2112 self.ui.warn(_("interrupted"))
2112 self.ui.warn(_("interrupted"))
2113 raise
2113 raise
2114 except Exception, inst:
2114 except Exception, inst:
2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2116 continue
2116 continue
2117
2117
2118 try:
2118 try:
2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2120 for f, fn in ff:
2120 for f, fn in ff:
2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2122 except (ValueError, TypeError), inst:
2122 except (ValueError, TypeError), inst:
2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2124
2124
2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2126
2126
2127 for m, c in neededmanifests.items():
2127 for m, c in neededmanifests.items():
2128 err(_("Changeset %s refers to unknown manifest %s") %
2128 err(_("Changeset %s refers to unknown manifest %s") %
2129 (short(m), short(c)))
2129 (short(m), short(c)))
2130 del neededmanifests
2130 del neededmanifests
2131
2131
2132 for f in filenodes:
2132 for f in filenodes:
2133 if f not in filelinkrevs:
2133 if f not in filelinkrevs:
2134 err(_("file %s in manifest but not in changesets") % f)
2134 err(_("file %s in manifest but not in changesets") % f)
2135
2135
2136 for f in filelinkrevs:
2136 for f in filelinkrevs:
2137 if f not in filenodes:
2137 if f not in filenodes:
2138 err(_("file %s in changeset but not in manifest") % f)
2138 err(_("file %s in changeset but not in manifest") % f)
2139
2139
2140 self.ui.status(_("checking files\n"))
2140 self.ui.status(_("checking files\n"))
2141 ff = filenodes.keys()
2141 ff = filenodes.keys()
2142 ff.sort()
2142 ff.sort()
2143 for f in ff:
2143 for f in ff:
2144 if f == "/dev/null":
2144 if f == "/dev/null":
2145 continue
2145 continue
2146 files += 1
2146 files += 1
2147 if not f:
2147 if not f:
2148 err(_("file without name in manifest %s") % short(n))
2148 err(_("file without name in manifest %s") % short(n))
2149 continue
2149 continue
2150 fl = self.file(f)
2150 fl = self.file(f)
2151 checkversion(fl, f)
2151 checkversion(fl, f)
2152 checksize(fl, f)
2152 checksize(fl, f)
2153
2153
2154 nodes = {nullid: 1}
2154 nodes = {nullid: 1}
2155 seen = {}
2155 seen = {}
2156 for i in range(fl.count()):
2156 for i in range(fl.count()):
2157 revisions += 1
2157 revisions += 1
2158 n = fl.node(i)
2158 n = fl.node(i)
2159
2159
2160 if n in seen:
2160 if n in seen:
2161 err(_("%s: duplicate revision %d") % (f, i))
2161 err(_("%s: duplicate revision %d") % (f, i))
2162 if n not in filenodes[f]:
2162 if n not in filenodes[f]:
2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2164 else:
2164 else:
2165 del filenodes[f][n]
2165 del filenodes[f][n]
2166
2166
2167 flr = fl.linkrev(n)
2167 flr = fl.linkrev(n)
2168 if flr not in filelinkrevs.get(f, []):
2168 if flr not in filelinkrevs.get(f, []):
2169 err(_("%s:%s points to unexpected changeset %d")
2169 err(_("%s:%s points to unexpected changeset %d")
2170 % (f, short(n), flr))
2170 % (f, short(n), flr))
2171 else:
2171 else:
2172 filelinkrevs[f].remove(flr)
2172 filelinkrevs[f].remove(flr)
2173
2173
2174 # verify contents
2174 # verify contents
2175 try:
2175 try:
2176 t = fl.read(n)
2176 t = fl.read(n)
2177 except KeyboardInterrupt:
2177 except KeyboardInterrupt:
2178 self.ui.warn(_("interrupted"))
2178 self.ui.warn(_("interrupted"))
2179 raise
2179 raise
2180 except Exception, inst:
2180 except Exception, inst:
2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2182
2182
2183 # verify parents
2183 # verify parents
2184 (p1, p2) = fl.parents(n)
2184 (p1, p2) = fl.parents(n)
2185 if p1 not in nodes:
2185 if p1 not in nodes:
2186 err(_("file %s:%s unknown parent 1 %s") %
2186 err(_("file %s:%s unknown parent 1 %s") %
2187 (f, short(n), short(p1)))
2187 (f, short(n), short(p1)))
2188 if p2 not in nodes:
2188 if p2 not in nodes:
2189 err(_("file %s:%s unknown parent 2 %s") %
2189 err(_("file %s:%s unknown parent 2 %s") %
2190 (f, short(n), short(p1)))
2190 (f, short(n), short(p1)))
2191 nodes[n] = 1
2191 nodes[n] = 1
2192
2192
2193 # cross-check
2193 # cross-check
2194 for node in filenodes[f]:
2194 for node in filenodes[f]:
2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2196
2196
2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2198 (files, changesets, revisions))
2198 (files, changesets, revisions))
2199
2199
2200 if warnings[0]:
2200 if warnings[0]:
2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2202 if errors[0]:
2202 if errors[0]:
2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2204 return 1
2204 return 1
2205
2205
2206 def stream_in(self, remote):
2206 def stream_in(self, remote):
2207 fp = remote.stream_out()
2208 resp = int(fp.readline())
2209 if resp != 0:
2210 raise util.Abort(_('operation forbidden by server'))
2207 self.ui.status(_('streaming all changes\n'))
2211 self.ui.status(_('streaming all changes\n'))
2208 fp = remote.stream_out()
2209 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2212 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2210 self.ui.status(_('%d files to transfer, %s of data\n') %
2213 self.ui.status(_('%d files to transfer, %s of data\n') %
2211 (total_files, util.bytecount(total_bytes)))
2214 (total_files, util.bytecount(total_bytes)))
2212 start = time.time()
2215 start = time.time()
2213 for i in xrange(total_files):
2216 for i in xrange(total_files):
2214 name, size = fp.readline().split('\0', 1)
2217 name, size = fp.readline().split('\0', 1)
2215 size = int(size)
2218 size = int(size)
2216 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2219 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2217 ofp = self.opener(name, 'w')
2220 ofp = self.opener(name, 'w')
2218 for chunk in util.filechunkiter(fp, limit=size):
2221 for chunk in util.filechunkiter(fp, limit=size):
2219 ofp.write(chunk)
2222 ofp.write(chunk)
2220 ofp.close()
2223 ofp.close()
2221 elapsed = time.time() - start
2224 elapsed = time.time() - start
2222 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2225 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2223 (util.bytecount(total_bytes), elapsed,
2226 (util.bytecount(total_bytes), elapsed,
2224 util.bytecount(total_bytes / elapsed)))
2227 util.bytecount(total_bytes / elapsed)))
2225 self.reload()
2228 self.reload()
2226 return len(self.heads()) + 1
2229 return len(self.heads()) + 1
2227
2230
2228 def clone(self, remote, heads=[], stream=False):
2231 def clone(self, remote, heads=[], stream=False):
2229 '''clone remote repository.
2232 '''clone remote repository.
2230
2233
2231 keyword arguments:
2234 keyword arguments:
2232 heads: list of revs to clone (forces use of pull)
2235 heads: list of revs to clone (forces use of pull)
2233 pull: force use of pull, even if remote can stream'''
2236 stream: use streaming clone if possible'''
2234
2237
2235 # now, all clients that can stream can read repo formats
2238 # now, all clients that can request uncompressed clones can
2236 # supported by all servers that can stream.
2239 # read repo formats supported by all servers that can serve
2240 # them.
2237
2241
2238 # if revlog format changes, client will have to check version
2242 # if revlog format changes, client will have to check version
2239 # and format flags on "stream" capability, and stream only if
2243 # and format flags on "stream" capability, and use
2240 # compatible.
2244 # uncompressed only if compatible.
2241
2245
2242 if stream and not heads and remote.capable('stream'):
2246 if stream and not heads and remote.capable('stream'):
2243 return self.stream_in(remote)
2247 return self.stream_in(remote)
2244 return self.pull(remote, heads)
2248 return self.pull(remote, heads)
2245
2249
2246 # used to avoid circular references so destructors work
2250 # used to avoid circular references so destructors work
2247 def aftertrans(base):
2251 def aftertrans(base):
2248 p = base
2252 p = base
2249 def a():
2253 def a():
2250 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2254 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2251 util.rename(os.path.join(p, "journal.dirstate"),
2255 util.rename(os.path.join(p, "journal.dirstate"),
2252 os.path.join(p, "undo.dirstate"))
2256 os.path.join(p, "undo.dirstate"))
2253 return a
2257 return a
2254
2258
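The change above alters the streaming-clone handshake: the server now sends a decimal status line first (0 = proceed, nonzero = forbidden), and only then the original "file count, total bytes" header. A minimal sketch of the client side of this framing, as a plain parser over an in-memory buffer (the helper name `read_stream` is illustrative; the real client is `stream_in` above):

```python
import io

def read_stream(fp):
    """Parse the streaming-clone wire format as extended by this change.

    Line 1: decimal status code -- 0 means OK, anything else means the
            server refused (here: server.stream disabled).
    Line 2: "<file_count> <total_bytes>", separated by an ASCII space.
    Then per file: "<name>\\0<size>\\n" followed by exactly <size> raw bytes.
    """
    resp = int(fp.readline())
    if resp != 0:
        # mirrors util.Abort(_('operation forbidden by server'))
        raise IOError('operation forbidden by server')
    total_files, total_bytes = map(int, fp.readline().split(b' ', 1))
    files = {}
    for _ in range(total_files):
        name, size = fp.readline().rstrip(b'\n').split(b'\0', 1)
        files[name.decode()] = fp.read(int(size))
    return files

# round-trip against a hand-built server response: status 0, one 3-byte file
payload = b'0\n1 3\ndata/foo.i\x003\nabc'
assert read_stream(io.BytesIO(payload)) == {'data/foo.i': b'abc'}
```

Because the status line precedes everything else, an old client pointed at a refusing server fails fast instead of misparsing the refusal as a file count.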
@@ -1,171 +1,173 @@
 # sshserver.py - ssh protocol server support for mercurial
 #
 # Copyright 2005 Matt Mackall <mpm@selenic.com>
 #
 # This software may be used and distributed according to the terms
 # of the GNU General Public License, incorporated herein by reference.

 from demandload import demandload
 from i18n import gettext as _
 from node import *
 demandload(globals(), "os streamclone sys tempfile util")

 class sshserver(object):
     def __init__(self, ui, repo):
         self.ui = ui
         self.repo = repo
         self.lock = None
         self.fin = sys.stdin
         self.fout = sys.stdout

         sys.stdout = sys.stderr

         # Prevent insertion/deletion of CRs
         util.set_binary(self.fin)
         util.set_binary(self.fout)

     def getarg(self):
         argline = self.fin.readline()[:-1]
         arg, l = argline.split()
         val = self.fin.read(int(l))
         return arg, val

     def respond(self, v):
         self.fout.write("%d\n" % len(v))
         self.fout.write(v)
         self.fout.flush()

     def serve_forever(self):
         while self.serve_one(): pass
         sys.exit(0)

     def serve_one(self):
         cmd = self.fin.readline()[:-1]
         if cmd:
             impl = getattr(self, 'do_' + cmd, None)
             if impl: impl()
             else: self.respond("")
         return cmd != ''

     def do_heads(self):
         h = self.repo.heads()
         self.respond(" ".join(map(hex, h)) + "\n")

     def do_hello(self):
         '''the hello command returns a set of lines describing various
         interesting things about the server, in an RFC822-like format.
         Currently the only one defined is "capabilities", which
         consists of a line in the form:

         capabilities: space separated list of tokens
         '''

-        r = "capabilities: unbundle stream=%d\n" % (self.repo.revlogversion,)
-        self.respond(r)
+        caps = ['unbundle']
+        if self.ui.configbool('server', 'stream'):
+            caps.append('stream=%d' % self.repo.revlogversion)
+        self.respond("capabilities: %s\n" % (' '.join(caps),))

     def do_lock(self):
         '''DEPRECATED - allowing remote client to lock repo is not safe'''

         self.lock = self.repo.lock()
         self.respond("")

     def do_unlock(self):
         '''DEPRECATED'''

         if self.lock:
             self.lock.release()
         self.lock = None
         self.respond("")

     def do_branches(self):
         arg, nodes = self.getarg()
         nodes = map(bin, nodes.split(" "))
         r = []
         for b in self.repo.branches(nodes):
             r.append(" ".join(map(hex, b)) + "\n")
         self.respond("".join(r))

     def do_between(self):
         arg, pairs = self.getarg()
         pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
         r = []
         for b in self.repo.between(pairs):
             r.append(" ".join(map(hex, b)) + "\n")
         self.respond("".join(r))

     def do_changegroup(self):
         nodes = []
         arg, roots = self.getarg()
         nodes = map(bin, roots.split(" "))

         cg = self.repo.changegroup(nodes, 'serve')
         while True:
             d = cg.read(4096)
             if not d:
                 break
             self.fout.write(d)

         self.fout.flush()

     def do_addchangegroup(self):
         '''DEPRECATED'''

         if not self.lock:
             self.respond("not locked")
             return

         self.respond("")
         r = self.repo.addchangegroup(self.fin, 'serve')
         self.respond(str(r))

     def do_unbundle(self):
         their_heads = self.getarg()[1].split()

         def check_heads():
             heads = map(hex, self.repo.heads())
             return their_heads == [hex('force')] or their_heads == heads

         # fail early if possible
         if not check_heads():
             self.respond(_('unsynced changes'))
             return

         self.respond('')

         # write bundle data to temporary file because it can be big

         try:
             fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
             fp = os.fdopen(fd, 'wb+')

             count = int(self.fin.readline())
             while count:
                 fp.write(self.fin.read(count))
                 count = int(self.fin.readline())

             was_locked = self.lock is not None
             if not was_locked:
                 self.lock = self.repo.lock()
             try:
                 if not check_heads():
                     # someone else committed/pushed/unbundled while we
                     # were transferring data
                     self.respond(_('unsynced changes'))
                     return
                 self.respond('')

                 # push can proceed

                 fp.seek(0)
                 r = self.repo.addchangegroup(fp, 'serve')
                 self.respond(str(r))
             finally:
                 if not was_locked:
                     self.lock.release()
                     self.lock = None
         finally:
             fp.close()
             os.unlink(tempname)

     def do_stream_out(self):
         streamclone.stream_out(self.repo, self.fout)
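The `do_hello` change above makes the `stream` token conditional: the capability line always carries `unbundle`, and adds `stream=<revlogversion>` only when `server.stream` is enabled. A small sketch of both sides of that negotiation (the function names `build_caps` and `capable` are illustrative, not the real hg API; the client-side token match with an optional `=value` suffix is an assumption about how clients consume this line):

```python
def build_caps(stream_enabled, revlogversion=0):
    # mirror of do_hello: 'unbundle' always, 'stream=<ver>' only if enabled
    caps = ['unbundle']
    if stream_enabled:
        caps.append('stream=%d' % revlogversion)
    return 'capabilities: %s\n' % ' '.join(caps)

def capable(hello_line, name):
    # client side: split on spaces, match a bare token or 'name=value'
    _, _, value = hello_line.partition(':')
    for tok in value.split():
        if tok == name or tok.startswith(name + '='):
            return tok.partition('=')[2] or True
    return False

assert build_caps(False) == 'capabilities: unbundle\n'
assert capable(build_caps(True, 1), 'stream') == '1'
assert capable(build_caps(False), 'stream') is False
```

Encoding the revlog version as the capability's value is what lets a future client decline a streaming clone from a server whose repo format it cannot read, as the `clone` comment in localrepo.py notes.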
@@ -1,82 +1,89 @@
 # streamclone.py - streaming clone server support for mercurial
 #
 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
 #
 # This software may be used and distributed according to the terms
 # of the GNU General Public License, incorporated herein by reference.

 from demandload import demandload
 from i18n import gettext as _
 demandload(globals(), "os stat util")

 # if server supports streaming clone, it advertises "stream"
 # capability with value that is version+flags of repo it is serving.
 # client only streams if it can read that repo format.

 def walkrepo(root):
     '''iterate over metadata files in repository.
     walk in natural (sorted) order.
     yields 2-tuples: name of .d or .i file, size of file.'''

     strip_count = len(root) + len(os.sep)
     def walk(path, recurse):
         ents = os.listdir(path)
         ents.sort()
         for e in ents:
             pe = os.path.join(path, e)
             st = os.lstat(pe)
             if stat.S_ISDIR(st.st_mode):
                 if recurse:
                     for x in walk(pe, True):
                         yield x
             else:
                 if not stat.S_ISREG(st.st_mode) or len(e) < 2:
                     continue
                 sfx = e[-2:]
                 if sfx in ('.d', '.i'):
                     yield pe[strip_count:], st.st_size
     # write file data first
     for x in walk(os.path.join(root, 'data'), True):
         yield x
     # write manifest before changelog
     meta = list(walk(root, False))
     meta.sort(reverse=True)
     for x in meta:
         yield x

 # stream file format is simple.
 #
 # server writes out line that says how many files, how many total
 # bytes. separator is ascii space, byte counts are strings.
 #
 # then for each file:
 #
 # server writes out line that says file name, how many bytes in
 # file. separator is ascii nul, byte count is string.
 #
 # server writes out raw file data.

 def stream_out(repo, fileobj):
     '''stream out all metadata files in repository.
     writes to file-like object, must support write() and optional flush().'''
+
+    if not repo.ui.configbool('server', 'stream'):
+        fileobj.write('1\n')
+        return
+
+    fileobj.write('0\n')
+
     # get consistent snapshot of repo. lock during scan so lock not
     # needed while we stream, and commits can happen.
     lock = repo.lock()
     repo.ui.debug('scanning\n')
     entries = []
     total_bytes = 0
     for name, size in walkrepo(repo.path):
         entries.append((name, size))
         total_bytes += size
     lock.release()

     repo.ui.debug('%d files, %d bytes to transfer\n' %
                   (len(entries), total_bytes))
     fileobj.write('%d %d\n' % (len(entries), total_bytes))
     for name, size in entries:
         repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
         fileobj.write('%s\0%d\n' % (name, size))
         for chunk in util.filechunkiter(repo.opener(name), limit=size):
             fileobj.write(chunk)
     flush = getattr(fileobj, 'flush', None)
     if flush: flush()
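The server-side gate added to `stream_out` above reduces to: write `'1\n'` and stop when `server.stream` is off, else write `'0\n'` followed by the framing described in the comments. A standalone sketch of that writer over an in-memory buffer (hypothetical simplification: `entries` is a list of `(name, bytes)` pairs and `allowed` replaces the `configbool` lookup, so no repo or locking is involved):

```python
import io

def stream_out(entries, allowed, fileobj):
    """Sketch of the gated writer: entries is a list of (name, data) pairs."""
    if not allowed:
        fileobj.write(b'1\n')   # refusal status; client aborts
        return
    fileobj.write(b'0\n')       # OK status, then the original framing
    total = sum(len(data) for _, data in entries)
    fileobj.write(b'%d %d\n' % (len(entries), total))
    for name, data in entries:
        fileobj.write(b'%s\0%d\n' % (name, len(data)))  # name NUL size
        fileobj.write(data)                             # raw file bytes

buf = io.BytesIO()
stream_out([(b'00manifest.i', b'xyz')], True, buf)
assert buf.getvalue() == b'0\n1 3\n00manifest.i\x003\nxyz'

buf = io.BytesIO()
stream_out([], False, buf)
assert buf.getvalue() == b'1\n'
```

Writing the refusal as an in-band status line keeps the wire protocol backward compatible on the transport level: the server always answers, and only the first line decides whether a payload follows.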
@@ -1,25 +1,25 @@
 #!/bin/sh

-mkdir test
+hg init test
 cd test
 echo foo>foo
-hg init
-hg addremove
-hg commit -m 1
-hg verify
-hg serve -p 20059 -d --pid-file=hg.pid
-cat hg.pid >> $DAEMON_PIDS
+hg commit -A -d '0 0' -m 1
+hg --config server.stream=True serve -p 20059 -d --pid-file=hg1.pid
+cat hg1.pid >> $DAEMON_PIDS
+hg serve -p 20060 -d --pid-file=hg2.pid
+cat hg2.pid >> $DAEMON_PIDS
 cd ..

 echo % clone via stream
-http_proxy= hg clone --stream http://localhost:20059/ copy 2>&1 | \
+http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
   sed -e 's/[0-9][0-9.]*/XXX/g'
 cd copy
 hg verify

-cd ..
+echo % try to clone via stream, should use pull instead
+http_proxy= hg clone --uncompressed http://localhost:20060/ copy2

 echo % clone via pull
 http_proxy= hg clone http://localhost:20059/ copy-pull
 cd copy-pull
 hg verify
@@ -1,41 +1,41 @@
 #!/bin/sh
 
 hg init a
 cd a
 echo a > a
 hg ci -Ama -d '1123456789 0'
-hg serve -p 20059 -d --pid-file=hg.pid
+hg --config server.stream=True serve -p 20059 -d --pid-file=hg.pid
 cat hg.pid >> $DAEMON_PIDS
 
 cd ..
 ("$TESTDIR/tinyproxy.py" 20060 localhost >proxy.log 2>&1 </dev/null &
 echo $! > proxy.pid)
 cat proxy.pid >> $DAEMON_PIDS
 sleep 2
 
 echo %% url for proxy, stream
-http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --stream http://localhost:20059/ b | \
+http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --uncompressed http://localhost:20059/ b | \
 sed -e 's/[0-9][0-9.]*/XXX/g'
 cd b
 hg verify
 cd ..
 
 echo %% url for proxy, pull
 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone http://localhost:20059/ b-pull
 cd b-pull
 hg verify
 cd ..
 
 echo %% host:port for proxy
 http_proxy=localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ c
 
 echo %% proxy url with user name and password
 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ d
 
 echo %% url with user name and password
 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://user:passwd@localhost:20059/ e
 
 echo %% bad host:port for proxy
 http_proxy=localhost:20061 hg clone --config http_proxy.always=True http://localhost:20059/ f
 
 exit 0
@@ -1,29 +1,30 @@
-(the addremove command is deprecated; use add and remove --after instead)
 adding foo
-checking changesets
-checking manifests
-crosschecking files in changesets and manifests
-checking files
-1 files, 1 changesets, 1 total revisions
 % clone via stream
 streaming all changes
 XXX files to transfer, XXX bytes of data
 transferred XXX bytes in XXX seconds (XXX KB/sec)
 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
 checking changesets
 checking manifests
 crosschecking files in changesets and manifests
 checking files
 1 files, 1 changesets, 1 total revisions
+% try to clone via stream, should use pull instead
+requesting all changes
+adding changesets
+adding manifests
+adding file changes
+added 1 changesets with 1 changes to 1 files
+1 files updated, 0 files merged, 0 files removed, 0 files unresolved
 % clone via pull
 requesting all changes
 adding changesets
 adding manifests
 adding file changes
 added 1 changesets with 1 changes to 1 files
 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
 checking changesets
 checking manifests
 crosschecking files in changesets and manifests
 checking files
 1 files, 1 changesets, 1 total revisions
@@ -1,90 +1,92 @@
 #!/bin/sh
 
 # This test tries to exercise the ssh functionality with a dummy script
 
 cat <<'EOF' > dummyssh
 #!/bin/sh
 # this attempts to deal with relative pathnames
 cd `dirname $0`
 
 # check for proper args
 if [ $1 != "user@dummy" ] ; then
 	exit -1
 fi
 
 # check that we're in the right directory
 if [ ! -x dummyssh ] ; then
 	exit -1
 fi
 
 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
 $2
 EOF
 chmod +x dummyssh
 
 echo "# creating 'remote'"
 hg init remote
 cd remote
 echo this > foo
 hg ci -A -m "init" -d "1000000 0" foo
+echo '[server]' > .hg/hgrc
+echo 'stream = True' >> .hg/hgrc
 
 cd ..
 
 echo "# clone remote via stream"
-hg clone -e ./dummyssh --stream ssh://user@dummy/remote local-stream 2>&1 | \
+hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
 sed -e 's/[0-9][0-9.]*/XXX/g'
 cd local-stream
 hg verify
 cd ..
 
 echo "# clone remote via pull"
 hg clone -e ./dummyssh ssh://user@dummy/remote local
 
 echo "# verify"
 cd local
 hg verify
 
 echo "# empty default pull"
 hg paths
 hg pull -e ../dummyssh
 
 echo "# local change"
 echo bleah > foo
 hg ci -m "add" -d "1000000 0"
 
 echo "# updating rc"
 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
 echo "[ui]" >> .hg/hgrc
 echo "ssh = ../dummyssh" >> .hg/hgrc
 
 echo "# find outgoing"
 hg out ssh://user@dummy/remote
 
 echo "# find incoming on the remote side"
 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
 
 echo "# push"
 hg push
 
 cd ../remote
 
 echo "# check remote tip"
 hg tip
 hg verify
 hg cat foo
 
 echo z > z
 hg ci -A -m z -d '1000001 0' z
 
 cd ../local
 echo r > r
 hg ci -A -m z -d '1000002 0' r
 
 echo "# push should fail"
 hg push
 
 echo "# push should succeed"
 hg push -f
 
 cd ..
 cat dummylog
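
The tests above exercise the behaviour change in this commit: stream cloning becomes opt-in on the server side, so a client's `clone --uncompressed` request against a server that has not enabled it quietly falls back to a pull-based clone (the new "% try to clone via stream, should use pull instead" case checks exactly this). To keep serving stream clones, a server operator enables the option either per-invocation, as the updated tests do with `hg --config server.stream=True serve ...`, or persistently in the served repository's hgrc; a minimal fragment, taken from the test-ssh hunk:

```
[server]
stream = True
```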