hooks: add url to changegroup, incoming, prechangegroup, pretxnchangegroup hooks...
Vadim Gelfer
r2673:109a22f5 default

@@ -1,467 +1,470 @@
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, one of these is chosen exclusively according
  to the definition of the HOME environment variable.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

    [spam]
    eggs=ham
    green=
       eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

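For illustration only (this example and its key names are the editor's,
assuming the ConfigParser-style "%(name)s" format-string syntax):

    [paths]
    base = http://hg.example.com/repos
    project = %(base)s/project
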
Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, use as the name of a program to run
    that acts like sendmail (takes "-f" option for sender, list of
    recipients on command line, message on stdin). Normally, setting
    this to "sendmail" or "/usr/sbin/sendmail" is enough to use
    sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give added
  useful information. For each hook below, the environment variables
  it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE. URL from
    which changes came is in $HG_URL.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE. URL that was the source of changes is in $HG_URL.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail. URL from which
    changes will come is in $HG_URL.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull,
    push (outbound) and bundle commands, but is not an effective
    protection there, since you can just copy files instead. Source
    of operation is in $HG_SOURCE. If "serve", operation is happening
    on behalf of remote ssh or http repository. If "push", "pull" or
    "bundle", operation is happening on behalf of repository on same
    system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail. URL that was the source of
    changes is in $HG_URL.
  pretxncommit;;
    Run after a changeset has been created but the transaction not yet
    committed. Changeset is visible to hook program. This lets you
    validate commit message and changes. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the transaction to
    be rolled back. ID of changeset is in $HG_NODE. Parent changeset
    IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
    failed (e.g. because conflicts not resolved), $HG_ERROR=1.

  Note: In earlier releases, the names of hook environment variables
  did not have a "HG_" prefix. The old unprefixed names are no longer
  provided in the environment.

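  As an illustration (editor's sketch; the hook suffix and log path are
  made up), a shell hook reads these variables directly from its
  environment, including the newly added $HG_URL:

    [hooks]
    # record where each incoming changegroup came from
    changegroup.log = echo "$HG_NODE came from $HG_URL" >> /tmp/hg-incoming.log
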
  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  A Python hook must return a "true" value to succeed. Returning a
  "false" value or raising an exception is treated as failure of the
  hook.

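  For example, a minimal Python hook matching the description above
  might look like this (module and function names are the editor's own,
  illustrative only):

    # in pythonhooks.py, somewhere on Python's search path
    def incominghook(ui, repo, hooktype, node=None, url=None, **kwargs):
        # "node" and "url" arrive as lower-case keyword arguments,
        # without the "HG_" prefix used for shell hooks
        ui.status("incoming changeset %s from %s\n" % (node, url))
        return True

  and would be enabled with "incoming.notify = python:pythonhooks.incominghook"
  in the [hooks] section.
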
http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.

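  An illustrative [http_proxy] entry (host names and credentials below
  are made up by the editor):

    [http_proxy]
    host = myproxy:8000
    no = localhost, hg.internal.example.com
    user = proxyuser
    passwd = secret
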
smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Optional. Host name of mail server. Default: "mail".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. Hostname that the sender can use to identify itself
    to the MTA.

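  An illustrative [smtp] entry (host and credentials are made up by
  the editor):

    [smtp]
    host = smtp.example.com
    port = 25
    tls = true
    username = hguser
    password = secret
    local_hostname = workstation.example.com
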
paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is set to repository from which the current repository
    was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.

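  An illustrative [paths] entry (the locations are made up by the
  editor):

    [paths]
    default = http://hg.example.com/project
    default-push = ssh://hg.example.com/project
    stable = /home/joe/repos/project-stable
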
server::
  Controls generic server settings.
  uncompressed;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  ignore;;
    A file to read per-user ignore patterns from. This file should be in
    the same format as a repository-wide .hgignore file. This option
    supports hook syntax, so if you want to specify multiple ignore
    files, you can do so by setting something like
    "ignore.other = ~/.hgignore2". For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Whether to allow prompting the user. True or False. Default is True.
  logtemplate;;
    Template string for commands that print changesets.
  style;;
    Name of style to use for command output.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is 'hg'.
  ssh;;
    Command to use for SSH connections. Default is 'ssh'.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname, unless
    username is set to an empty string, which enforces specifying the
    username manually.
  verbose;;
    Increase the amount of output printed. True or False. Default is False.

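  An illustrative [ui] entry drawn from the keys above:

    [ui]
    username = Fred Widget <fred@example.com>
    editor = vi
    merge = hgmerge
    verbose = True
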
web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allow_archive;;
    List of archive formats (bz2, gz, zip) allowed for downloading.
    Default is empty.
  allowbz2;;
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
    Default is false.
  allowgz;;
    (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
    Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allow_push;;
    Whether to allow pushing to the repository. If empty or not set,
    push is not allowed. If the special value "*", any remote user
    can push, including unauthenticated users. Otherwise, the remote
    user must have been authenticated, and the authenticated user name
    must be present in this list (separated by whitespace or ",").
    The contents of the allow_push list are examined after the
    deny_push list.
  allowzip;;
    (DEPRECATED) Whether to allow .zip downloading of repo revisions.
    Default is false. This feature creates temporary files.
  baseurl;;
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct URLs.
    Example: "http://hgserver/repos/"
  contact;;
    Name or email address of the person in charge of the repository.
    Default is "unknown".
  deny_push;;
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value "*", all remote users
    are denied push. Otherwise, unauthenticated users are all denied,
    and any authenticated user name present in this list (separated by
    whitespace or ",") is also denied. The contents of the deny_push
    list are examined before the allow_push list.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  push_ssl;;
    Whether to require that inbound pushes be transported over SSL to
    prevent password sniffing. Default is true.
  stripes;;
    How many lines a "zebra stripe" should span in multiline output.
    Default is 1; set to 0 to disable.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.

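  An illustrative [web] entry combining the push-control keys described
  above (the user names are made up by the editor):

    [web]
    contact = Joseph User <joe.user@example.com>
    description = My project
    allow_push = alice, bob
    deny_push = mallory
    push_ssl = true
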
AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1), hgignore(5)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005, 2006 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).

@@ -1,232 +1,239 @@
1 """
1 """
2 bundlerepo.py - repository class for viewing uncompressed bundles
2 bundlerepo.py - repository class for viewing uncompressed bundles
3
3
4 This provides a read-only repository interface to bundles as if
4 This provides a read-only repository interface to bundles as if
5 they were part of the actual repository.
5 they were part of the actual repository.
6
6
7 Copyright 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
7 Copyright 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
8
8
9 This software may be used and distributed according to the terms
9 This software may be used and distributed according to the terms
10 of the GNU General Public License, incorporated herein by reference.
10 of the GNU General Public License, incorporated herein by reference.
11 """
11 """
12
12
13 from node import *
13 from node import *
14 from i18n import gettext as _
14 from i18n import gettext as _
15 from demandload import demandload
15 from demandload import demandload
16 demandload(globals(), "changegroup util os struct bz2 tempfile")
16 demandload(globals(), "changegroup util os struct bz2 tempfile")
17
17
18 import localrepo, changelog, manifest, filelog, revlog
18 import localrepo, changelog, manifest, filelog, revlog
19
19
20 class bundlerevlog(revlog.revlog):
20 class bundlerevlog(revlog.revlog):
21 def __init__(self, opener, indexfile, datafile, bundlefile,
21 def __init__(self, opener, indexfile, datafile, bundlefile,
22 linkmapper=None):
22 linkmapper=None):
23 # How it works:
23 # How it works:
24 # to retrieve a revision, we need to know the offset of
24 # to retrieve a revision, we need to know the offset of
25 # the revision in the bundlefile (an opened file).
25 # the revision in the bundlefile (an opened file).
26 #
26 #
        # We store this offset in the index (start). To differentiate a
        # rev in the bundle from a rev in the revlog, we check
        # len(index[r]). If the tuple is bigger than 7, it is a bundle
        # (it is bigger since we store the node to which the delta is)
        #
        revlog.revlog.__init__(self, opener, indexfile, datafile)
        self.bundlefile = bundlefile
        self.basemap = {}
        def chunkpositer():
            for chunk in changegroup.chunkiter(bundlefile):
                pos = bundlefile.tell()
                yield chunk, pos - len(chunk)
        n = self.count()
        prev = None
        for chunk, start in chunkpositer():
            size = len(chunk)
            if size < 80:
                raise util.Abort("invalid changegroup")
            start += 80
            size -= 80
            node, p1, p2, cs = struct.unpack("20s20s20s20s", chunk[:80])
            if node in self.nodemap:
                prev = node
                continue
            for p in (p1, p2):
                if not p in self.nodemap:
                    raise revlog.RevlogError(_("unknown parent %s") % short(p1))
            if linkmapper is None:
                link = n
            else:
                link = linkmapper(cs)

            if not prev:
                prev = p1
            # start, size, base is not used, link, p1, p2, delta ref
            if self.version == 0:
                e = (start, size, None, link, p1, p2, node)
            else:
                e = (self.offset_type(start, 0), size, -1, None, link,
                     self.rev(p1), self.rev(p2), node)
            self.basemap[n] = prev
            self.index.append(e)
            self.nodemap[node] = n
            prev = node
            n += 1

    def bundle(self, rev):
        """is rev from the bundle"""
        if rev < 0:
            return False
        return rev in self.basemap
    def bundlebase(self, rev): return self.basemap[rev]
    def chunk(self, rev, df=None, cachelen=4096):
        # Warning: in case of bundle, the diff is against bundlebase,
        # not against rev - 1
        # XXX: could use some caching
        if not self.bundle(rev):
            return revlog.revlog.chunk(self, rev, df, cachelen)
        self.bundlefile.seek(self.start(rev))
        return self.bundlefile.read(self.length(rev))

    def revdiff(self, rev1, rev2):
        """return or calculate a delta between two revisions"""
        if self.bundle(rev1) and self.bundle(rev2):
            # hot path for bundle
            revb = self.rev(self.bundlebase(rev2))
            if revb == rev1:
                return self.chunk(rev2)
        elif not self.bundle(rev1) and not self.bundle(rev2):
            return revlog.revlog.chunk(self, rev1, rev2)

        return self.diff(self.revision(self.node(rev1)),
                         self.revision(self.node(rev2)))

    def revision(self, node):
102 """return an uncompressed revision of a given"""
102 """return an uncompressed revision of a given"""
        if node == nullid: return ""

        text = None
        chain = []
        iter_node = node
        rev = self.rev(iter_node)
        # reconstruct the revision if it is from a changegroup
        while self.bundle(rev):
            if self.cache and self.cache[0] == iter_node:
                text = self.cache[2]
                break
            chain.append(rev)
            iter_node = self.bundlebase(rev)
            rev = self.rev(iter_node)
        if text is None:
            text = revlog.revlog.revision(self, iter_node)

        while chain:
            delta = self.chunk(chain.pop())
            text = self.patches(text, [delta])

        p1, p2 = self.parents(node)
        if node != revlog.hash(text, p1, p2):
            raise revlog.RevlogError(_("integrity check failed on %s:%d")
                                     % (self.datafile, self.rev(node)))

        self.cache = (node, self.rev(node), text)
        return text

    def addrevision(self, text, transaction, link, p1=None, p2=None, d=None):
        raise NotImplementedError
    def addgroup(self, revs, linkmapper, transaction, unique=0):
        raise NotImplementedError
    def strip(self, rev, minlink):
        raise NotImplementedError
    def checksize(self):
        raise NotImplementedError

class bundlechangelog(bundlerevlog, changelog.changelog):
    def __init__(self, opener, bundlefile):
        changelog.changelog.__init__(self, opener)
        bundlerevlog.__init__(self, opener, "00changelog.i", "00changelog.d",
                              bundlefile)

class bundlemanifest(bundlerevlog, manifest.manifest):
    def __init__(self, opener, bundlefile, linkmapper):
        manifest.manifest.__init__(self, opener)
        bundlerevlog.__init__(self, opener, self.indexfile, self.datafile,
                              bundlefile, linkmapper)

class bundlefilelog(bundlerevlog, filelog.filelog):
    def __init__(self, opener, path, bundlefile, linkmapper):
        filelog.filelog.__init__(self, opener, path)
        bundlerevlog.__init__(self, opener, self.indexfile, self.datafile,
                              bundlefile, linkmapper)

class bundlerepository(localrepo.localrepository):
    def __init__(self, ui, path, bundlename):
        localrepo.localrepository.__init__(self, ui, path)

        self._url = 'bundle:' + bundlename
        if path: self._url += '+' + path

        self.tempfile = None
        self.bundlefile = open(bundlename, "rb")
        header = self.bundlefile.read(6)
        if not header.startswith("HG"):
            raise util.Abort(_("%s: not a Mercurial bundle file") % bundlename)
        elif not header.startswith("HG10"):
            raise util.Abort(_("%s: unknown bundle version") % bundlename)
        elif header == "HG10BZ":
            fdtemp, temp = tempfile.mkstemp(prefix="hg-bundle-",
                                            suffix=".hg10un", dir=self.path)
            self.tempfile = temp
            fptemp = os.fdopen(fdtemp, 'wb')
            def generator(f):
                zd = bz2.BZ2Decompressor()
                zd.decompress("BZ")
                for chunk in f:
                    yield zd.decompress(chunk)
            gen = generator(util.filechunkiter(self.bundlefile, 4096))

            try:
                fptemp.write("HG10UN")
                for chunk in gen:
                    fptemp.write(chunk)
            finally:
                fptemp.close()
                self.bundlefile.close()

            self.bundlefile = open(self.tempfile, "rb")
            # seek right after the header
            self.bundlefile.seek(6)
        elif header == "HG10UN":
            # nothing to do
            pass
        else:
            raise util.Abort(_("%s: unknown bundle compression type")
                             % bundlename)
        self.changelog = bundlechangelog(self.opener, self.bundlefile)
        self.manifest = bundlemanifest(self.opener, self.bundlefile,
                                       self.changelog.rev)
        # dict with the mapping 'filename' -> position in the bundle
        self.bundlefilespos = {}
        while 1:
            f = changegroup.getchunk(self.bundlefile)
            if not f:
                break
            self.bundlefilespos[f] = self.bundlefile.tell()
            for c in changegroup.chunkiter(self.bundlefile):
                pass

    def url(self):
        return self._url

    def dev(self):
        return -1

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        if f in self.bundlefilespos:
            self.bundlefile.seek(self.bundlefilespos[f])
            return bundlefilelog(self.opener, f, self.bundlefile,
                                 self.changelog.rev)
        else:
            return filelog.filelog(self.opener, f)

    def close(self):
        """Close assigned bundle file immediately."""
        self.bundlefile.close()

    def __del__(self):
        if not self.bundlefile.closed:
            self.bundlefile.close()
        if self.tempfile is not None:
            os.unlink(self.tempfile)
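
The constructor above shows how the new _url attribute is composed and then
exposed through url(). As a rough, editor-written sketch (not part of the
changeset; it assumes a Python 2 interpreter that can import the mercurial
package of this era, an existing bundle file named "changes.hg", and that it
is run from inside a repository's working directory, since an empty path makes
localrepository search for one):

    from mercurial import ui, bundlerepo

    repo = bundlerepo.bundlerepository(ui.ui(), '', 'changes.hg')
    print repo.url()      # prints "bundle:changes.hg"
    repo.close()
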
@@ -1,3563 +1,3564 @@
# commands.py - command processing for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from demandload import demandload
from node import *
from i18n import gettext as _
demandload(globals(), "os re sys signal shutil imp urllib pdb")
demandload(globals(), "fancyopts ui hg util lock revlog templater bundlerepo")
demandload(globals(), "fnmatch mdiff random signal tempfile time")
demandload(globals(), "traceback errno socket version struct atexit sets bz2")
demandload(globals(), "archival cStringIO changegroup email.Parser")
demandload(globals(), "hgweb.server sshserver")

class UnknownCommand(Exception):
    """Exception raised if command is not in the command table."""
class AmbiguousCommand(Exception):
    """Exception raised if command shortcut matches more than one command."""

def bail_if_changed(repo):
    modified, added, removed, deleted, unknown = repo.changes()
    if modified or added or removed or deleted:
        raise util.Abort(_("outstanding uncommitted changes"))

def filterfiles(filters, files):
    l = [x for x in files if x in filters]

    for t in filters:
        if t and t[-1] != "/":
            t += "/"
        l += [x for x in files if x.startswith(t)]
    return l

def relpath(repo, args):
    cwd = repo.getcwd()
    if cwd:
        return [util.normpath(os.path.join(cwd, x)) for x in args]
    return args

def matchpats(repo, pats=[], opts={}, head=''):
    cwd = repo.getcwd()
    if not pats and cwd:
        opts['include'] = [os.path.join(cwd, i) for i in opts['include']]
        opts['exclude'] = [os.path.join(cwd, x) for x in opts['exclude']]
        cwd = ''
    return util.cmdmatcher(repo.root, cwd, pats or ['.'], opts.get('include'),
                           opts.get('exclude'), head)

def makewalk(repo, pats, opts, node=None, head='', badmatch=None):
    files, matchfn, anypats = matchpats(repo, pats, opts, head)
    exact = dict(zip(files, files))
    def walk():
        for src, fn in repo.walk(node=node, files=files, match=matchfn,
                                 badmatch=badmatch):
            yield src, fn, util.pathto(repo.getcwd(), fn), fn in exact
    return files, matchfn, walk()

def walk(repo, pats, opts, node=None, head='', badmatch=None):
    files, matchfn, results = makewalk(repo, pats, opts, node, head, badmatch)
    for r in results:
        yield r

def walkchangerevs(ui, repo, pats, opts):
    '''Iterate over files and the revs they changed in.

69 Callers most commonly need to iterate backwards over the history
69 Callers most commonly need to iterate backwards over the history
70 it is interested in. Doing so has awful (quadratic-looking)
70 it is interested in. Doing so has awful (quadratic-looking)
71 performance, so we use iterators in a "windowed" way.
71 performance, so we use iterators in a "windowed" way.
72
72
73 We walk a window of revisions in the desired order. Within the
73 We walk a window of revisions in the desired order. Within the
74 window, we first walk forwards to gather data, then in the desired
74 window, we first walk forwards to gather data, then in the desired
75 order (usually backwards) to display it.
75 order (usually backwards) to display it.
76
76
77 This function returns an (iterator, getchange, matchfn) tuple. The
77 This function returns an (iterator, getchange, matchfn) tuple. The
78 getchange function returns the changelog entry for a numeric
78 getchange function returns the changelog entry for a numeric
79 revision. The iterator yields 3-tuples. They will be of one of
79 revision. The iterator yields 3-tuples. They will be of one of
80 the following forms:
80 the following forms:
81
81
82 "window", incrementing, lastrev: stepping through a window,
82 "window", incrementing, lastrev: stepping through a window,
83 positive if walking forwards through revs, last rev in the
83 positive if walking forwards through revs, last rev in the
84 sequence iterated over - use to reset state for the current window
84 sequence iterated over - use to reset state for the current window
85
85
86 "add", rev, fns: out-of-order traversal of the given file names
86 "add", rev, fns: out-of-order traversal of the given file names
87 fns, which changed during revision rev - use to gather data for
87 fns, which changed during revision rev - use to gather data for
88 possible display
88 possible display
89
89
90 "iter", rev, None: in-order traversal of the revs earlier iterated
90 "iter", rev, None: in-order traversal of the revs earlier iterated
91 over with "add" - use to display data'''
91 over with "add" - use to display data'''
92
92
    def increasing_windows(start, end, windowsize=8, sizelimit=512):
        if start < end:
            while start < end:
                yield start, min(windowsize, end-start)
                start += windowsize
                if windowsize < sizelimit:
                    windowsize *= 2
        else:
            while start > end:
                yield start, min(windowsize, start-end-1)
                start -= windowsize
                if windowsize < sizelimit:
                    windowsize *= 2


    files, matchfn, anypats = matchpats(repo, pats, opts)

    if repo.changelog.count() == 0:
        return [], False, matchfn

    revs = map(int, revrange(ui, repo, opts['rev'] or ['tip:0']))
    wanted = {}
    slowpath = anypats
    fncache = {}

    chcache = {}
    def getchange(rev):
        ch = chcache.get(rev)
        if ch is None:
            chcache[rev] = ch = repo.changelog.read(repo.lookup(str(rev)))
        return ch

    if not slowpath and not files:
        # No files, no patterns.  Display all revs.
        wanted = dict(zip(revs, revs))
    if not slowpath:
        # Only files, no patterns.  Check the history of each file.
        def filerevgen(filelog):
            cl_count = repo.changelog.count()
            for i, window in increasing_windows(filelog.count()-1, -1):
                revs = []
                for j in xrange(i - window, i + 1):
                    revs.append(filelog.linkrev(filelog.node(j)))
                revs.reverse()
                for rev in revs:
                    # only yield rev for which we have the changelog, it can
                    # happen while doing "hg log" during a pull or commit
                    if rev < cl_count:
                        yield rev

        minrev, maxrev = min(revs), max(revs)
        for file_ in files:
            filelog = repo.file(file_)
            # A zero count may be a directory or deleted file, so
            # try to find matching entries on the slow path.
            if filelog.count() == 0:
                slowpath = True
                break
            for rev in filerevgen(filelog):
                if rev <= maxrev:
                    if rev < minrev:
                        break
                    fncache.setdefault(rev, [])
                    fncache[rev].append(file_)
                    wanted[rev] = 1
    if slowpath:
        # The slow path checks files modified in every changeset.
        def changerevgen():
            for i, window in increasing_windows(repo.changelog.count()-1, -1):
                for j in xrange(i - window, i + 1):
                    yield j, getchange(j)[3]

        for rev, changefiles in changerevgen():
            matches = filter(matchfn, changefiles)
            if matches:
                fncache[rev] = matches
                wanted[rev] = 1

    def iterate():
        for i, window in increasing_windows(0, len(revs)):
            yield 'window', revs[0] < revs[-1], revs[-1]
            nrevs = [rev for rev in revs[i:i+window]
                     if rev in wanted]
            srevs = list(nrevs)
            srevs.sort()
            for rev in srevs:
                fns = fncache.get(rev) or filter(matchfn, getchange(rev)[3])
                yield 'add', rev, fns
            for rev in nrevs:
                yield 'iter', rev, None
    return iterate(), getchange, matchfn

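# Illustrative sketch (not part of the original source): how a caller such as
# "hg log" typically consumes the walkchangerevs() protocol described in the
# docstring above.  The display() helper named here is hypothetical.
#
#   changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
#   for st, rev, fns in changeiter:
#       if st == 'window':
#           gathered = {}                 # reset per-window state
#       elif st == 'add':
#           gathered[rev] = fns           # out-of-order pass: collect data
#       elif st == 'iter':
#           display(rev, gathered[rev])   # in-order pass: show the revision
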
revrangesep = ':'

def revfix(repo, val, defval):
    '''turn user-level id of changeset into rev number.
    user-level id can be tag, changeset, rev number, or negative rev
    number relative to number of revs (-1 is tip, etc).'''
    if not val:
        return defval
    try:
        num = int(val)
        if str(num) != val:
            raise ValueError
        if num < 0:
            num += repo.changelog.count()
        if num < 0:
            num = 0
        elif num >= repo.changelog.count():
            raise ValueError
    except ValueError:
        try:
            num = repo.changelog.rev(repo.lookup(val))
        except KeyError:
            raise util.Abort(_('invalid revision identifier %s'), val)
    return num

def revpair(ui, repo, revs):
    '''return pair of nodes, given list of revisions. second item can
    be None, meaning use working dir.'''
    if not revs:
        return repo.dirstate.parents()[0], None
    end = None
    if len(revs) == 1:
        start = revs[0]
        if revrangesep in start:
            start, end = start.split(revrangesep, 1)
            start = revfix(repo, start, 0)
            end = revfix(repo, end, repo.changelog.count() - 1)
        else:
            start = revfix(repo, start, None)
    elif len(revs) == 2:
        if revrangesep in revs[0] or revrangesep in revs[1]:
            raise util.Abort(_('too many revisions specified'))
        start = revfix(repo, revs[0], None)
        end = revfix(repo, revs[1], None)
    else:
        raise util.Abort(_('too many revisions specified'))
    if end is not None: end = repo.lookup(str(end))
    return repo.lookup(str(start)), end

def revrange(ui, repo, revs):
    """Yield revisions as strings from a list of revision specifications."""
    seen = {}
    for spec in revs:
        if revrangesep in spec:
            start, end = spec.split(revrangesep, 1)
            start = revfix(repo, start, 0)
            end = revfix(repo, end, repo.changelog.count() - 1)
            step = start > end and -1 or 1
            for rev in xrange(start, end+step, step):
                if rev in seen:
                    continue
                seen[rev] = 1
                yield str(rev)
        else:
            rev = revfix(repo, spec, None)
            if rev in seen:
                continue
            seen[rev] = 1
            yield str(rev)

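# Illustrative sketch (not part of the original source): in a repository of,
# say, eleven changesets, revrange() expands user-level specs roughly like
#
#   list(revrange(ui, repo, ['2:4', 'tip']))  ->  ['2', '3', '4', '10']
#   list(revrange(ui, repo, ['tip:0']))       ->  ['10', '9', ..., '0']
#
# Each spec may be a rev number, tag, or changeset hash (resolved by revfix),
# or a start:end range; duplicates are suppressed via the `seen` dict.
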
def make_filename(repo, pat, node,
                  total=None, seqno=None, revwidth=None, pathname=None):
    node_expander = {
        'H': lambda: hex(node),
        'R': lambda: str(repo.changelog.rev(node)),
        'h': lambda: short(node),
        }
    expander = {
        '%': lambda: '%',
        'b': lambda: os.path.basename(repo.root),
        }

    try:
        if node:
            expander.update(node_expander)
        if node and revwidth is not None:
            expander['r'] = lambda: str(repo.changelog.rev(node)).zfill(revwidth)
        if total is not None:
            expander['N'] = lambda: str(total)
        if seqno is not None:
            expander['n'] = lambda: str(seqno)
        if total is not None and seqno is not None:
            expander['n'] = lambda: str(seqno).zfill(len(str(total)))
        if pathname is not None:
            expander['s'] = lambda: os.path.basename(pathname)
            expander['d'] = lambda: os.path.dirname(pathname) or '.'
            expander['p'] = lambda: pathname

        newname = []
        patlen = len(pat)
        i = 0
        while i < patlen:
            c = pat[i]
            if c == '%':
                i += 1
                c = pat[i]
                c = expander[c]()
            newname.append(c)
            i += 1
        return ''.join(newname)
    except KeyError, inst:
        raise util.Abort(_("invalid format spec '%%%s' in output file name"),
                         inst.args[0])

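# Illustrative summary (not part of the original source) of the format
# specifiers expanded above, as used by commands like export, cat and archive:
#
#   %%  literal "%"                 %b  basename of the repository root
#   %H  full changeset hash         %h  short changeset hash
#   %R  changeset revision number   %r  zero-padded revision number
#   %N  total number of patches     %n  zero-padded sequence number
#   %s / %d / %p  basename / dirname / full path of the file being printed
#
#   make_filename(repo, '%b-r%R.patch', node)  ->  e.g. 'myrepo-r42.patch'
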
def make_file(repo, pat, node=None,
              total=None, seqno=None, revwidth=None, mode='wb', pathname=None):
    if not pat or pat == '-':
        return 'w' in mode and sys.stdout or sys.stdin
    if hasattr(pat, 'write') and 'w' in mode:
        return pat
    if hasattr(pat, 'read') and 'r' in mode:
        return pat
    return open(make_filename(repo, pat, node, total, seqno, revwidth,
                              pathname),
                mode)

def write_bundle(cg, filename=None, compress=True):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """
    class nocompress(object):
        def compress(self, x):
            return x
        def flush(self):
            return ""

    fh = None
    cleanup = None
    try:
        if filename:
            if os.path.exists(filename):
                raise util.Abort(_("file '%s' already exists"), filename)
            fh = open(filename, "wb")
        else:
            fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
            fh = os.fdopen(fd, "wb")
        cleanup = filename

        if compress:
            fh.write("HG10")
            z = bz2.BZ2Compressor(9)
        else:
            fh.write("HG10UN")
            z = nocompress()
        # parse the changegroup data, otherwise we will block
        # in case of sshrepo because we don't know the end of the stream

        # an empty chunkiter is the end of the changegroup
        empty = False
        while not empty:
            empty = True
            for chunk in changegroup.chunkiter(cg):
                empty = False
                fh.write(z.compress(changegroup.genchunk(chunk)))
            fh.write(z.compress(changegroup.closechunk()))
        fh.write(z.flush())
        cleanup = None
        return filename
    finally:
        if fh is not None:
            fh.close()
        if cleanup is not None:
            os.unlink(cleanup)

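# Illustrative note (not part of the original source): the file written above
# starts with a short magic header -- "HG10" (4 bytes) when the changegroup
# chunks are bz2-compressed, "HG10UN" (6 bytes) when stored uncompressed --
# followed by the chunk stream emitted by changegroup.genchunk().  A sketch of
# a call, assuming `other` is an hg.repository handle:
#
#   cg = repo.changegroup(repo.findoutgoing(other), 'bundle')
#   path = write_bundle(cg, 'outgoing.hg', compress=False)   # HG10UN bundle
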
def dodiff(fp, ui, repo, node1, node2, files=None, match=util.always,
           changes=None, text=False, opts={}):
    if not node1:
        node1 = repo.dirstate.parents()[0]
    # reading the data for node1 early allows it to play nicely
    # with repo.changes and the revlog cache.
    change = repo.changelog.read(node1)
    mmap = repo.manifest.read(change[0])
    date1 = util.datestr(change[2])

    if not changes:
        changes = repo.changes(node1, node2, files, match=match)
    modified, added, removed, deleted, unknown = changes
    if files:
        modified, added, removed = map(lambda x: filterfiles(files, x),
                                       (modified, added, removed))

    if not modified and not added and not removed:
        return

    if node2:
        change = repo.changelog.read(node2)
        mmap2 = repo.manifest.read(change[0])
        _date2 = util.datestr(change[2])
        def date2(f):
            return _date2
        def read(f):
            return repo.file(f).read(mmap2[f])
    else:
        tz = util.makedate()[1]
        _date2 = util.datestr()
        def date2(f):
            try:
                return util.datestr((os.lstat(repo.wjoin(f)).st_mtime, tz))
            except OSError, err:
                if err.errno != errno.ENOENT: raise
                return _date2
        def read(f):
            return repo.wread(f)

    if ui.quiet:
        r = None
    else:
        hexfunc = ui.verbose and hex or short
        r = [hexfunc(node) for node in [node1, node2] if node]

    diffopts = ui.diffopts()
    showfunc = opts.get('show_function') or diffopts['showfunc']
    ignorews = opts.get('ignore_all_space') or diffopts['ignorews']
    ignorewsamount = opts.get('ignore_space_change') or \
                     diffopts['ignorewsamount']
    ignoreblanklines = opts.get('ignore_blank_lines') or \
                       diffopts['ignoreblanklines']
    for f in modified:
        to = None
        if f in mmap:
            to = repo.file(f).read(mmap[f])
        tn = read(f)
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))
    for f in added:
        to = None
        tn = read(f)
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))
    for f in removed:
        to = repo.file(f).read(mmap[f])
        tn = None
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))

def trimuser(ui, name, rev, revcache):
    """trim the name of the user who committed a change"""
    user = revcache.get(rev)
    if user is None:
        user = revcache[rev] = ui.shortuser(name)
    return user

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

    def show(self, rev=0, changenode=None, brinfo=None):
        '''show a single changeset or file revision'''
        log = self.repo.changelog
        if changenode is None:
            changenode = log.node(rev)
        elif not rev:
            rev = log.rev(changenode)

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % (rev, short(changenode)))
            return

        changes = log.read(changenode)
        date = util.datestr(changes[2])

        parents = [(log.rev(p), self.ui.verbose and hex(p) or short(p))
                   for p in log.parents(changenode)
                   if self.ui.debugflag or p != nullid]
        if (not self.ui.debugflag and len(parents) == 1 and
            parents[0][0] == rev-1):
            parents = []

        if self.ui.verbose:
            self.ui.write(_("changeset: %d:%s\n") % (rev, hex(changenode)))
        else:
            self.ui.write(_("changeset: %d:%s\n") % (rev, short(changenode)))

        for tag in self.repo.nodetags(changenode):
            self.ui.status(_("tag: %s\n") % tag)
        for parent in parents:
            self.ui.write(_("parent: %d:%s\n") % parent)

        if brinfo and changenode in brinfo:
            br = brinfo[changenode]
            self.ui.write(_("branch: %s\n") % " ".join(br))

        self.ui.debug(_("manifest: %d:%s\n") %
                      (self.repo.manifest.rev(changes[0]), hex(changes[0])))
        self.ui.status(_("user: %s\n") % changes[1])
        self.ui.status(_("date: %s\n") % date)

        if self.ui.debugflag:
            files = self.repo.changes(log.parents(changenode)[0], changenode)
            for key, value in zip([_("files:"), _("files+:"), _("files-:")],
                                  files):
                if value:
                    self.ui.note("%-12s %s\n" % (key, " ".join(value)))
        else:
            self.ui.note(_("files: %s\n") % " ".join(changes[3]))

        description = changes[4].strip()
        if description:
            if self.ui.verbose:
                self.ui.status(_("description:\n"))
                self.ui.status(description)
                self.ui.status("\n\n")
            else:
                self.ui.status(_("summary: %s\n") %
                               description.splitlines()[0])
        self.ui.status("\n")

def show_changeset(ui, repo, opts):
    '''show one changeset. uses template or regular display. caller
    can pass in 'style' and 'template' options in opts.'''

    tmpl = opts.get('template')
    if tmpl:
        tmpl = templater.parsestring(tmpl, quoted=False)
    else:
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl: tmpl = templater.parsestring(tmpl)
    mapfile = opts.get('style') or ui.config('ui', 'style')
    if tmpl or mapfile:
        if mapfile:
            if not os.path.isfile(mapfile):
                mapname = templater.templatepath('map-cmdline.' + mapfile)
                if not mapname: mapname = templater.templatepath(mapfile)
                if mapname: mapfile = mapname
        try:
            t = templater.changeset_templater(ui, repo, mapfile)
        except SyntaxError, inst:
            raise util.Abort(inst.args[0])
        if tmpl: t.use_template(tmpl)
        return t
    return changeset_printer(ui, repo)

def show_version(ui):
    """output version and copyright information"""
    ui.write(_("Mercurial Distributed SCM (version %s)\n")
             % version.get_version())
    ui.status(_(
        "\nCopyright (C) 2005 Matt Mackall <mpm@selenic.com>\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    ))

def help_(ui, name=None, with_version=False):
    """show help for a command, extension, or list of commands

    With no arguments, print a list of commands and short help.

    Given a command name, print help for that command.

    Given an extension name, print help for that extension, and the
    commands it provides."""
    option_lists = []

    def helpcmd(name):
        if with_version:
            show_version(ui)
            ui.write('\n')
        aliases, i = findcmd(name)
        # synopsis
        ui.write("%s\n\n" % i[2])

        # description
        doc = i[0].__doc__
        if not doc:
            doc = _("(No help text available)")
        if ui.quiet:
            doc = doc.splitlines(0)[0]
        ui.write("%s\n" % doc.rstrip())

        if not ui.quiet:
            # aliases
            if len(aliases) > 1:
                ui.write(_("\naliases: %s\n") % ', '.join(aliases[1:]))

            # options
            if i[1]:
                option_lists.append(("options", i[1]))

    def helplist(select=None):
        h = {}
        cmds = {}
        for c, e in table.items():
            f = c.split("|", 1)[0]
            if select and not select(f):
                continue
            if name == "shortlist" and not f.startswith("^"):
                continue
            f = f.lstrip("^")
            if not ui.debugflag and f.startswith("debug"):
                continue
            doc = e[0].__doc__
            if not doc:
                doc = _("(No help text available)")
            h[f] = doc.splitlines(0)[0].rstrip()
            cmds[f] = c.lstrip("^")

        fns = h.keys()
        fns.sort()
        m = max(map(len, fns))
        for f in fns:
            if ui.verbose:
                commands = cmds[f].replace("|",", ")
                ui.write(" %s:\n %s\n"%(commands, h[f]))
            else:
                ui.write(' %-*s %s\n' % (m, f, h[f]))

    def helpext(name):
        try:
            mod = findext(name)
        except KeyError:
            raise UnknownCommand(name)

        doc = (mod.__doc__ or _('No help text available')).splitlines(0)
        ui.write(_('%s extension - %s\n') % (name.split('.')[-1], doc[0]))
        for d in doc[1:]:
            ui.write(d, '\n')

        ui.status('\n')
        if ui.verbose:
            ui.status(_('list of commands:\n\n'))
        else:
            ui.status(_('list of commands (use "hg help -v %s" '
                        'to show aliases and global options):\n\n') % name)

        modcmds = dict.fromkeys([c.split('|', 1)[0] for c in mod.cmdtable])
        helplist(modcmds.has_key)

    if name and name != 'shortlist':
        try:
            helpcmd(name)
        except UnknownCommand:
            helpext(name)

    else:
        # program name
        if ui.verbose or with_version:
            show_version(ui)
        else:
            ui.status(_("Mercurial Distributed SCM\n"))
        ui.status('\n')

        # list of commands
        if name == "shortlist":
            ui.status(_('basic commands (use "hg help" '
                        'for the full list or option "-v" for details):\n\n'))
        elif ui.verbose:
            ui.status(_('list of commands:\n\n'))
        else:
            ui.status(_('list of commands (use "hg help -v" '
                        'to show aliases and global options):\n\n'))

        helplist()

    # global options
    if ui.verbose:
        option_lists.append(("global options", globalopts))

    # list all option lists
    opt_output = []
    for title, options in option_lists:
        opt_output.append(("\n%s:\n" % title, None))
        for shortopt, longopt, default, desc in options:
            opt_output.append(("%2s%s" % (shortopt and "-%s" % shortopt,
                                          longopt and " --%s" % longopt),
                               "%s%s" % (desc,
                                         default
                                         and _(" (default: %s)") % default
                                         or "")))

    if opt_output:
        opts_len = max([len(line[0]) for line in opt_output if line[1]])
        for first, second in opt_output:
            if second:
                ui.write(" %-*s %s\n" % (opts_len, first, second))
            else:
                ui.write("%s\n" % first)

# Commands start here, listed alphabetically

def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the repository.

    The files will be added to the repository at the next commit.

    If no names are given, add all files in the repository.
    """

    names = []
    for src, abs, rel, exact in walk(repo, pats, opts):
        if exact:
            if ui.verbose:
                ui.status(_('adding %s\n') % rel)
            names.append(abs)
        elif repo.dirstate.state(abs) == '?':
            ui.status(_('adding %s\n') % rel)
            names.append(abs)
    if not opts.get('dry_run'):
        repo.add(names)

def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files (DEPRECATED)

    (DEPRECATED)
    Add all new files and remove all missing files from the repository.

    New files are ignored if they match any of the patterns in .hgignore. As
    with add, these changes take effect at the next commit.

    This command is now deprecated and will be removed in a future
    release. Please use add and remove --after instead.
    """
    ui.warn(_('(the addremove command is deprecated; use add and remove '
              '--after instead)\n'))
    return addremove_lock(ui, repo, pats, opts)

def addremove_lock(ui, repo, pats, opts, wlock=None):
    add, remove = [], []
    for src, abs, rel, exact in walk(repo, pats, opts):
        if src == 'f' and repo.dirstate.state(abs) == '?':
            add.append(abs)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % ((pats and rel) or abs))
        if repo.dirstate.state(abs) != 'r' and not os.path.exists(rel):
            remove.append(abs)
            if ui.verbose or not exact:
                ui.status(_('removing %s\n') % ((pats and rel) or abs))
    if not opts.get('dry_run'):
        repo.add(add, wlock=wlock)
        repo.remove(remove, wlock=wlock)

def annotate(ui, repo, *pats, **opts):
    """show changeset information per file line

    List changes in files, showing the revision id responsible for each line.

    This command is useful to discover who made a change or when a change took
    place.

    Without the -a option, annotate will avoid processing files it
    detects as binary. With -a, annotate will generate an annotation
    anyway, probably with undesirable results.
    """
    def getnode(rev):
        return short(repo.changelog.node(rev))

    ucache = {}
    def getname(rev):
        try:
            return ucache[rev]
        except:
            u = trimuser(ui, repo.changectx(rev).user(), rev, ucache)
            ucache[rev] = u
            return u

    dcache = {}
    def getdate(rev):
        datestr = dcache.get(rev)
        if datestr is None:
            datestr = dcache[rev] = util.datestr(repo.changectx(rev).date())
        return datestr

    if not pats:
        raise util.Abort(_('at least one file name or pattern required'))

    opmap = [['user', getname], ['number', str], ['changeset', getnode],
             ['date', getdate]]
    if not opts['user'] and not opts['changeset'] and not opts['date']:
        opts['number'] = 1

    ctx = repo.changectx(opts['rev'] or repo.dirstate.parents()[0])

    for src, abs, rel, exact in walk(repo, pats, opts, node=ctx.node()):
        fctx = ctx.filectx(abs)
        if not opts['text'] and util.binary(fctx.data()):
            ui.write(_("%s: binary file\n") % ((pats and rel) or abs))
            continue

        lines = fctx.annotate()
        pieces = []

        for o, f in opmap:
            if opts[o]:
                l = [f(n) for n, dummy in lines]
                if l:
                    m = max(map(len, l))
                    pieces.append(["%*s" % (m, x) for x in l])

        if pieces:
            for p, l in zip(zip(*pieces), lines):
                ui.write("%s: %s" % (" ".join(p), l[1]))

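# Illustrative usage (not part of the original source): the annotation columns
# selected through opmap above correspond to command-line flags, e.g.
#
#   hg annotate -u -n -d somefile.py
#
# prefixes every line of somefile.py with the committing user, the revision
# number, and the date of the change that introduced it.
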
def archive(ui, repo, dest, **opts):
    '''create unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use "-r" to specify a different revision.

    To specify the type of archive to create, use "-t". Valid
    types are:

    "files" (default): a directory full of files
    "tar": tar archive, uncompressed
    "tbz2": tar archive, compressed using bzip2
    "tgz": tar archive, compressed using gzip
    "uzip": zip archive, uncompressed
    "zip": zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see "hg help export" for details.

    Each member added to an archive file has a directory prefix
    prepended. Use "-p" to specify a format string for the prefix.
    The default is the basename of the archive, with suffixes removed.
    '''

    if opts['rev']:
        node = repo.lookup(opts['rev'])
    else:
        node, p2 = repo.dirstate.parents()
        if p2 != nullid:
            raise util.Abort(_('uncommitted merge - please provide a '
                               'specific revision'))

    dest = make_filename(repo, dest, node)
    if os.path.realpath(dest) == repo.root:
        raise util.Abort(_('repository root cannot be destination'))
    dummy, matchfn, dummy = matchpats(repo, [], opts)
    kind = opts.get('type') or 'files'
    prefix = opts['prefix']
    if dest == '-':
        if kind == 'files':
            raise util.Abort(_('cannot archive plain files to stdout'))
        dest = sys.stdout
        if not prefix: prefix = os.path.basename(repo.root) + '-%h'
    prefix = make_filename(repo, prefix, node)
    archival.archive(repo, dest, node, kind, not opts['no_decode'],
                     matchfn, prefix)

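# Illustrative usage (not part of the original source):
#
#   hg archive -r 42 -t tgz ../myrepo-%h.tar.gz
#
# exports revision 42 as a gzipped tarball whose name embeds the short hash,
# using the same %-expansion implemented by make_filename() above.
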
def backout(ui, repo, rev, **opts):
    '''reverse effect of earlier changeset

    Commit the backed out changes as a new changeset.  The new
    changeset is a child of the backed out changeset.

    If you back out a changeset other than the tip, a new head is
    created.  This head is the parent of the working directory.  If
    you back out an old changeset, your working directory will appear
    old after the backout.  You should merge the backout changeset
    with another head.

    The --merge option remembers the parent of the working directory
    before starting the backout, then merges the new head with that
    changeset afterwards.  This saves you from doing the merge by
    hand.  The result of this merge is not committed, as for a normal
    merge.'''

    bail_if_changed(repo)
    op1, op2 = repo.dirstate.parents()
    if op2 != nullid:
        raise util.Abort(_('outstanding uncommitted merge'))
    node = repo.lookup(rev)
    p1, p2 = repo.changelog.parents(node)
    if p1 == nullid:
        raise util.Abort(_('cannot back out a change with no parents'))
    if p2 != nullid:
        if not opts['parent']:
            raise util.Abort(_('cannot back out a merge changeset without '
                               '--parent'))
        p = repo.lookup(opts['parent'])
        if p not in (p1, p2):
            raise util.Abort(_('%s is not a parent of %s' %
                               (short(p), short(node))))
        parent = p
    else:
        if opts['parent']:
            raise util.Abort(_('cannot use --parent on non-merge changeset'))
        parent = p1
    repo.update(node, force=True, show_stats=False)
    revert_opts = opts.copy()
    revert_opts['rev'] = hex(parent)
    revert(ui, repo, **revert_opts)
    commit_opts = opts.copy()
    commit_opts['addremove'] = False
    if not commit_opts['message'] and not commit_opts['logfile']:
        commit_opts['message'] = _("Backed out changeset %s") % (hex(node))
        commit_opts['force_editor'] = True
    commit(ui, repo, **commit_opts)
    def nice(node):
        return '%d:%s' % (repo.changelog.rev(node), short(node))
    ui.status(_('changeset %s backs out changeset %s\n') %
              (nice(repo.changelog.tip()), nice(node)))
    if op1 != node:
        if opts['merge']:
            ui.status(_('merging with changeset %s\n') % nice(op1))
            doupdate(ui, repo, hex(op1), **opts)
        else:
            ui.status(_('the backout changeset is a new head - '
                        'do not forget to merge\n'))
            ui.status(_('(use "backout -m" if you want to auto-merge)\n'))

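# Illustrative usage (not part of the original source):
#
#   hg backout --merge 1234
#
# commits a changeset that reverses revision 1234 and, because of --merge,
# immediately merges that backout head with the previous working-directory
# parent; as the docstring notes, the merge itself is left uncommitted.
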
def bundle(ui, repo, fname, dest=None, **opts):
    """create a changegroup file

    Generate a compressed changegroup file collecting all changesets
    not found in the other repository.

    This file can then be transferred using conventional means and
    applied to another repository with the unbundle command. This is
    useful when native push and pull are not available or when
    exporting an entire repository is undesirable. The standard file
    extension is ".hg".

    Unlike import/export, this exactly preserves all changeset
    contents including permissions, rename data, and revision history.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    other = hg.repository(ui, dest)
    o = repo.findoutgoing(other, force=opts['force'])
    cg = repo.changegroup(o, 'bundle')
    write_bundle(cg, fname)

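# Illustrative usage (not part of the original source):
#
#   hg bundle changes.hg ../other-clone
#   hg unbundle changes.hg        # run inside the receiving repository
#
# writes every changeset missing from ../other-clone into changes.hg, which
# can then be transferred by any conventional means and applied with unbundle.
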
def cat(ui, repo, file1, *pats, **opts):
    """output the latest or given revisions of files

    Print the specified files as they were at the given revision.
    If no revision is given then the tip is used.

    Output may be to a file, in which case the name of the file is
    given using a format string.  The formatting rules are the same as
    for the export command, with the following additions:

    %s   basename of file being printed
    %d   dirname of file being printed, or '.' if in repo root
    %p   root-relative path name of file being printed
    """
    ctx = repo.changectx(opts['rev'] or "-1")
    for src, abs, rel, exact in walk(repo, (file1,) + pats, opts, ctx.node()):
        fp = make_file(repo, opts['output'], ctx.node(), pathname=abs)
        fp.write(ctx.filectx(abs).data())

def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    .hg/hgrc file, as the default to be used for future pulls.

    For efficiency, hardlinks are used for cloning whenever the source
    and destination are on the same filesystem. Some filesystems,
    such as AFS, implement hardlinking incorrectly, but do not report
    errors. In these cases, use the --pull option to avoid
    hardlinking.

    See pull for valid source format details.

    It is possible to specify an ssh:// URL as the destination, but no
    .hg/hgrc will be created on the remote side. Look at the help text
    for the pull command for important details about ssh:// URLs.
    """
    ui.setconfig_remoteopts(**opts)
    hg.clone(ui, ui.expandpath(source), dest,
             pull=opts['pull'],
             stream=opts['uncompressed'],
             rev=opts['rev'],
             update=not opts['noupdate'])

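# Illustrative sketch (not part of commands.py): choosing between a
# hardlinked clone and a --pull clone, as the clone docstring advises for
# filesystems such as AFS. The paths and the on_afs flag are hypothetical.
import subprocess

def safe_clone(source, dest, on_afs=False):
    cmd = ['hg', 'clone', source, dest]
    if on_afs:
        # copy changesets instead of hardlinking, avoiding silent corruption
        cmd.insert(2, '--pull')
    subprocess.check_call(cmd)
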
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository.

    If a list of files is omitted, all changes reported by "hg status"
    will be committed.

    If no commit message is specified, the editor configured in your hgrc
    or in the EDITOR environment variable is started to enter a message.
    """
    message = opts['message']
    logfile = opts['logfile']

    if message and logfile:
        raise util.Abort(_('options --message and --logfile are mutually '
                           'exclusive'))
    if not message and logfile:
        try:
            if logfile == '-':
                message = sys.stdin.read()
            else:
                message = open(logfile).read()
        except IOError, inst:
            raise util.Abort(_("can't read commit message '%s': %s") %
                             (logfile, inst.strerror))

    if opts['addremove']:
        addremove_lock(ui, repo, pats, opts)
    fns, match, anypats = matchpats(repo, pats, opts)
    if pats:
        modified, added, removed, deleted, unknown = (
            repo.changes(files=fns, match=match))
        files = modified + added + removed
    else:
        files = []
    try:
        repo.commit(files, message, opts['user'], opts['date'], match,
                    force_editor=opts.get('force_editor'))
    except ValueError, inst:
        raise util.Abort(str(inst))

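# Illustrative sketch (not part of commands.py): the commit-message rules the
# function above enforces -- --message and --logfile are mutually exclusive,
# and a logfile of '-' means standard input. The function name is hypothetical.
import sys

def resolve_commit_message(message, logfile):
    if message and logfile:
        raise ValueError('options --message and --logfile are mutually exclusive')
    if not message and logfile:
        if logfile == '-':
            message = sys.stdin.read()
        else:
            message = open(logfile).read()
    return message  # None means: launch the configured editor
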
1023 def docopy(ui, repo, pats, opts, wlock):
1023 def docopy(ui, repo, pats, opts, wlock):
1024 # called with the repo lock held
1024 # called with the repo lock held
1025 cwd = repo.getcwd()
1025 cwd = repo.getcwd()
1026 errors = 0
1026 errors = 0
1027 copied = []
1027 copied = []
1028 targets = {}
1028 targets = {}
1029
1029
1030 def okaytocopy(abs, rel, exact):
1030 def okaytocopy(abs, rel, exact):
1031 reasons = {'?': _('is not managed'),
1031 reasons = {'?': _('is not managed'),
1032 'a': _('has been marked for add'),
1032 'a': _('has been marked for add'),
1033 'r': _('has been marked for remove')}
1033 'r': _('has been marked for remove')}
1034 state = repo.dirstate.state(abs)
1034 state = repo.dirstate.state(abs)
1035 reason = reasons.get(state)
1035 reason = reasons.get(state)
1036 if reason:
1036 if reason:
1037 if state == 'a':
1037 if state == 'a':
1038 origsrc = repo.dirstate.copied(abs)
1038 origsrc = repo.dirstate.copied(abs)
1039 if origsrc is not None:
1039 if origsrc is not None:
1040 return origsrc
1040 return origsrc
1041 if exact:
1041 if exact:
1042 ui.warn(_('%s: not copying - file %s\n') % (rel, reason))
1042 ui.warn(_('%s: not copying - file %s\n') % (rel, reason))
1043 else:
1043 else:
1044 return abs
1044 return abs
1045
1045
1046 def copy(origsrc, abssrc, relsrc, target, exact):
1046 def copy(origsrc, abssrc, relsrc, target, exact):
1047 abstarget = util.canonpath(repo.root, cwd, target)
1047 abstarget = util.canonpath(repo.root, cwd, target)
1048 reltarget = util.pathto(cwd, abstarget)
1048 reltarget = util.pathto(cwd, abstarget)
1049 prevsrc = targets.get(abstarget)
1049 prevsrc = targets.get(abstarget)
1050 if prevsrc is not None:
1050 if prevsrc is not None:
1051 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1051 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
1052 (reltarget, abssrc, prevsrc))
1052 (reltarget, abssrc, prevsrc))
1053 return
1053 return
1054 if (not opts['after'] and os.path.exists(reltarget) or
1054 if (not opts['after'] and os.path.exists(reltarget) or
1055 opts['after'] and repo.dirstate.state(abstarget) not in '?r'):
1055 opts['after'] and repo.dirstate.state(abstarget) not in '?r'):
1056 if not opts['force']:
1056 if not opts['force']:
1057 ui.warn(_('%s: not overwriting - file exists\n') %
1057 ui.warn(_('%s: not overwriting - file exists\n') %
1058 reltarget)
1058 reltarget)
1059 return
1059 return
1060 if not opts['after'] and not opts.get('dry_run'):
1060 if not opts['after'] and not opts.get('dry_run'):
1061 os.unlink(reltarget)
1061 os.unlink(reltarget)
1062 if opts['after']:
1062 if opts['after']:
1063 if not os.path.exists(reltarget):
1063 if not os.path.exists(reltarget):
1064 return
1064 return
1065 else:
1065 else:
1066 targetdir = os.path.dirname(reltarget) or '.'
1066 targetdir = os.path.dirname(reltarget) or '.'
1067 if not os.path.isdir(targetdir) and not opts.get('dry_run'):
1067 if not os.path.isdir(targetdir) and not opts.get('dry_run'):
1068 os.makedirs(targetdir)
1068 os.makedirs(targetdir)
1069 try:
1069 try:
1070 restore = repo.dirstate.state(abstarget) == 'r'
1070 restore = repo.dirstate.state(abstarget) == 'r'
1071 if restore and not opts.get('dry_run'):
1071 if restore and not opts.get('dry_run'):
1072 repo.undelete([abstarget], wlock)
1072 repo.undelete([abstarget], wlock)
1073 try:
1073 try:
1074 if not opts.get('dry_run'):
1074 if not opts.get('dry_run'):
1075 shutil.copyfile(relsrc, reltarget)
1075 shutil.copyfile(relsrc, reltarget)
1076 shutil.copymode(relsrc, reltarget)
1076 shutil.copymode(relsrc, reltarget)
1077 restore = False
1077 restore = False
1078 finally:
1078 finally:
1079 if restore:
1079 if restore:
1080 repo.remove([abstarget], wlock)
1080 repo.remove([abstarget], wlock)
1081 except shutil.Error, inst:
1081 except shutil.Error, inst:
1082 raise util.Abort(str(inst))
1082 raise util.Abort(str(inst))
1083 except IOError, inst:
1083 except IOError, inst:
1084 if inst.errno == errno.ENOENT:
1084 if inst.errno == errno.ENOENT:
1085 ui.warn(_('%s: deleted in working copy\n') % relsrc)
1085 ui.warn(_('%s: deleted in working copy\n') % relsrc)
1086 else:
1086 else:
1087 ui.warn(_('%s: cannot copy - %s\n') %
1087 ui.warn(_('%s: cannot copy - %s\n') %
1088 (relsrc, inst.strerror))
1088 (relsrc, inst.strerror))
1089 errors += 1
1089 errors += 1
1090 return
1090 return
1091 if ui.verbose or not exact:
1091 if ui.verbose or not exact:
1092 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1092 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
1093 targets[abstarget] = abssrc
1093 targets[abstarget] = abssrc
1094 if abstarget != origsrc and not opts.get('dry_run'):
1094 if abstarget != origsrc and not opts.get('dry_run'):
1095 repo.copy(origsrc, abstarget, wlock)
1095 repo.copy(origsrc, abstarget, wlock)
1096 copied.append((abssrc, relsrc, exact))
1096 copied.append((abssrc, relsrc, exact))
1097
1097
1098 def targetpathfn(pat, dest, srcs):
1098 def targetpathfn(pat, dest, srcs):
1099 if os.path.isdir(pat):
1099 if os.path.isdir(pat):
1100 abspfx = util.canonpath(repo.root, cwd, pat)
1100 abspfx = util.canonpath(repo.root, cwd, pat)
1101 if destdirexists:
1101 if destdirexists:
1102 striplen = len(os.path.split(abspfx)[0])
1102 striplen = len(os.path.split(abspfx)[0])
1103 else:
1103 else:
1104 striplen = len(abspfx)
1104 striplen = len(abspfx)
1105 if striplen:
1105 if striplen:
1106 striplen += len(os.sep)
1106 striplen += len(os.sep)
1107 res = lambda p: os.path.join(dest, p[striplen:])
1107 res = lambda p: os.path.join(dest, p[striplen:])
1108 elif destdirexists:
1108 elif destdirexists:
1109 res = lambda p: os.path.join(dest, os.path.basename(p))
1109 res = lambda p: os.path.join(dest, os.path.basename(p))
1110 else:
1110 else:
1111 res = lambda p: dest
1111 res = lambda p: dest
1112 return res
1112 return res
1113
1113
1114 def targetpathafterfn(pat, dest, srcs):
1114 def targetpathafterfn(pat, dest, srcs):
1115 if util.patkind(pat, None)[0]:
1115 if util.patkind(pat, None)[0]:
1116 # a mercurial pattern
1116 # a mercurial pattern
1117 res = lambda p: os.path.join(dest, os.path.basename(p))
1117 res = lambda p: os.path.join(dest, os.path.basename(p))
1118 else:
1118 else:
1119 abspfx = util.canonpath(repo.root, cwd, pat)
1119 abspfx = util.canonpath(repo.root, cwd, pat)
1120 if len(abspfx) < len(srcs[0][0]):
1120 if len(abspfx) < len(srcs[0][0]):
1121 # A directory. Either the target path contains the last
1121 # A directory. Either the target path contains the last
1122 # component of the source path or it does not.
1122 # component of the source path or it does not.
1123 def evalpath(striplen):
1123 def evalpath(striplen):
1124 score = 0
1124 score = 0
1125 for s in srcs:
1125 for s in srcs:
1126 t = os.path.join(dest, s[0][striplen:])
1126 t = os.path.join(dest, s[0][striplen:])
1127 if os.path.exists(t):
1127 if os.path.exists(t):
1128 score += 1
1128 score += 1
1129 return score
1129 return score
1130
1130
1131 striplen = len(abspfx)
1131 striplen = len(abspfx)
1132 if striplen:
1132 if striplen:
1133 striplen += len(os.sep)
1133 striplen += len(os.sep)
1134 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1134 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
1135 score = evalpath(striplen)
1135 score = evalpath(striplen)
1136 striplen1 = len(os.path.split(abspfx)[0])
1136 striplen1 = len(os.path.split(abspfx)[0])
1137 if striplen1:
1137 if striplen1:
1138 striplen1 += len(os.sep)
1138 striplen1 += len(os.sep)
1139 if evalpath(striplen1) > score:
1139 if evalpath(striplen1) > score:
1140 striplen = striplen1
1140 striplen = striplen1
1141 res = lambda p: os.path.join(dest, p[striplen:])
1141 res = lambda p: os.path.join(dest, p[striplen:])
1142 else:
1142 else:
1143 # a file
1143 # a file
1144 if destdirexists:
1144 if destdirexists:
1145 res = lambda p: os.path.join(dest, os.path.basename(p))
1145 res = lambda p: os.path.join(dest, os.path.basename(p))
1146 else:
1146 else:
1147 res = lambda p: dest
1147 res = lambda p: dest
1148 return res
1148 return res
1149
1149
1150
1150
1151 pats = list(pats)
1151 pats = list(pats)
1152 if not pats:
1152 if not pats:
1153 raise util.Abort(_('no source or destination specified'))
1153 raise util.Abort(_('no source or destination specified'))
1154 if len(pats) == 1:
1154 if len(pats) == 1:
1155 raise util.Abort(_('no destination specified'))
1155 raise util.Abort(_('no destination specified'))
1156 dest = pats.pop()
1156 dest = pats.pop()
1157 destdirexists = os.path.isdir(dest)
1157 destdirexists = os.path.isdir(dest)
1158 if (len(pats) > 1 or util.patkind(pats[0], None)[0]) and not destdirexists:
1158 if (len(pats) > 1 or util.patkind(pats[0], None)[0]) and not destdirexists:
1159 raise util.Abort(_('with multiple sources, destination must be an '
1159 raise util.Abort(_('with multiple sources, destination must be an '
1160 'existing directory'))
1160 'existing directory'))
1161 if opts['after']:
1161 if opts['after']:
1162 tfn = targetpathafterfn
1162 tfn = targetpathafterfn
1163 else:
1163 else:
1164 tfn = targetpathfn
1164 tfn = targetpathfn
1165 copylist = []
1165 copylist = []
1166 for pat in pats:
1166 for pat in pats:
1167 srcs = []
1167 srcs = []
1168 for tag, abssrc, relsrc, exact in walk(repo, [pat], opts):
1168 for tag, abssrc, relsrc, exact in walk(repo, [pat], opts):
1169 origsrc = okaytocopy(abssrc, relsrc, exact)
1169 origsrc = okaytocopy(abssrc, relsrc, exact)
1170 if origsrc:
1170 if origsrc:
1171 srcs.append((origsrc, abssrc, relsrc, exact))
1171 srcs.append((origsrc, abssrc, relsrc, exact))
1172 if not srcs:
1172 if not srcs:
1173 continue
1173 continue
1174 copylist.append((tfn(pat, dest, srcs), srcs))
1174 copylist.append((tfn(pat, dest, srcs), srcs))
1175 if not copylist:
1175 if not copylist:
1176 raise util.Abort(_('no files to copy'))
1176 raise util.Abort(_('no files to copy'))
1177
1177
1178 for targetpath, srcs in copylist:
1178 for targetpath, srcs in copylist:
1179 for origsrc, abssrc, relsrc, exact in srcs:
1179 for origsrc, abssrc, relsrc, exact in srcs:
1180 copy(origsrc, abssrc, relsrc, targetpath(abssrc), exact)
1180 copy(origsrc, abssrc, relsrc, targetpath(abssrc), exact)
1181
1181
1182 if errors:
1182 if errors:
1183 ui.warn(_('(consider using --after)\n'))
1183 ui.warn(_('(consider using --after)\n'))
1184 return errors, copied
1184 return errors, copied
1185
1185
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    there can only be one source.

    By default, this command copies the contents of files as they
    stand in the working directory. If invoked with --after, the
    operation is recorded, but no copying is performed.

    This command takes effect in the next commit.

    NOTE: This command should be treated as experimental. While it
    should properly record copied files, this information is not yet
    fully used by merge, nor fully reported by log.
    """
    wlock = repo.wlock(0)
    errs, copied = docopy(ui, repo, pats, opts, wlock)
    return errs

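# Illustrative sketch (not part of commands.py): recording a copy that has
# already happened on disk, per the --after behaviour described above. The
# repository path and file names are hypothetical.
import subprocess

def record_existing_copy(repo_path, src, dst):
    # the file was copied outside Mercurial; only record the relationship
    subprocess.check_call(['hg', 'copy', '--after', src, dst], cwd=repo_path)
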
1207 def debugancestor(ui, index, rev1, rev2):
1207 def debugancestor(ui, index, rev1, rev2):
1208 """find the ancestor revision of two revisions in a given index"""
1208 """find the ancestor revision of two revisions in a given index"""
1209 r = revlog.revlog(util.opener(os.getcwd(), audit=False), index, "", 0)
1209 r = revlog.revlog(util.opener(os.getcwd(), audit=False), index, "", 0)
1210 a = r.ancestor(r.lookup(rev1), r.lookup(rev2))
1210 a = r.ancestor(r.lookup(rev1), r.lookup(rev2))
1211 ui.write("%d:%s\n" % (r.rev(a), hex(a)))
1211 ui.write("%d:%s\n" % (r.rev(a), hex(a)))
1212
1212
1213 def debugcomplete(ui, cmd='', **opts):
1213 def debugcomplete(ui, cmd='', **opts):
1214 """returns the completion list associated with the given command"""
1214 """returns the completion list associated with the given command"""
1215
1215
1216 if opts['options']:
1216 if opts['options']:
1217 options = []
1217 options = []
1218 otables = [globalopts]
1218 otables = [globalopts]
1219 if cmd:
1219 if cmd:
1220 aliases, entry = findcmd(cmd)
1220 aliases, entry = findcmd(cmd)
1221 otables.append(entry[1])
1221 otables.append(entry[1])
1222 for t in otables:
1222 for t in otables:
1223 for o in t:
1223 for o in t:
1224 if o[0]:
1224 if o[0]:
1225 options.append('-%s' % o[0])
1225 options.append('-%s' % o[0])
1226 options.append('--%s' % o[1])
1226 options.append('--%s' % o[1])
1227 ui.write("%s\n" % "\n".join(options))
1227 ui.write("%s\n" % "\n".join(options))
1228 return
1228 return
1229
1229
1230 clist = findpossible(cmd).keys()
1230 clist = findpossible(cmd).keys()
1231 clist.sort()
1231 clist.sort()
1232 ui.write("%s\n" % "\n".join(clist))
1232 ui.write("%s\n" % "\n".join(clist))
1233
1233
1234 def debugrebuildstate(ui, repo, rev=None):
1234 def debugrebuildstate(ui, repo, rev=None):
1235 """rebuild the dirstate as it would look like for the given revision"""
1235 """rebuild the dirstate as it would look like for the given revision"""
1236 if not rev:
1236 if not rev:
1237 rev = repo.changelog.tip()
1237 rev = repo.changelog.tip()
1238 else:
1238 else:
1239 rev = repo.lookup(rev)
1239 rev = repo.lookup(rev)
1240 change = repo.changelog.read(rev)
1240 change = repo.changelog.read(rev)
1241 n = change[0]
1241 n = change[0]
1242 files = repo.manifest.readflags(n)
1242 files = repo.manifest.readflags(n)
1243 wlock = repo.wlock()
1243 wlock = repo.wlock()
1244 repo.dirstate.rebuild(rev, files.iteritems())
1244 repo.dirstate.rebuild(rev, files.iteritems())
1245
1245
1246 def debugcheckstate(ui, repo):
1246 def debugcheckstate(ui, repo):
1247 """validate the correctness of the current dirstate"""
1247 """validate the correctness of the current dirstate"""
1248 parent1, parent2 = repo.dirstate.parents()
1248 parent1, parent2 = repo.dirstate.parents()
1249 repo.dirstate.read()
1249 repo.dirstate.read()
1250 dc = repo.dirstate.map
1250 dc = repo.dirstate.map
1251 keys = dc.keys()
1251 keys = dc.keys()
1252 keys.sort()
1252 keys.sort()
1253 m1n = repo.changelog.read(parent1)[0]
1253 m1n = repo.changelog.read(parent1)[0]
1254 m2n = repo.changelog.read(parent2)[0]
1254 m2n = repo.changelog.read(parent2)[0]
1255 m1 = repo.manifest.read(m1n)
1255 m1 = repo.manifest.read(m1n)
1256 m2 = repo.manifest.read(m2n)
1256 m2 = repo.manifest.read(m2n)
1257 errors = 0
1257 errors = 0
1258 for f in dc:
1258 for f in dc:
1259 state = repo.dirstate.state(f)
1259 state = repo.dirstate.state(f)
1260 if state in "nr" and f not in m1:
1260 if state in "nr" and f not in m1:
1261 ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
1261 ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
1262 errors += 1
1262 errors += 1
1263 if state in "a" and f in m1:
1263 if state in "a" and f in m1:
1264 ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
1264 ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
1265 errors += 1
1265 errors += 1
1266 if state in "m" and f not in m1 and f not in m2:
1266 if state in "m" and f not in m1 and f not in m2:
1267 ui.warn(_("%s in state %s, but not in either manifest\n") %
1267 ui.warn(_("%s in state %s, but not in either manifest\n") %
1268 (f, state))
1268 (f, state))
1269 errors += 1
1269 errors += 1
1270 for f in m1:
1270 for f in m1:
1271 state = repo.dirstate.state(f)
1271 state = repo.dirstate.state(f)
1272 if state not in "nrm":
1272 if state not in "nrm":
1273 ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
1273 ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
1274 errors += 1
1274 errors += 1
1275 if errors:
1275 if errors:
1276 error = _(".hg/dirstate inconsistent with current parent's manifest")
1276 error = _(".hg/dirstate inconsistent with current parent's manifest")
1277 raise util.Abort(error)
1277 raise util.Abort(error)
1278
1278
1279 def debugconfig(ui, repo, *values):
1279 def debugconfig(ui, repo, *values):
1280 """show combined config settings from all hgrc files
1280 """show combined config settings from all hgrc files
1281
1281
1282 With no args, print names and values of all config items.
1282 With no args, print names and values of all config items.
1283
1283
1284 With one arg of the form section.name, print just the value of
1284 With one arg of the form section.name, print just the value of
1285 that config item.
1285 that config item.
1286
1286
1287 With multiple args, print names and values of all config items
1287 With multiple args, print names and values of all config items
1288 with matching section names."""
1288 with matching section names."""
1289
1289
1290 if values:
1290 if values:
1291 if len([v for v in values if '.' in v]) > 1:
1291 if len([v for v in values if '.' in v]) > 1:
1292 raise util.Abort(_('only one config item permitted'))
1292 raise util.Abort(_('only one config item permitted'))
1293 for section, name, value in ui.walkconfig():
1293 for section, name, value in ui.walkconfig():
1294 sectname = section + '.' + name
1294 sectname = section + '.' + name
1295 if values:
1295 if values:
1296 for v in values:
1296 for v in values:
1297 if v == section:
1297 if v == section:
1298 ui.write('%s=%s\n' % (sectname, value))
1298 ui.write('%s=%s\n' % (sectname, value))
1299 elif v == sectname:
1299 elif v == sectname:
1300 ui.write(value, '\n')
1300 ui.write(value, '\n')
1301 else:
1301 else:
1302 ui.write('%s=%s\n' % (sectname, value))
1302 ui.write('%s=%s\n' % (sectname, value))
1303
1303
1304 def debugsetparents(ui, repo, rev1, rev2=None):
1304 def debugsetparents(ui, repo, rev1, rev2=None):
1305 """manually set the parents of the current working directory
1305 """manually set the parents of the current working directory
1306
1306
1307 This is useful for writing repository conversion tools, but should
1307 This is useful for writing repository conversion tools, but should
1308 be used with care.
1308 be used with care.
1309 """
1309 """
1310
1310
1311 if not rev2:
1311 if not rev2:
1312 rev2 = hex(nullid)
1312 rev2 = hex(nullid)
1313
1313
1314 repo.dirstate.setparents(repo.lookup(rev1), repo.lookup(rev2))
1314 repo.dirstate.setparents(repo.lookup(rev1), repo.lookup(rev2))
1315
1315
1316 def debugstate(ui, repo):
1316 def debugstate(ui, repo):
1317 """show the contents of the current dirstate"""
1317 """show the contents of the current dirstate"""
1318 repo.dirstate.read()
1318 repo.dirstate.read()
1319 dc = repo.dirstate.map
1319 dc = repo.dirstate.map
1320 keys = dc.keys()
1320 keys = dc.keys()
1321 keys.sort()
1321 keys.sort()
1322 for file_ in keys:
1322 for file_ in keys:
1323 ui.write("%c %3o %10d %s %s\n"
1323 ui.write("%c %3o %10d %s %s\n"
1324 % (dc[file_][0], dc[file_][1] & 0777, dc[file_][2],
1324 % (dc[file_][0], dc[file_][1] & 0777, dc[file_][2],
1325 time.strftime("%x %X",
1325 time.strftime("%x %X",
1326 time.localtime(dc[file_][3])), file_))
1326 time.localtime(dc[file_][3])), file_))
1327 for f in repo.dirstate.copies:
1327 for f in repo.dirstate.copies:
1328 ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copies[f], f))
1328 ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copies[f], f))
1329
1329
def debugdata(ui, file_, rev):
    """dump the contents of a data file revision"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False),
                      file_[:-2] + ".i", file_, 0)
    try:
        ui.write(r.revision(r.lookup(rev)))
    except KeyError:
        raise util.Abort(_('invalid revision identifier %s'), rev)

def debugindex(ui, file_):
    """dump the contents of an index file"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
    ui.write(" rev offset length base linkrev" +
             " nodeid p1 p2\n")
    for i in range(r.count()):
        node = r.node(i)
        pp = r.parents(node)
        ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
            i, r.start(i), r.length(i), r.base(i), r.linkrev(node),
            short(node), short(pp[0]), short(pp[1])))

def debugindexdot(ui, file_):
    """dump an index DAG as a .dot file"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
    ui.write("digraph G {\n")
    for i in range(r.count()):
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

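# Illustrative sketch (not part of commands.py): rendering the DAG that
# debugindexdot emits. The revlog index path below is hypothetical and the
# sketch assumes the graphviz "dot" tool is installed.
import subprocess

def render_dag(index_file='.hg/data/somefile.txt.i', out='dag.png'):
    dot_src = subprocess.Popen(['hg', 'debugindexdot', index_file],
                               stdout=subprocess.PIPE).communicate()[0]
    p = subprocess.Popen(['dot', '-Tpng', '-o', out], stdin=subprocess.PIPE)
    p.communicate(dot_src)
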
1363 def debugrename(ui, repo, file, rev=None):
1363 def debugrename(ui, repo, file, rev=None):
1364 """dump rename information"""
1364 """dump rename information"""
1365 r = repo.file(relpath(repo, [file])[0])
1365 r = repo.file(relpath(repo, [file])[0])
1366 if rev:
1366 if rev:
1367 try:
1367 try:
1368 # assume all revision numbers are for changesets
1368 # assume all revision numbers are for changesets
1369 n = repo.lookup(rev)
1369 n = repo.lookup(rev)
1370 change = repo.changelog.read(n)
1370 change = repo.changelog.read(n)
1371 m = repo.manifest.read(change[0])
1371 m = repo.manifest.read(change[0])
1372 n = m[relpath(repo, [file])[0]]
1372 n = m[relpath(repo, [file])[0]]
1373 except (hg.RepoError, KeyError):
1373 except (hg.RepoError, KeyError):
1374 n = r.lookup(rev)
1374 n = r.lookup(rev)
1375 else:
1375 else:
1376 n = r.tip()
1376 n = r.tip()
1377 m = r.renamed(n)
1377 m = r.renamed(n)
1378 if m:
1378 if m:
1379 ui.write(_("renamed from %s:%s\n") % (m[0], hex(m[1])))
1379 ui.write(_("renamed from %s:%s\n") % (m[0], hex(m[1])))
1380 else:
1380 else:
1381 ui.write(_("not renamed\n"))
1381 ui.write(_("not renamed\n"))
1382
1382
1383 def debugwalk(ui, repo, *pats, **opts):
1383 def debugwalk(ui, repo, *pats, **opts):
1384 """show how files match on given patterns"""
1384 """show how files match on given patterns"""
1385 items = list(walk(repo, pats, opts))
1385 items = list(walk(repo, pats, opts))
1386 if not items:
1386 if not items:
1387 return
1387 return
1388 fmt = '%%s %%-%ds %%-%ds %%s' % (
1388 fmt = '%%s %%-%ds %%-%ds %%s' % (
1389 max([len(abs) for (src, abs, rel, exact) in items]),
1389 max([len(abs) for (src, abs, rel, exact) in items]),
1390 max([len(rel) for (src, abs, rel, exact) in items]))
1390 max([len(rel) for (src, abs, rel, exact) in items]))
1391 for src, abs, rel, exact in items:
1391 for src, abs, rel, exact in items:
1392 line = fmt % (src, abs, rel, exact and 'exact' or '')
1392 line = fmt % (src, abs, rel, exact and 'exact' or '')
1393 ui.write("%s\n" % line.rstrip())
1393 ui.write("%s\n" % line.rstrip())
1394
1394
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.

    Without the -a option, diff will avoid generating diffs of files
    it detects as binary. With -a, diff will generate a diff anyway,
    probably with undesirable results.
    """
    node1, node2 = revpair(ui, repo, opts['rev'])

    fns, matchfn, anypats = matchpats(repo, pats, opts)

    dodiff(sys.stdout, ui, repo, node1, node2, fns, match=matchfn,
           text=opts['text'], opts=opts)

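# Illustrative sketch (not part of commands.py): the three revision forms the
# diff docstring describes, driven through the hg command line. The revision
# numbers and repository path are hypothetical.
import subprocess

def show_diffs(repo_path):
    # working directory against its parent
    subprocess.check_call(['hg', 'diff'], cwd=repo_path)
    # working directory against revision 10
    subprocess.check_call(['hg', 'diff', '-r', '10'], cwd=repo_path)
    # revision 10 against revision 12
    subprocess.check_call(['hg', 'diff', '-r', '10', '-r', '12'], cwd=repo_path)
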
def doexport(ui, repo, changeset, seqno, total, revwidth, opts):
    node = repo.lookup(changeset)
    parents = [p for p in repo.changelog.parents(node) if p != nullid]
    if opts['switch_parent']:
        parents.reverse()
    prev = (parents and parents[0]) or nullid
    change = repo.changelog.read(node)

    fp = make_file(repo, opts['output'], node, total=total, seqno=seqno,
                   revwidth=revwidth)
    if fp != sys.stdout:
        ui.note("%s\n" % fp.name)

    fp.write("# HG changeset patch\n")
    fp.write("# User %s\n" % change[1])
    fp.write("# Date %d %d\n" % change[2])
    fp.write("# Node ID %s\n" % hex(node))
    fp.write("# Parent %s\n" % hex(prev))
    if len(parents) > 1:
        fp.write("# Parent %s\n" % hex(parents[1]))
    fp.write(change[4].rstrip())
    fp.write("\n\n")

    dodiff(fp, ui, repo, prev, node, text=opts['text'])
    if fp != sys.stdout:
        fp.close()

def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.

    The information shown in the changeset header is: author,
    changeset hash, parent and commit comment.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    %%   literal "%" character
    %H   changeset hash (40 bytes of hexadecimal)
    %N   number of patches being generated
    %R   changeset revision number
    %b   basename of the exporting repository
    %h   short-form changeset hash (12 bytes of hexadecimal)
    %n   zero-padded sequence number, starting at 1
    %r   zero-padded changeset revision number

    Without the -a option, export will avoid generating diffs of files
    it detects as binary. With -a, export will generate a diff anyway,
    probably with undesirable results.

    With the --switch-parent option, the diff will be against the second
    parent. It can be useful to review a merge.
    """
    if not changesets:
        raise util.Abort(_("export requires at least one changeset"))
    seqno = 0
    revs = list(revrange(ui, repo, changesets))
    total = len(revs)
    revwidth = max(map(len, revs))
    msg = len(revs) > 1 and _("Exporting patches:\n") or _("Exporting patch:\n")
    ui.note(msg)
    for cset in revs:
        seqno += 1
        doexport(ui, repo, cset, seqno, total, revwidth, opts)

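# Illustrative sketch (not part of commands.py): how the export/cat output
# format string expands, following the table in the export docstring. This is
# a simplified re-implementation, not Mercurial's make_file(); zero padding is
# approximated and all sample values are hypothetical.
def expand_output_name(pattern, repo_basename, full_hash, rev, seqno, total):
    subs = {
        '%': '%',
        'H': full_hash,
        'N': str(total),
        'R': str(rev),
        'b': repo_basename,
        'h': full_hash[:12],
        'n': '%02d' % seqno,
        'r': str(rev),
    }
    out, i = [], 0
    while i < len(pattern):
        if pattern[i] == '%' and i + 1 < len(pattern):
            out.append(subs.get(pattern[i + 1], pattern[i:i + 2]))
            i += 2
        else:
            out.append(pattern[i])
            i += 1
    return ''.join(out)

# expand_output_name('%b-%r.patch', 'myrepo', '0123456789abcdef0123', 42, 1, 3)
# -> 'myrepo-42.patch'
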
1485 def forget(ui, repo, *pats, **opts):
1485 def forget(ui, repo, *pats, **opts):
1486 """don't add the specified files on the next commit (DEPRECATED)
1486 """don't add the specified files on the next commit (DEPRECATED)
1487
1487
1488 (DEPRECATED)
1488 (DEPRECATED)
1489 Undo an 'hg add' scheduled for the next commit.
1489 Undo an 'hg add' scheduled for the next commit.
1490
1490
1491 This command is now deprecated and will be removed in a future
1491 This command is now deprecated and will be removed in a future
1492 release. Please use revert instead.
1492 release. Please use revert instead.
1493 """
1493 """
1494 ui.warn(_("(the forget command is deprecated; use revert instead)\n"))
1494 ui.warn(_("(the forget command is deprecated; use revert instead)\n"))
1495 forget = []
1495 forget = []
1496 for src, abs, rel, exact in walk(repo, pats, opts):
1496 for src, abs, rel, exact in walk(repo, pats, opts):
1497 if repo.dirstate.state(abs) == 'a':
1497 if repo.dirstate.state(abs) == 'a':
1498 forget.append(abs)
1498 forget.append(abs)
1499 if ui.verbose or not exact:
1499 if ui.verbose or not exact:
1500 ui.status(_('forgetting %s\n') % ((pats and rel) or abs))
1500 ui.status(_('forgetting %s\n') % ((pats and rel) or abs))
1501 repo.forget(forget)
1501 repo.forget(forget)
1502
1502
1503 def grep(ui, repo, pattern, *pats, **opts):
1503 def grep(ui, repo, pattern, *pats, **opts):
1504 """search for a pattern in specified files and revisions
1504 """search for a pattern in specified files and revisions
1505
1505
1506 Search revisions of files for a regular expression.
1506 Search revisions of files for a regular expression.
1507
1507
1508 This command behaves differently than Unix grep. It only accepts
1508 This command behaves differently than Unix grep. It only accepts
1509 Python/Perl regexps. It searches repository history, not the
1509 Python/Perl regexps. It searches repository history, not the
1510 working directory. It always prints the revision number in which
1510 working directory. It always prints the revision number in which
1511 a match appears.
1511 a match appears.
1512
1512
1513 By default, grep only prints output for the first revision of a
1513 By default, grep only prints output for the first revision of a
1514 file in which it finds a match. To get it to print every revision
1514 file in which it finds a match. To get it to print every revision
1515 that contains a change in match status ("-" for a match that
1515 that contains a change in match status ("-" for a match that
1516 becomes a non-match, or "+" for a non-match that becomes a match),
1516 becomes a non-match, or "+" for a non-match that becomes a match),
1517 use the --all flag.
1517 use the --all flag.
1518 """
1518 """
1519 reflags = 0
1519 reflags = 0
1520 if opts['ignore_case']:
1520 if opts['ignore_case']:
1521 reflags |= re.I
1521 reflags |= re.I
1522 regexp = re.compile(pattern, reflags)
1522 regexp = re.compile(pattern, reflags)
1523 sep, eol = ':', '\n'
1523 sep, eol = ':', '\n'
1524 if opts['print0']:
1524 if opts['print0']:
1525 sep = eol = '\0'
1525 sep = eol = '\0'
1526
1526
1527 fcache = {}
1527 fcache = {}
1528 def getfile(fn):
1528 def getfile(fn):
1529 if fn not in fcache:
1529 if fn not in fcache:
1530 fcache[fn] = repo.file(fn)
1530 fcache[fn] = repo.file(fn)
1531 return fcache[fn]
1531 return fcache[fn]
1532
1532
1533 def matchlines(body):
1533 def matchlines(body):
1534 begin = 0
1534 begin = 0
1535 linenum = 0
1535 linenum = 0
1536 while True:
1536 while True:
1537 match = regexp.search(body, begin)
1537 match = regexp.search(body, begin)
1538 if not match:
1538 if not match:
1539 break
1539 break
1540 mstart, mend = match.span()
1540 mstart, mend = match.span()
1541 linenum += body.count('\n', begin, mstart) + 1
1541 linenum += body.count('\n', begin, mstart) + 1
1542 lstart = body.rfind('\n', begin, mstart) + 1 or begin
1542 lstart = body.rfind('\n', begin, mstart) + 1 or begin
1543 lend = body.find('\n', mend)
1543 lend = body.find('\n', mend)
1544 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
1544 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
1545 begin = lend + 1
1545 begin = lend + 1
1546
1546
1547 class linestate(object):
1547 class linestate(object):
1548 def __init__(self, line, linenum, colstart, colend):
1548 def __init__(self, line, linenum, colstart, colend):
1549 self.line = line
1549 self.line = line
1550 self.linenum = linenum
1550 self.linenum = linenum
1551 self.colstart = colstart
1551 self.colstart = colstart
1552 self.colend = colend
1552 self.colend = colend
1553 def __eq__(self, other):
1553 def __eq__(self, other):
1554 return self.line == other.line
1554 return self.line == other.line
1555 def __hash__(self):
1555 def __hash__(self):
1556 return hash(self.line)
1556 return hash(self.line)
1557
1557
1558 matches = {}
1558 matches = {}
1559 def grepbody(fn, rev, body):
1559 def grepbody(fn, rev, body):
1560 matches[rev].setdefault(fn, {})
1560 matches[rev].setdefault(fn, {})
1561 m = matches[rev][fn]
1561 m = matches[rev][fn]
1562 for lnum, cstart, cend, line in matchlines(body):
1562 for lnum, cstart, cend, line in matchlines(body):
1563 s = linestate(line, lnum, cstart, cend)
1563 s = linestate(line, lnum, cstart, cend)
1564 m[s] = s
1564 m[s] = s
1565
1565
1566 # FIXME: prev isn't used, why ?
1566 # FIXME: prev isn't used, why ?
1567 prev = {}
1567 prev = {}
1568 ucache = {}
1568 ucache = {}
1569 def display(fn, rev, states, prevstates):
1569 def display(fn, rev, states, prevstates):
1570 diff = list(sets.Set(states).symmetric_difference(sets.Set(prevstates)))
1570 diff = list(sets.Set(states).symmetric_difference(sets.Set(prevstates)))
1571 diff.sort(lambda x, y: cmp(x.linenum, y.linenum))
1571 diff.sort(lambda x, y: cmp(x.linenum, y.linenum))
1572 counts = {'-': 0, '+': 0}
1572 counts = {'-': 0, '+': 0}
1573 filerevmatches = {}
1573 filerevmatches = {}
1574 for l in diff:
1574 for l in diff:
1575 if incrementing or not opts['all']:
1575 if incrementing or not opts['all']:
1576 change = ((l in prevstates) and '-') or '+'
1576 change = ((l in prevstates) and '-') or '+'
1577 r = rev
1577 r = rev
1578 else:
1578 else:
1579 change = ((l in states) and '-') or '+'
1579 change = ((l in states) and '-') or '+'
1580 r = prev[fn]
1580 r = prev[fn]
1581 cols = [fn, str(rev)]
1581 cols = [fn, str(rev)]
1582 if opts['line_number']:
1582 if opts['line_number']:
1583 cols.append(str(l.linenum))
1583 cols.append(str(l.linenum))
1584 if opts['all']:
1584 if opts['all']:
1585 cols.append(change)
1585 cols.append(change)
1586 if opts['user']:
1586 if opts['user']:
1587 cols.append(trimuser(ui, getchange(rev)[1], rev,
1587 cols.append(trimuser(ui, getchange(rev)[1], rev,
1588 ucache))
1588 ucache))
1589 if opts['files_with_matches']:
1589 if opts['files_with_matches']:
1590 c = (fn, rev)
1590 c = (fn, rev)
1591 if c in filerevmatches:
1591 if c in filerevmatches:
1592 continue
1592 continue
1593 filerevmatches[c] = 1
1593 filerevmatches[c] = 1
1594 else:
1594 else:
1595 cols.append(l.line)
1595 cols.append(l.line)
1596 ui.write(sep.join(cols), eol)
1596 ui.write(sep.join(cols), eol)
1597 counts[change] += 1
1597 counts[change] += 1
1598 return counts['+'], counts['-']
1598 return counts['+'], counts['-']
1599
1599
1600 fstate = {}
1600 fstate = {}
1601 skip = {}
1601 skip = {}
1602 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1602 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1603 count = 0
1603 count = 0
1604 incrementing = False
1604 incrementing = False
1605 for st, rev, fns in changeiter:
1605 for st, rev, fns in changeiter:
1606 if st == 'window':
1606 if st == 'window':
1607 incrementing = rev
1607 incrementing = rev
1608 matches.clear()
1608 matches.clear()
1609 elif st == 'add':
1609 elif st == 'add':
1610 change = repo.changelog.read(repo.lookup(str(rev)))
1610 change = repo.changelog.read(repo.lookup(str(rev)))
1611 mf = repo.manifest.read(change[0])
1611 mf = repo.manifest.read(change[0])
1612 matches[rev] = {}
1612 matches[rev] = {}
1613 for fn in fns:
1613 for fn in fns:
1614 if fn in skip:
1614 if fn in skip:
1615 continue
1615 continue
1616 fstate.setdefault(fn, {})
1616 fstate.setdefault(fn, {})
1617 try:
1617 try:
1618 grepbody(fn, rev, getfile(fn).read(mf[fn]))
1618 grepbody(fn, rev, getfile(fn).read(mf[fn]))
1619 except KeyError:
1619 except KeyError:
1620 pass
1620 pass
1621 elif st == 'iter':
1621 elif st == 'iter':
1622 states = matches[rev].items()
1622 states = matches[rev].items()
1623 states.sort()
1623 states.sort()
1624 for fn, m in states:
1624 for fn, m in states:
1625 if fn in skip:
1625 if fn in skip:
1626 continue
1626 continue
1627 if incrementing or not opts['all'] or fstate[fn]:
1627 if incrementing or not opts['all'] or fstate[fn]:
1628 pos, neg = display(fn, rev, m, fstate[fn])
1628 pos, neg = display(fn, rev, m, fstate[fn])
1629 count += pos + neg
1629 count += pos + neg
1630 if pos and not opts['all']:
1630 if pos and not opts['all']:
1631 skip[fn] = True
1631 skip[fn] = True
1632 fstate[fn] = m
1632 fstate[fn] = m
1633 prev[fn] = rev
1633 prev[fn] = rev
1634
1634
1635 if not incrementing:
1635 if not incrementing:
1636 fstate = fstate.items()
1636 fstate = fstate.items()
1637 fstate.sort()
1637 fstate.sort()
1638 for fn, state in fstate:
1638 for fn, state in fstate:
1639 if fn in skip:
1639 if fn in skip:
1640 continue
1640 continue
1641 display(fn, rev, {}, state)
1641 display(fn, rev, {}, state)
1642 return (count == 0 and 1) or 0
1642 return (count == 0 and 1) or 0
1643
1643
def heads(ui, repo, **opts):
    """show current repository heads

    Show all repository head changesets.

    Repository "heads" are changesets that don't have children
    changesets. They are where development generally takes place and
    are the usual targets for update and merge operations.
    """
    if opts['rev']:
        heads = repo.heads(repo.lookup(opts['rev']))
    else:
        heads = repo.heads()
    br = None
    if opts['branches']:
        br = repo.branchlookup(heads)
    displayer = show_changeset(ui, repo, opts)
    for n in heads:
        displayer.show(changenode=n, brinfo=br)

def identify(ui, repo):
    """print information about the working copy

    Print a short summary of the current state of the repo.

    This summary identifies the repository state using one or two parent
    hash identifiers, followed by a "+" if there are uncommitted changes
    in the working directory, followed by a list of tags for this revision.
    """
    parents = [p for p in repo.dirstate.parents() if p != nullid]
    if not parents:
        ui.write(_("unknown\n"))
        return

    hexfunc = ui.verbose and hex or short
    modified, added, removed, deleted, unknown = repo.changes()
    output = ["%s%s" %
              ('+'.join([hexfunc(parent) for parent in parents]),
               (modified or added or removed or deleted) and "+" or "")]

    if not ui.quiet:
        # multiple tags for a single parent separated by '/'
        parenttags = ['/'.join(tags)
                      for tags in map(repo.nodetags, parents) if tags]
        # tags for multiple parents separated by ' + '
        if parenttags:
            output.append(' + '.join(parenttags))

    ui.write("%s\n" % ' '.join(output))

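# Illustrative sketch (not part of commands.py): composing the identify
# summary the docstring describes from already-known pieces. All input values
# below are hypothetical.
def format_identify(parent_hashes, dirty, tags):
    out = '+'.join(parent_hashes)
    if dirty:
        out += '+'          # uncommitted changes in the working directory
    if tags:
        out += ' ' + '/'.join(tags)
    return out

# format_identify(['8c392cf11a42'], True, ['tip']) -> '8c392cf11a42+ tip'
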
1694 def import_(ui, repo, patch1, *patches, **opts):
1694 def import_(ui, repo, patch1, *patches, **opts):
1695 """import an ordered set of patches
1695 """import an ordered set of patches
1696
1696
1697 Import a list of patches and commit them individually.
1697 Import a list of patches and commit them individually.
1698
1698
1699 If there are outstanding changes in the working directory, import
1699 If there are outstanding changes in the working directory, import
1700 will abort unless given the -f flag.
1700 will abort unless given the -f flag.
1701
1701
1702 You can import a patch straight from a mail message. Even patches
1702 You can import a patch straight from a mail message. Even patches
1703 as attachments work (body part must be type text/plain or
1703 as attachments work (body part must be type text/plain or
1704 text/x-patch to be used). From and Subject headers of email
1704 text/x-patch to be used). From and Subject headers of email
1705 message are used as default committer and commit message. All
1705 message are used as default committer and commit message. All
1706 text/plain body parts before first diff are added to commit
1706 text/plain body parts before first diff are added to commit
1707 message.
1707 message.
1708
1708
1709 If imported patch was generated by hg export, user and description
1709 If imported patch was generated by hg export, user and description
1710 from patch override values from message headers and body. Values
1710 from patch override values from message headers and body. Values
1711 given on command line with -m and -u override these.
1711 given on command line with -m and -u override these.
1712
1712
1713 To read a patch from standard input, use patch name "-".
1713 To read a patch from standard input, use patch name "-".
1714 """
1714 """
1715 patches = (patch1,) + patches
1715 patches = (patch1,) + patches
1716
1716
1717 if not opts['force']:
1717 if not opts['force']:
1718 bail_if_changed(repo)
1718 bail_if_changed(repo)
1719
1719
1720 d = opts["base"]
1720 d = opts["base"]
1721 strip = opts["strip"]
1721 strip = opts["strip"]
1722
1722
1723 mailre = re.compile(r'(?:From |[\w-]+:)')
1723 mailre = re.compile(r'(?:From |[\w-]+:)')
1724
1724
1725 # attempt to detect the start of a patch
1725 # attempt to detect the start of a patch
1726 # (this heuristic is borrowed from quilt)
1726 # (this heuristic is borrowed from quilt)
1727 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |' +
1727 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |' +
1728 'retrieving revision [0-9]+(\.[0-9]+)*$|' +
1728 'retrieving revision [0-9]+(\.[0-9]+)*$|' +
1729 '(---|\*\*\*)[ \t])', re.MULTILINE)
1729 '(---|\*\*\*)[ \t])', re.MULTILINE)
1730
1730
1731 for patch in patches:
1731 for patch in patches:
1732 pf = os.path.join(d, patch)
1732 pf = os.path.join(d, patch)
1733
1733
1734 message = None
1734 message = None
1735 user = None
1735 user = None
1736 date = None
1736 date = None
1737 hgpatch = False
1737 hgpatch = False
1738
1738
1739 p = email.Parser.Parser()
1739 p = email.Parser.Parser()
1740 if pf == '-':
1740 if pf == '-':
1741 msg = p.parse(sys.stdin)
1741 msg = p.parse(sys.stdin)
1742 ui.status(_("applying patch from stdin\n"))
1742 ui.status(_("applying patch from stdin\n"))
1743 else:
1743 else:
1744 msg = p.parse(file(pf))
1744 msg = p.parse(file(pf))
1745 ui.status(_("applying %s\n") % patch)
1745 ui.status(_("applying %s\n") % patch)
1746
1746
1747 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
1747 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
1748 tmpfp = os.fdopen(fd, 'w')
1748 tmpfp = os.fdopen(fd, 'w')
1749 try:
1749 try:
1750 message = msg['Subject']
1750 message = msg['Subject']
1751 if message:
1751 if message:
1752 message = message.replace('\n\t', ' ')
1752 message = message.replace('\n\t', ' ')
1753 ui.debug('Subject: %s\n' % message)
1753 ui.debug('Subject: %s\n' % message)
1754 user = msg['From']
1754 user = msg['From']
1755 if user:
1755 if user:
1756 ui.debug('From: %s\n' % user)
1756 ui.debug('From: %s\n' % user)
1757 diffs_seen = 0
1757 diffs_seen = 0
1758 ok_types = ('text/plain', 'text/x-patch')
1758 ok_types = ('text/plain', 'text/x-patch')
1759 for part in msg.walk():
1759 for part in msg.walk():
1760 content_type = part.get_content_type()
1760 content_type = part.get_content_type()
1761 ui.debug('Content-Type: %s\n' % content_type)
1761 ui.debug('Content-Type: %s\n' % content_type)
1762 if content_type not in ok_types:
1762 if content_type not in ok_types:
1763 continue
1763 continue
1764 payload = part.get_payload(decode=True)
1764 payload = part.get_payload(decode=True)
1765 m = diffre.search(payload)
1765 m = diffre.search(payload)
1766 if m:
1766 if m:
1767 ui.debug(_('found patch at byte %d\n') % m.start(0))
1767 ui.debug(_('found patch at byte %d\n') % m.start(0))
1768 diffs_seen += 1
1768 diffs_seen += 1
1769 hgpatch = False
1769 hgpatch = False
1770 fp = cStringIO.StringIO()
1770 fp = cStringIO.StringIO()
1771 if message:
1771 if message:
1772 fp.write(message)
1772 fp.write(message)
1773 fp.write('\n')
1773 fp.write('\n')
1774 for line in payload[:m.start(0)].splitlines():
1774 for line in payload[:m.start(0)].splitlines():
1775 if line.startswith('# HG changeset patch'):
1775 if line.startswith('# HG changeset patch'):
1776 ui.debug(_('patch generated by hg export\n'))
1776 ui.debug(_('patch generated by hg export\n'))
1777 hgpatch = True
1777 hgpatch = True
1778 # drop earlier commit message content
1778 # drop earlier commit message content
1779 fp.seek(0)
1779 fp.seek(0)
1780 fp.truncate()
1780 fp.truncate()
1781 elif hgpatch:
1781 elif hgpatch:
1782 if line.startswith('# User '):
1782 if line.startswith('# User '):
1783 user = line[7:]
1783 user = line[7:]
1784 ui.debug('From: %s\n' % user)
1784 ui.debug('From: %s\n' % user)
1785 elif line.startswith("# Date "):
1785 elif line.startswith("# Date "):
1786 date = line[7:]
1786 date = line[7:]
1787 if not line.startswith('# '):
1787 if not line.startswith('# '):
1788 fp.write(line)
1788 fp.write(line)
1789 fp.write('\n')
1789 fp.write('\n')
1790 message = fp.getvalue()
1790 message = fp.getvalue()
1791 if tmpfp:
1791 if tmpfp:
1792 tmpfp.write(payload)
1792 tmpfp.write(payload)
1793 if not payload.endswith('\n'):
1793 if not payload.endswith('\n'):
1794 tmpfp.write('\n')
1794 tmpfp.write('\n')
1795 elif not diffs_seen and message and content_type == 'text/plain':
1795 elif not diffs_seen and message and content_type == 'text/plain':
1796 message += '\n' + payload
1796 message += '\n' + payload
1797
1797
1798 if opts['message']:
1798 if opts['message']:
1799 # pickup the cmdline msg
1799 # pickup the cmdline msg
1800 message = opts['message']
1800 message = opts['message']
1801 elif message:
1801 elif message:
1802 # pickup the patch msg
1802 # pickup the patch msg
1803 message = message.strip()
1803 message = message.strip()
1804 else:
1804 else:
1805 # launch the editor
1805 # launch the editor
1806 message = None
1806 message = None
1807 ui.debug(_('message:\n%s\n') % message)
1807 ui.debug(_('message:\n%s\n') % message)
1808
1808
1809 tmpfp.close()
1809 tmpfp.close()
1810 if not diffs_seen:
1810 if not diffs_seen:
1811 raise util.Abort(_('no diffs found'))
1811 raise util.Abort(_('no diffs found'))
1812
1812
1813 files = util.patch(strip, tmpname, ui)
1813 files = util.patch(strip, tmpname, ui)
1814 if len(files) > 0:
1814 if len(files) > 0:
1815 addremove_lock(ui, repo, files, {})
1815 addremove_lock(ui, repo, files, {})
1816 repo.commit(files, message, user, date)
1816 repo.commit(files, message, user, date)
1817 finally:
1817 finally:
1818 os.unlink(tmpname)
1818 os.unlink(tmpname)
1819
1819
1820 def incoming(ui, repo, source="default", **opts):
1820 def incoming(ui, repo, source="default", **opts):
1821 """show new changesets found in source
1821 """show new changesets found in source
1822
1822
1823 Show new changesets found in the specified path/URL or the default
1823 Show new changesets found in the specified path/URL or the default
1824 pull location. These are the changesets that would be pulled if a pull
1824 pull location. These are the changesets that would be pulled if a pull
1825 was requested.
1825 was requested.
1826
1826
1827 For remote repositories, using --bundle avoids downloading the changesets
1827 For remote repositories, using --bundle avoids downloading the changesets
1828 twice if the incoming is followed by a pull.
1828 twice if the incoming is followed by a pull.
1829
1829
1830 See pull for valid source format details.
1830 See pull for valid source format details.
1831 """
1831 """
1832 source = ui.expandpath(source)
1832 source = ui.expandpath(source)
1833 ui.setconfig_remoteopts(**opts)
1833 ui.setconfig_remoteopts(**opts)
1834
1834
1835 other = hg.repository(ui, source)
1835 other = hg.repository(ui, source)
1836 incoming = repo.findincoming(other, force=opts["force"])
1836 incoming = repo.findincoming(other, force=opts["force"])
1837 if not incoming:
1837 if not incoming:
1838 ui.status(_("no changes found\n"))
1838 ui.status(_("no changes found\n"))
1839 return
1839 return
1840
1840
1841 cleanup = None
1841 cleanup = None
1842 try:
1842 try:
1843 fname = opts["bundle"]
1843 fname = opts["bundle"]
1844 if fname or not other.local():
1844 if fname or not other.local():
1845 # create a bundle (uncompressed if other repo is not local)
1845 # create a bundle (uncompressed if other repo is not local)
1846 cg = other.changegroup(incoming, "incoming")
1846 cg = other.changegroup(incoming, "incoming")
1847 fname = cleanup = write_bundle(cg, fname, compress=other.local())
1847 fname = cleanup = write_bundle(cg, fname, compress=other.local())
1848 # keep written bundle?
1848 # keep written bundle?
1849 if opts["bundle"]:
1849 if opts["bundle"]:
1850 cleanup = None
1850 cleanup = None
1851 if not other.local():
1851 if not other.local():
1852 # use the created uncompressed bundlerepo
1852 # use the created uncompressed bundlerepo
1853 other = bundlerepo.bundlerepository(ui, repo.root, fname)
1853 other = bundlerepo.bundlerepository(ui, repo.root, fname)
1854
1854
1855 revs = None
1855 revs = None
1856 if opts['rev']:
1856 if opts['rev']:
1857 revs = [other.lookup(rev) for rev in opts['rev']]
1857 revs = [other.lookup(rev) for rev in opts['rev']]
1858 o = other.changelog.nodesbetween(incoming, revs)[0]
1858 o = other.changelog.nodesbetween(incoming, revs)[0]
1859 if opts['newest_first']:
1859 if opts['newest_first']:
1860 o.reverse()
1860 o.reverse()
1861 displayer = show_changeset(ui, other, opts)
1861 displayer = show_changeset(ui, other, opts)
1862 for n in o:
1862 for n in o:
1863 parents = [p for p in other.changelog.parents(n) if p != nullid]
1863 parents = [p for p in other.changelog.parents(n) if p != nullid]
1864 if opts['no_merges'] and len(parents) == 2:
1864 if opts['no_merges'] and len(parents) == 2:
1865 continue
1865 continue
1866 displayer.show(changenode=n)
1866 displayer.show(changenode=n)
1867 if opts['patch']:
1867 if opts['patch']:
1868 prev = (parents and parents[0]) or nullid
1868 prev = (parents and parents[0]) or nullid
1869 dodiff(ui, ui, other, prev, n)
1869 dodiff(ui, ui, other, prev, n)
1870 ui.write("\n")
1870 ui.write("\n")
1871 finally:
1871 finally:
1872 if hasattr(other, 'close'):
1872 if hasattr(other, 'close'):
1873 other.close()
1873 other.close()
1874 if cleanup:
1874 if cleanup:
1875 os.unlink(cleanup)
1875 os.unlink(cleanup)
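# Illustrative use of --bundle (URL and file name are examples; this assumes
# the saved bundle can then be pulled directly as a local source):
#   hg incoming --bundle incoming.hg http://example.com/repo
#   hg pull incoming.hg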
1876
1876
1877 def init(ui, dest=".", **opts):
1877 def init(ui, dest=".", **opts):
1878 """create a new repository in the given directory
1878 """create a new repository in the given directory
1879
1879
1880 Initialize a new repository in the given directory. If the given
1880 Initialize a new repository in the given directory. If the given
1881 directory does not exist, it is created.
1881 directory does not exist, it is created.
1882
1882
1883 If no directory is given, the current directory is used.
1883 If no directory is given, the current directory is used.
1884
1884
1885 It is possible to specify an ssh:// URL as the destination.
1885 It is possible to specify an ssh:// URL as the destination.
1886 Look at the help text for the pull command for important details
1886 Look at the help text for the pull command for important details
1887 about ssh:// URLs.
1887 about ssh:// URLs.
1888 """
1888 """
1889 ui.setconfig_remoteopts(**opts)
1889 ui.setconfig_remoteopts(**opts)
1890 hg.repository(ui, dest, create=1)
1890 hg.repository(ui, dest, create=1)
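# Example (host and path are illustrative):
#   hg init ssh://user@example.com//srv/hg/project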
1891
1891
1892 def locate(ui, repo, *pats, **opts):
1892 def locate(ui, repo, *pats, **opts):
1893 """locate files matching specific patterns
1893 """locate files matching specific patterns
1894
1894
1895 Print all files under Mercurial control whose names match the
1895 Print all files under Mercurial control whose names match the
1896 given patterns.
1896 given patterns.
1897
1897
1898 This command searches the current directory and its
1898 This command searches the current directory and its
1899 subdirectories. To search an entire repository, move to the root
1899 subdirectories. To search an entire repository, move to the root
1900 of the repository.
1900 of the repository.
1901
1901
1902 If no patterns are given to match, this command prints all file
1902 If no patterns are given to match, this command prints all file
1903 names.
1903 names.
1904
1904
1905 If you want to feed the output of this command into the "xargs"
1905 If you want to feed the output of this command into the "xargs"
1906 command, use the "-0" option to both this command and "xargs".
1906 command, use the "-0" option to both this command and "xargs".
1907 This will avoid the problem of "xargs" treating single filenames
1907 This will avoid the problem of "xargs" treating single filenames
1908 that contain white space as multiple filenames.
1908 that contain white space as multiple filenames.
1909 """
1909 """
1910 end = opts['print0'] and '\0' or '\n'
1910 end = opts['print0'] and '\0' or '\n'
1911 rev = opts['rev']
1911 rev = opts['rev']
1912 if rev:
1912 if rev:
1913 node = repo.lookup(rev)
1913 node = repo.lookup(rev)
1914 else:
1914 else:
1915 node = None
1915 node = None
1916
1916
1917 for src, abs, rel, exact in walk(repo, pats, opts, node=node,
1917 for src, abs, rel, exact in walk(repo, pats, opts, node=node,
1918 head='(?:.*/|)'):
1918 head='(?:.*/|)'):
1919 if not node and repo.dirstate.state(abs) == '?':
1919 if not node and repo.dirstate.state(abs) == '?':
1920 continue
1920 continue
1921 if opts['fullpath']:
1921 if opts['fullpath']:
1922 ui.write(os.path.join(repo.root, abs), end)
1922 ui.write(os.path.join(repo.root, abs), end)
1923 else:
1923 else:
1924 ui.write(((pats and rel) or abs), end)
1924 ui.write(((pats and rel) or abs), end)
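# Example, pairing -0 with xargs -0 as the docstring suggests (the pattern is
# illustrative):
#   hg locate -0 '*.py' | xargs -0 grep -l TODO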
1925
1925
1926 def log(ui, repo, *pats, **opts):
1926 def log(ui, repo, *pats, **opts):
1927 """show revision history of entire repository or files
1927 """show revision history of entire repository or files
1928
1928
1929 Print the revision history of the specified files or the entire project.
1929 Print the revision history of the specified files or the entire project.
1930
1930
1931 By default this command outputs: changeset id and hash, tags,
1931 By default this command outputs: changeset id and hash, tags,
1932 non-trivial parents, user, date and time, and a summary for each
1932 non-trivial parents, user, date and time, and a summary for each
1933 commit. When the -v/--verbose switch is used, the list of changed
1933 commit. When the -v/--verbose switch is used, the list of changed
1934 files and full commit message is shown.
1934 files and full commit message is shown.
1935 """
1935 """
1936 class dui(object):
1936 class dui(object):
1937 # Implement and delegate some ui protocol. Save hunks of
1937 # Implement and delegate some ui protocol. Save hunks of
1938 # output for later display in the desired order.
1938 # output for later display in the desired order.
1939 def __init__(self, ui):
1939 def __init__(self, ui):
1940 self.ui = ui
1940 self.ui = ui
1941 self.hunk = {}
1941 self.hunk = {}
1942 self.header = {}
1942 self.header = {}
1943 def bump(self, rev):
1943 def bump(self, rev):
1944 self.rev = rev
1944 self.rev = rev
1945 self.hunk[rev] = []
1945 self.hunk[rev] = []
1946 self.header[rev] = []
1946 self.header[rev] = []
1947 def note(self, *args):
1947 def note(self, *args):
1948 if self.verbose:
1948 if self.verbose:
1949 self.write(*args)
1949 self.write(*args)
1950 def status(self, *args):
1950 def status(self, *args):
1951 if not self.quiet:
1951 if not self.quiet:
1952 self.write(*args)
1952 self.write(*args)
1953 def write(self, *args):
1953 def write(self, *args):
1954 self.hunk[self.rev].append(args)
1954 self.hunk[self.rev].append(args)
1955 def write_header(self, *args):
1955 def write_header(self, *args):
1956 self.header[self.rev].append(args)
1956 self.header[self.rev].append(args)
1957 def debug(self, *args):
1957 def debug(self, *args):
1958 if self.debugflag:
1958 if self.debugflag:
1959 self.write(*args)
1959 self.write(*args)
1960 def __getattr__(self, key):
1960 def __getattr__(self, key):
1961 return getattr(self.ui, key)
1961 return getattr(self.ui, key)
1962
1962
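# walkchangerevs yields 'window', 'add' and 'iter' events; revisions inside a
# window can be visited out of display order, so each revision's output is
# buffered in a dui instance and flushed only when its 'iter' event arrives.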
1963 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1963 changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
1964
1964
1965 if opts['limit']:
1965 if opts['limit']:
1966 try:
1966 try:
1967 limit = int(opts['limit'])
1967 limit = int(opts['limit'])
1968 except ValueError:
1968 except ValueError:
1969 raise util.Abort(_('limit must be a positive integer'))
1969 raise util.Abort(_('limit must be a positive integer'))
1970 if limit <= 0: raise util.Abort(_('limit must be positive'))
1970 if limit <= 0: raise util.Abort(_('limit must be positive'))
1971 else:
1971 else:
1972 limit = sys.maxint
1972 limit = sys.maxint
1973 count = 0
1973 count = 0
1974
1974
1975 displayer = show_changeset(ui, repo, opts)
1975 displayer = show_changeset(ui, repo, opts)
1976 for st, rev, fns in changeiter:
1976 for st, rev, fns in changeiter:
1977 if st == 'window':
1977 if st == 'window':
1978 du = dui(ui)
1978 du = dui(ui)
1979 displayer.ui = du
1979 displayer.ui = du
1980 elif st == 'add':
1980 elif st == 'add':
1981 du.bump(rev)
1981 du.bump(rev)
1982 changenode = repo.changelog.node(rev)
1982 changenode = repo.changelog.node(rev)
1983 parents = [p for p in repo.changelog.parents(changenode)
1983 parents = [p for p in repo.changelog.parents(changenode)
1984 if p != nullid]
1984 if p != nullid]
1985 if opts['no_merges'] and len(parents) == 2:
1985 if opts['no_merges'] and len(parents) == 2:
1986 continue
1986 continue
1987 if opts['only_merges'] and len(parents) != 2:
1987 if opts['only_merges'] and len(parents) != 2:
1988 continue
1988 continue
1989
1989
1990 if opts['keyword']:
1990 if opts['keyword']:
1991 changes = getchange(rev)
1991 changes = getchange(rev)
1992 miss = 0
1992 miss = 0
1993 for k in [kw.lower() for kw in opts['keyword']]:
1993 for k in [kw.lower() for kw in opts['keyword']]:
1994 if not (k in changes[1].lower() or
1994 if not (k in changes[1].lower() or
1995 k in changes[4].lower() or
1995 k in changes[4].lower() or
1996 k in " ".join(changes[3][:20]).lower()):
1996 k in " ".join(changes[3][:20]).lower()):
1997 miss = 1
1997 miss = 1
1998 break
1998 break
1999 if miss:
1999 if miss:
2000 continue
2000 continue
2001
2001
2002 br = None
2002 br = None
2003 if opts['branches']:
2003 if opts['branches']:
2004 br = repo.branchlookup([repo.changelog.node(rev)])
2004 br = repo.branchlookup([repo.changelog.node(rev)])
2005
2005
2006 displayer.show(rev, brinfo=br)
2006 displayer.show(rev, brinfo=br)
2007 if opts['patch']:
2007 if opts['patch']:
2008 prev = (parents and parents[0]) or nullid
2008 prev = (parents and parents[0]) or nullid
2009 dodiff(du, du, repo, prev, changenode, match=matchfn)
2009 dodiff(du, du, repo, prev, changenode, match=matchfn)
2010 du.write("\n\n")
2010 du.write("\n\n")
2011 elif st == 'iter':
2011 elif st == 'iter':
2012 if count == limit: break
2012 if count == limit: break
2013 if du.header[rev]:
2013 if du.header[rev]:
2014 for args in du.header[rev]:
2014 for args in du.header[rev]:
2015 ui.write_header(*args)
2015 ui.write_header(*args)
2016 if du.hunk[rev]:
2016 if du.hunk[rev]:
2017 count += 1
2017 count += 1
2018 for args in du.hunk[rev]:
2018 for args in du.hunk[rev]:
2019 ui.write(*args)
2019 ui.write(*args)
2020
2020
2021 def manifest(ui, repo, rev=None):
2021 def manifest(ui, repo, rev=None):
2022 """output the latest or given revision of the project manifest
2022 """output the latest or given revision of the project manifest
2023
2023
2024 Print a list of version controlled files for the given revision.
2024 Print a list of version controlled files for the given revision.
2025
2025
2026 The manifest is the list of files being version controlled. If no revision
2026 The manifest is the list of files being version controlled. If no revision
2027 is given then the tip is used.
2027 is given then the tip is used.
2028 """
2028 """
2029 if rev:
2029 if rev:
2030 try:
2030 try:
2031 # assume all revision numbers are for changesets
2031 # assume all revision numbers are for changesets
2032 n = repo.lookup(rev)
2032 n = repo.lookup(rev)
2033 change = repo.changelog.read(n)
2033 change = repo.changelog.read(n)
2034 n = change[0]
2034 n = change[0]
2035 except hg.RepoError:
2035 except hg.RepoError:
2036 n = repo.manifest.lookup(rev)
2036 n = repo.manifest.lookup(rev)
2037 else:
2037 else:
2038 n = repo.manifest.tip()
2038 n = repo.manifest.tip()
2039 m = repo.manifest.read(n)
2039 m = repo.manifest.read(n)
2040 mf = repo.manifest.readflags(n)
2040 mf = repo.manifest.readflags(n)
2041 files = m.keys()
2041 files = m.keys()
2042 files.sort()
2042 files.sort()
2043
2043
2044 for f in files:
2044 for f in files:
2045 ui.write("%40s %3s %s\n" % (hex(m[f]), mf[f] and "755" or "644", f))
2045 ui.write("%40s %3s %s\n" % (hex(m[f]), mf[f] and "755" or "644", f))
2046
2046
2047 def merge(ui, repo, node=None, **opts):
2047 def merge(ui, repo, node=None, **opts):
2048 """Merge working directory with another revision
2048 """Merge working directory with another revision
2049
2049
2050 Merge the contents of the current working directory and the
2050 Merge the contents of the current working directory and the
2051 requested revision. Files that changed between either parent are
2051 requested revision. Files that changed between either parent are
2052 marked as changed for the next commit and a commit must be
2052 marked as changed for the next commit and a commit must be
2053 performed before any further updates are allowed.
2053 performed before any further updates are allowed.
2054 """
2054 """
2055 return doupdate(ui, repo, node=node, merge=True, **opts)
2055 return doupdate(ui, repo, node=node, merge=True, **opts)
2056
2056
2057 def outgoing(ui, repo, dest=None, **opts):
2057 def outgoing(ui, repo, dest=None, **opts):
2058 """show changesets not found in destination
2058 """show changesets not found in destination
2059
2059
2060 Show changesets not found in the specified destination repository or
2060 Show changesets not found in the specified destination repository or
2061 the default push location. These are the changesets that would be pushed
2061 the default push location. These are the changesets that would be pushed
2062 if a push was requested.
2062 if a push was requested.
2063
2063
2064 See pull for valid destination format details.
2064 See pull for valid destination format details.
2065 """
2065 """
2066 dest = ui.expandpath(dest or 'default-push', dest or 'default')
2066 dest = ui.expandpath(dest or 'default-push', dest or 'default')
2067 ui.setconfig_remoteopts(**opts)
2067 ui.setconfig_remoteopts(**opts)
2068 revs = None
2068 revs = None
2069 if opts['rev']:
2069 if opts['rev']:
2070 revs = [repo.lookup(rev) for rev in opts['rev']]
2070 revs = [repo.lookup(rev) for rev in opts['rev']]
2071
2071
2072 other = hg.repository(ui, dest)
2072 other = hg.repository(ui, dest)
2073 o = repo.findoutgoing(other, force=opts['force'])
2073 o = repo.findoutgoing(other, force=opts['force'])
2074 if not o:
2074 if not o:
2075 ui.status(_("no changes found\n"))
2075 ui.status(_("no changes found\n"))
2076 return
2076 return
2077 o = repo.changelog.nodesbetween(o, revs)[0]
2077 o = repo.changelog.nodesbetween(o, revs)[0]
2078 if opts['newest_first']:
2078 if opts['newest_first']:
2079 o.reverse()
2079 o.reverse()
2080 displayer = show_changeset(ui, repo, opts)
2080 displayer = show_changeset(ui, repo, opts)
2081 for n in o:
2081 for n in o:
2082 parents = [p for p in repo.changelog.parents(n) if p != nullid]
2082 parents = [p for p in repo.changelog.parents(n) if p != nullid]
2083 if opts['no_merges'] and len(parents) == 2:
2083 if opts['no_merges'] and len(parents) == 2:
2084 continue
2084 continue
2085 displayer.show(changenode=n)
2085 displayer.show(changenode=n)
2086 if opts['patch']:
2086 if opts['patch']:
2087 prev = (parents and parents[0]) or nullid
2087 prev = (parents and parents[0]) or nullid
2088 dodiff(ui, ui, repo, prev, n)
2088 dodiff(ui, ui, repo, prev, n)
2089 ui.write("\n")
2089 ui.write("\n")
2090
2090
2091 def parents(ui, repo, file_=None, rev=None, branches=None, **opts):
2091 def parents(ui, repo, file_=None, rev=None, branches=None, **opts):
2092 """show the parents of the working dir or revision
2092 """show the parents of the working dir or revision
2093
2093
2094 Print the working directory's parent revisions.
2094 Print the working directory's parent revisions.
2095 """
2095 """
2096 # legacy
2096 # legacy
2097 if file_ and not rev:
2097 if file_ and not rev:
2098 try:
2098 try:
2099 rev = repo.lookup(file_)
2099 rev = repo.lookup(file_)
2100 file_ = None
2100 file_ = None
2101 except hg.RepoError:
2101 except hg.RepoError:
2102 pass
2102 pass
2103 else:
2103 else:
2104 ui.warn(_("'hg parent REV' is deprecated, "
2104 ui.warn(_("'hg parent REV' is deprecated, "
2105 "please use 'hg parents -r REV instead\n"))
2105 "please use 'hg parents -r REV instead\n"))
2106
2106
2107 if rev:
2107 if rev:
2108 if file_:
2108 if file_:
2109 ctx = repo.filectx(file_, changeid=rev)
2109 ctx = repo.filectx(file_, changeid=rev)
2110 else:
2110 else:
2111 ctx = repo.changectx(rev)
2111 ctx = repo.changectx(rev)
2112 p = [cp.node() for cp in ctx.parents()]
2112 p = [cp.node() for cp in ctx.parents()]
2113 else:
2113 else:
2114 p = repo.dirstate.parents()
2114 p = repo.dirstate.parents()
2115
2115
2116 br = None
2116 br = None
2117 if branches is not None:
2117 if branches is not None:
2118 br = repo.branchlookup(p)
2118 br = repo.branchlookup(p)
2119 displayer = show_changeset(ui, repo, opts)
2119 displayer = show_changeset(ui, repo, opts)
2120 for n in p:
2120 for n in p:
2121 if n != nullid:
2121 if n != nullid:
2122 displayer.show(changenode=n, brinfo=br)
2122 displayer.show(changenode=n, brinfo=br)
2123
2123
2124 def paths(ui, repo, search=None):
2124 def paths(ui, repo, search=None):
2125 """show definition of symbolic path names
2125 """show definition of symbolic path names
2126
2126
2127 Show definition of symbolic path name NAME. If no name is given, show
2127 Show definition of symbolic path name NAME. If no name is given, show
2128 definition of available names.
2128 definition of available names.
2129
2129
2130 Path names are defined in the [paths] section of /etc/mercurial/hgrc
2130 Path names are defined in the [paths] section of /etc/mercurial/hgrc
2131 and $HOME/.hgrc. If run inside a repository, .hg/hgrc is used, too.
2131 and $HOME/.hgrc. If run inside a repository, .hg/hgrc is used, too.
2132 """
2132 """
2133 if search:
2133 if search:
2134 for name, path in ui.configitems("paths"):
2134 for name, path in ui.configitems("paths"):
2135 if name == search:
2135 if name == search:
2136 ui.write("%s\n" % path)
2136 ui.write("%s\n" % path)
2137 return
2137 return
2138 ui.warn(_("not found!\n"))
2138 ui.warn(_("not found!\n"))
2139 return 1
2139 return 1
2140 else:
2140 else:
2141 for name, path in ui.configitems("paths"):
2141 for name, path in ui.configitems("paths"):
2142 ui.write("%s = %s\n" % (name, path))
2142 ui.write("%s = %s\n" % (name, path))
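# Example:
#   hg paths            # list every symbolic path name and its target
#   hg paths default    # print only the URL behind "default"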
2143
2143
2144 def postincoming(ui, repo, modheads, optupdate):
2144 def postincoming(ui, repo, modheads, optupdate):
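# modheads is the number of heads added or changed by the preceding pull or
# unbundle: 0 means nothing new arrived, 1 means the single head moved and a
# plain update is safe, and more than one means a merge will be needed.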
2145 if modheads == 0:
2145 if modheads == 0:
2146 return
2146 return
2147 if optupdate:
2147 if optupdate:
2148 if modheads == 1:
2148 if modheads == 1:
2149 return doupdate(ui, repo)
2149 return doupdate(ui, repo)
2150 else:
2150 else:
2151 ui.status(_("not updating, since new heads added\n"))
2151 ui.status(_("not updating, since new heads added\n"))
2152 if modheads > 1:
2152 if modheads > 1:
2153 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
2153 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
2154 else:
2154 else:
2155 ui.status(_("(run 'hg update' to get a working copy)\n"))
2155 ui.status(_("(run 'hg update' to get a working copy)\n"))
2156
2156
2157 def pull(ui, repo, source="default", **opts):
2157 def pull(ui, repo, source="default", **opts):
2158 """pull changes from the specified source
2158 """pull changes from the specified source
2159
2159
2160 Pull changes from a remote repository to a local one.
2160 Pull changes from a remote repository to a local one.
2161
2161
2162 This finds all changes from the repository at the specified path
2162 This finds all changes from the repository at the specified path
2163 or URL and adds them to the local repository. By default, this
2163 or URL and adds them to the local repository. By default, this
2164 does not update the copy of the project in the working directory.
2164 does not update the copy of the project in the working directory.
2165
2165
2166 Valid URLs are of the form:
2166 Valid URLs are of the form:
2167
2167
2168 local/filesystem/path
2168 local/filesystem/path
2169 http://[user@]host[:port]/[path]
2169 http://[user@]host[:port]/[path]
2170 https://[user@]host[:port]/[path]
2170 https://[user@]host[:port]/[path]
2171 ssh://[user@]host[:port]/[path]
2171 ssh://[user@]host[:port]/[path]
2172
2172
2173 Some notes about using SSH with Mercurial:
2173 Some notes about using SSH with Mercurial:
2174 - SSH requires an accessible shell account on the destination machine
2174 - SSH requires an accessible shell account on the destination machine
2175 and a copy of hg in the remote path or specified with remotecmd.
2175 and a copy of hg in the remote path or specified with remotecmd.
2176 - path is relative to the remote user's home directory by default.
2176 - path is relative to the remote user's home directory by default.
2177 Use an extra slash at the start of a path to specify an absolute path:
2177 Use an extra slash at the start of a path to specify an absolute path:
2178 ssh://example.com//tmp/repository
2178 ssh://example.com//tmp/repository
2179 - Mercurial doesn't use its own compression via SSH; the right thing
2179 - Mercurial doesn't use its own compression via SSH; the right thing
2180 to do is to configure it in your ~/.ssh/ssh_config, e.g.:
2180 to do is to configure it in your ~/.ssh/ssh_config, e.g.:
2181 Host *.mylocalnetwork.example.com
2181 Host *.mylocalnetwork.example.com
2182 Compression off
2182 Compression off
2183 Host *
2183 Host *
2184 Compression on
2184 Compression on
2185 Alternatively specify "ssh -C" as your ssh command in your hgrc or
2185 Alternatively specify "ssh -C" as your ssh command in your hgrc or
2186 with the --ssh command line option.
2186 with the --ssh command line option.
2187 """
2187 """
2188 source = ui.expandpath(source)
2188 source = ui.expandpath(source)
2189 ui.setconfig_remoteopts(**opts)
2189 ui.setconfig_remoteopts(**opts)
2190
2190
2191 other = hg.repository(ui, source)
2191 other = hg.repository(ui, source)
2192 ui.status(_('pulling from %s\n') % (source))
2192 ui.status(_('pulling from %s\n') % (source))
2193 revs = None
2193 revs = None
2194 if opts['rev'] and not other.local():
2194 if opts['rev'] and not other.local():
2195 raise util.Abort(_("pull -r doesn't work for remote repositories yet"))
2195 raise util.Abort(_("pull -r doesn't work for remote repositories yet"))
2196 elif opts['rev']:
2196 elif opts['rev']:
2197 revs = [other.lookup(rev) for rev in opts['rev']]
2197 revs = [other.lookup(rev) for rev in opts['rev']]
2198 modheads = repo.pull(other, heads=revs, force=opts['force'])
2198 modheads = repo.pull(other, heads=revs, force=opts['force'])
2199 return postincoming(ui, repo, modheads, opts['update'])
2199 return postincoming(ui, repo, modheads, opts['update'])
2200
2200
2201 def push(ui, repo, dest=None, **opts):
2201 def push(ui, repo, dest=None, **opts):
2202 """push changes to the specified destination
2202 """push changes to the specified destination
2203
2203
2204 Push changes from the local repository to the given destination.
2204 Push changes from the local repository to the given destination.
2205
2205
2206 This is the symmetrical operation for pull. It helps to move
2206 This is the symmetrical operation for pull. It helps to move
2207 changes from the current repository to a different one. If the
2207 changes from the current repository to a different one. If the
2208 destination is local this is identical to a pull in that directory
2208 destination is local this is identical to a pull in that directory
2209 from the current one.
2209 from the current one.
2210
2210
2211 By default, push will refuse to run if it detects the result would
2211 By default, push will refuse to run if it detects the result would
2212 increase the number of remote heads. This generally indicates that
2212 increase the number of remote heads. This generally indicates that
2213 the client has forgotten to sync and merge before pushing.
2213 the client has forgotten to sync and merge before pushing.
2214
2214
2215 Valid URLs are of the form:
2215 Valid URLs are of the form:
2216
2216
2217 local/filesystem/path
2217 local/filesystem/path
2218 ssh://[user@]host[:port]/[path]
2218 ssh://[user@]host[:port]/[path]
2219
2219
2220 Look at the help text for the pull command for important details
2220 Look at the help text for the pull command for important details
2221 about ssh:// URLs.
2221 about ssh:// URLs.
2222
2222
2223 Pushing to http:// and https:// URLs is possible, too, if this
2223 Pushing to http:// and https:// URLs is possible, too, if this
2224 feature is enabled on the remote Mercurial server.
2224 feature is enabled on the remote Mercurial server.
2225 """
2225 """
2226 dest = ui.expandpath(dest or 'default-push', dest or 'default')
2226 dest = ui.expandpath(dest or 'default-push', dest or 'default')
2227 ui.setconfig_remoteopts(**opts)
2227 ui.setconfig_remoteopts(**opts)
2228
2228
2229 other = hg.repository(ui, dest)
2229 other = hg.repository(ui, dest)
2230 ui.status(_('pushing to %s\n') % dest)
2230 ui.status(_('pushing to %s\n') % dest)
2231 revs = None
2231 revs = None
2232 if opts['rev']:
2232 if opts['rev']:
2233 revs = [repo.lookup(rev) for rev in opts['rev']]
2233 revs = [repo.lookup(rev) for rev in opts['rev']]
2234 r = repo.push(other, opts['force'], revs=revs)
2234 r = repo.push(other, opts['force'], revs=revs)
2235 return r == 0
2235 return r == 0
2236
2236
2237 def rawcommit(ui, repo, *flist, **rc):
2237 def rawcommit(ui, repo, *flist, **rc):
2238 """raw commit interface (DEPRECATED)
2238 """raw commit interface (DEPRECATED)
2239
2239
2240 (DEPRECATED)
2240 (DEPRECATED)
2241 Lowlevel commit, for use in helper scripts.
2241 Lowlevel commit, for use in helper scripts.
2242
2242
2243 This command is not intended to be used by normal users, as it is
2243 This command is not intended to be used by normal users, as it is
2244 primarily useful for importing from other SCMs.
2244 primarily useful for importing from other SCMs.
2245
2245
2246 This command is now deprecated and will be removed in a future
2246 This command is now deprecated and will be removed in a future
2247 release, please use debugsetparents and commit instead.
2247 release, please use debugsetparents and commit instead.
2248 """
2248 """
2249
2249
2250 ui.warn(_("(the rawcommit command is deprecated)\n"))
2250 ui.warn(_("(the rawcommit command is deprecated)\n"))
2251
2251
2252 message = rc['message']
2252 message = rc['message']
2253 if not message and rc['logfile']:
2253 if not message and rc['logfile']:
2254 try:
2254 try:
2255 message = open(rc['logfile']).read()
2255 message = open(rc['logfile']).read()
2256 except IOError:
2256 except IOError:
2257 pass
2257 pass
2258 if not message and not rc['logfile']:
2258 if not message and not rc['logfile']:
2259 raise util.Abort(_("missing commit message"))
2259 raise util.Abort(_("missing commit message"))
2260
2260
2261 files = relpath(repo, list(flist))
2261 files = relpath(repo, list(flist))
2262 if rc['files']:
2262 if rc['files']:
2263 files += open(rc['files']).read().splitlines()
2263 files += open(rc['files']).read().splitlines()
2264
2264
2265 rc['parent'] = map(repo.lookup, rc['parent'])
2265 rc['parent'] = map(repo.lookup, rc['parent'])
2266
2266
2267 try:
2267 try:
2268 repo.rawcommit(files, message, rc['user'], rc['date'], *rc['parent'])
2268 repo.rawcommit(files, message, rc['user'], rc['date'], *rc['parent'])
2269 except ValueError, inst:
2269 except ValueError, inst:
2270 raise util.Abort(str(inst))
2270 raise util.Abort(str(inst))
2271
2271
2272 def recover(ui, repo):
2272 def recover(ui, repo):
2273 """roll back an interrupted transaction
2273 """roll back an interrupted transaction
2274
2274
2275 Recover from an interrupted commit or pull.
2275 Recover from an interrupted commit or pull.
2276
2276
2277 This command tries to fix the repository status after an interrupted
2277 This command tries to fix the repository status after an interrupted
2278 operation. It should only be necessary when Mercurial suggests it.
2278 operation. It should only be necessary when Mercurial suggests it.
2279 """
2279 """
2280 if repo.recover():
2280 if repo.recover():
2281 return repo.verify()
2281 return repo.verify()
2282 return 1
2282 return 1
2283
2283
2284 def remove(ui, repo, *pats, **opts):
2284 def remove(ui, repo, *pats, **opts):
2285 """remove the specified files on the next commit
2285 """remove the specified files on the next commit
2286
2286
2287 Schedule the indicated files for removal from the repository.
2287 Schedule the indicated files for removal from the repository.
2288
2288
2289 This command schedules the files to be removed at the next commit.
2289 This command schedules the files to be removed at the next commit.
2290 This only removes files from the current branch, not from the
2290 This only removes files from the current branch, not from the
2291 entire project history. If the files still exist in the working
2291 entire project history. If the files still exist in the working
2292 directory, they will be deleted from it. If invoked with --after,
2292 directory, they will be deleted from it. If invoked with --after,
2293 files that have been manually deleted are marked as removed.
2293 files that have been manually deleted are marked as removed.
2294
2294
2295 Modified files and added files are not removed by default. To
2295 Modified files and added files are not removed by default. To
2296 remove them, use the -f/--force option.
2296 remove them, use the -f/--force option.
2297 """
2297 """
2298 names = []
2298 names = []
2299 if not opts['after'] and not pats:
2299 if not opts['after'] and not pats:
2300 raise util.Abort(_('no files specified'))
2300 raise util.Abort(_('no files specified'))
2301 files, matchfn, anypats = matchpats(repo, pats, opts)
2301 files, matchfn, anypats = matchpats(repo, pats, opts)
2302 exact = dict.fromkeys(files)
2302 exact = dict.fromkeys(files)
2303 mardu = map(dict.fromkeys, repo.changes(files=files, match=matchfn))
2303 mardu = map(dict.fromkeys, repo.changes(files=files, match=matchfn))
2304 modified, added, removed, deleted, unknown = mardu
2304 modified, added, removed, deleted, unknown = mardu
2305 remove, forget = [], []
2305 remove, forget = [], []
2306 for src, abs, rel, exact in walk(repo, pats, opts):
2306 for src, abs, rel, exact in walk(repo, pats, opts):
2307 reason = None
2307 reason = None
2308 if abs not in deleted and opts['after']:
2308 if abs not in deleted and opts['after']:
2309 reason = _('is still present')
2309 reason = _('is still present')
2310 elif abs in modified and not opts['force']:
2310 elif abs in modified and not opts['force']:
2311 reason = _('is modified (use -f to force removal)')
2311 reason = _('is modified (use -f to force removal)')
2312 elif abs in added:
2312 elif abs in added:
2313 if opts['force']:
2313 if opts['force']:
2314 forget.append(abs)
2314 forget.append(abs)
2315 continue
2315 continue
2316 reason = _('has been marked for add (use -f to force removal)')
2316 reason = _('has been marked for add (use -f to force removal)')
2317 elif abs in unknown:
2317 elif abs in unknown:
2318 reason = _('is not managed')
2318 reason = _('is not managed')
2319 elif abs in removed:
2319 elif abs in removed:
2320 continue
2320 continue
2321 if reason:
2321 if reason:
2322 if exact:
2322 if exact:
2323 ui.warn(_('not removing %s: file %s\n') % (rel, reason))
2323 ui.warn(_('not removing %s: file %s\n') % (rel, reason))
2324 else:
2324 else:
2325 if ui.verbose or not exact:
2325 if ui.verbose or not exact:
2326 ui.status(_('removing %s\n') % rel)
2326 ui.status(_('removing %s\n') % rel)
2327 remove.append(abs)
2327 remove.append(abs)
2328 repo.forget(forget)
2328 repo.forget(forget)
2329 repo.remove(remove, unlink=not opts['after'])
2329 repo.remove(remove, unlink=not opts['after'])
2330
2330
2331 def rename(ui, repo, *pats, **opts):
2331 def rename(ui, repo, *pats, **opts):
2332 """rename files; equivalent of copy + remove
2332 """rename files; equivalent of copy + remove
2333
2333
2334 Mark dest as copies of sources; mark sources for deletion. If
2334 Mark dest as copies of sources; mark sources for deletion. If
2335 dest is a directory, copies are put in that directory. If dest is
2335 dest is a directory, copies are put in that directory. If dest is
2336 a file, there can only be one source.
2336 a file, there can only be one source.
2337
2337
2338 By default, this command copies the contents of files as they
2338 By default, this command copies the contents of files as they
2339 stand in the working directory. If invoked with --after, the
2339 stand in the working directory. If invoked with --after, the
2340 operation is recorded, but no copying is performed.
2340 operation is recorded, but no copying is performed.
2341
2341
2342 This command takes effect in the next commit.
2342 This command takes effect in the next commit.
2343
2343
2344 NOTE: This command should be treated as experimental. While it
2344 NOTE: This command should be treated as experimental. While it
2345 should properly record renamed files, this information is not yet
2345 should properly record renamed files, this information is not yet
2346 fully used by merge, nor fully reported by log.
2346 fully used by merge, nor fully reported by log.
2347 """
2347 """
2348 wlock = repo.wlock(0)
2348 wlock = repo.wlock(0)
2349 errs, copied = docopy(ui, repo, pats, opts, wlock)
2349 errs, copied = docopy(ui, repo, pats, opts, wlock)
2350 names = []
2350 names = []
2351 for abs, rel, exact in copied:
2351 for abs, rel, exact in copied:
2352 if ui.verbose or not exact:
2352 if ui.verbose or not exact:
2353 ui.status(_('removing %s\n') % rel)
2353 ui.status(_('removing %s\n') % rel)
2354 names.append(abs)
2354 names.append(abs)
2355 if not opts.get('dry_run'):
2355 if not opts.get('dry_run'):
2356 repo.remove(names, True, wlock)
2356 repo.remove(names, True, wlock)
2357 return errs
2357 return errs
2358
2358
2359 def revert(ui, repo, *pats, **opts):
2359 def revert(ui, repo, *pats, **opts):
2360 """revert files or dirs to their states as of some revision
2360 """revert files or dirs to their states as of some revision
2361
2361
2362 With no revision specified, revert the named files or directories
2362 With no revision specified, revert the named files or directories
2363 to the contents they had in the parent of the working directory.
2363 to the contents they had in the parent of the working directory.
2364 This restores the contents of the affected files to an unmodified
2364 This restores the contents of the affected files to an unmodified
2365 state. If the working directory has two parents, you must
2365 state. If the working directory has two parents, you must
2366 explicitly specify the revision to revert to.
2366 explicitly specify the revision to revert to.
2367
2367
2368 Modified files are saved with a .orig suffix before reverting.
2368 Modified files are saved with a .orig suffix before reverting.
2369 To disable these backups, use --no-backup.
2369 To disable these backups, use --no-backup.
2370
2370
2371 Using the -r option, revert the given files or directories to
2371 Using the -r option, revert the given files or directories to
2372 their contents as of a specific revision. This can be helpful to "roll
2372 their contents as of a specific revision. This can be helpful to "roll
2373 back" some or all of a change that should not have been committed.
2373 back" some or all of a change that should not have been committed.
2374
2374
2375 Revert modifies the working directory. It does not commit any
2375 Revert modifies the working directory. It does not commit any
2376 changes, or change the parent of the working directory. If you
2376 changes, or change the parent of the working directory. If you
2377 revert to a revision other than the parent of the working
2377 revert to a revision other than the parent of the working
2378 directory, the reverted files will thus appear modified
2378 directory, the reverted files will thus appear modified
2379 afterwards.
2379 afterwards.
2380
2380
2381 If a file has been deleted, it is recreated. If the executable
2381 If a file has been deleted, it is recreated. If the executable
2382 mode of a file was changed, it is reset.
2382 mode of a file was changed, it is reset.
2383
2383
2384 If names are given, all files matching the names are reverted.
2384 If names are given, all files matching the names are reverted.
2385
2385
2386 If no arguments are given, all files in the repository are reverted.
2386 If no arguments are given, all files in the repository are reverted.
2387 """
2387 """
2388 parent, p2 = repo.dirstate.parents()
2388 parent, p2 = repo.dirstate.parents()
2389 if opts['rev']:
2389 if opts['rev']:
2390 node = repo.lookup(opts['rev'])
2390 node = repo.lookup(opts['rev'])
2391 elif p2 != nullid:
2391 elif p2 != nullid:
2392 raise util.Abort(_('working dir has two parents; '
2392 raise util.Abort(_('working dir has two parents; '
2393 'you must specify the revision to revert to'))
2393 'you must specify the revision to revert to'))
2394 else:
2394 else:
2395 node = parent
2395 node = parent
2396 mf = repo.manifest.read(repo.changelog.read(node)[0])
2396 mf = repo.manifest.read(repo.changelog.read(node)[0])
2397 if node == parent:
2397 if node == parent:
2398 pmf = mf
2398 pmf = mf
2399 else:
2399 else:
2400 pmf = None
2400 pmf = None
2401
2401
2402 wlock = repo.wlock()
2402 wlock = repo.wlock()
2403
2403
2404 # need all matching names in dirstate and manifest of target rev,
2404 # need all matching names in dirstate and manifest of target rev,
2405 # so have to walk both. do not print errors if files exist in one
2405 # so have to walk both. do not print errors if files exist in one
2406 # but not other.
2406 # but not other.
2407
2407
2408 names = {}
2408 names = {}
2409 target_only = {}
2409 target_only = {}
2410
2410
2411 # walk dirstate.
2411 # walk dirstate.
2412
2412
2413 for src, abs, rel, exact in walk(repo, pats, opts, badmatch=mf.has_key):
2413 for src, abs, rel, exact in walk(repo, pats, opts, badmatch=mf.has_key):
2414 names[abs] = (rel, exact)
2414 names[abs] = (rel, exact)
2415 if src == 'b':
2415 if src == 'b':
2416 target_only[abs] = True
2416 target_only[abs] = True
2417
2417
2418 # walk target manifest.
2418 # walk target manifest.
2419
2419
2420 for src, abs, rel, exact in walk(repo, pats, opts, node=node,
2420 for src, abs, rel, exact in walk(repo, pats, opts, node=node,
2421 badmatch=names.has_key):
2421 badmatch=names.has_key):
2422 if abs in names: continue
2422 if abs in names: continue
2423 names[abs] = (rel, exact)
2423 names[abs] = (rel, exact)
2424 target_only[abs] = True
2424 target_only[abs] = True
2425
2425
2426 changes = repo.changes(match=names.has_key, wlock=wlock)
2426 changes = repo.changes(match=names.has_key, wlock=wlock)
2427 modified, added, removed, deleted, unknown = map(dict.fromkeys, changes)
2427 modified, added, removed, deleted, unknown = map(dict.fromkeys, changes)
2428
2428
2429 revert = ([], _('reverting %s\n'))
2429 revert = ([], _('reverting %s\n'))
2430 add = ([], _('adding %s\n'))
2430 add = ([], _('adding %s\n'))
2431 remove = ([], _('removing %s\n'))
2431 remove = ([], _('removing %s\n'))
2432 forget = ([], _('forgetting %s\n'))
2432 forget = ([], _('forgetting %s\n'))
2433 undelete = ([], _('undeleting %s\n'))
2433 undelete = ([], _('undeleting %s\n'))
2434 update = {}
2434 update = {}
2435
2435
2436 disptable = (
2436 disptable = (
2437 # dispatch table:
2437 # dispatch table:
2438 # file state
2438 # file state
2439 # action if in target manifest
2439 # action if in target manifest
2440 # action if not in target manifest
2440 # action if not in target manifest
2441 # make backup if in target manifest
2441 # make backup if in target manifest
2442 # make backup if not in target manifest
2442 # make backup if not in target manifest
2443 (modified, revert, remove, True, True),
2443 (modified, revert, remove, True, True),
2444 (added, revert, forget, True, False),
2444 (added, revert, forget, True, False),
2445 (removed, undelete, None, False, False),
2445 (removed, undelete, None, False, False),
2446 (deleted, revert, remove, False, False),
2446 (deleted, revert, remove, False, False),
2447 (unknown, add, None, True, False),
2447 (unknown, add, None, True, False),
2448 (target_only, add, None, False, False),
2448 (target_only, add, None, False, False),
2449 )
2449 )
2450
2450
2451 entries = names.items()
2451 entries = names.items()
2452 entries.sort()
2452 entries.sort()
2453
2453
2454 for abs, (rel, exact) in entries:
2454 for abs, (rel, exact) in entries:
2455 mfentry = mf.get(abs)
2455 mfentry = mf.get(abs)
2456 def handle(xlist, dobackup):
2456 def handle(xlist, dobackup):
2457 xlist[0].append(abs)
2457 xlist[0].append(abs)
2458 update[abs] = 1
2458 update[abs] = 1
2459 if dobackup and not opts['no_backup'] and os.path.exists(rel):
2459 if dobackup and not opts['no_backup'] and os.path.exists(rel):
2460 bakname = "%s.orig" % rel
2460 bakname = "%s.orig" % rel
2461 ui.note(_('saving current version of %s as %s\n') %
2461 ui.note(_('saving current version of %s as %s\n') %
2462 (rel, bakname))
2462 (rel, bakname))
2463 if not opts.get('dry_run'):
2463 if not opts.get('dry_run'):
2464 shutil.copyfile(rel, bakname)
2464 shutil.copyfile(rel, bakname)
2465 shutil.copymode(rel, bakname)
2465 shutil.copymode(rel, bakname)
2466 if ui.verbose or not exact:
2466 if ui.verbose or not exact:
2467 ui.status(xlist[1] % rel)
2467 ui.status(xlist[1] % rel)
2468 for table, hitlist, misslist, backuphit, backupmiss in disptable:
2468 for table, hitlist, misslist, backuphit, backupmiss in disptable:
2469 if abs not in table: continue
2469 if abs not in table: continue
2470 # file has changed in dirstate
2470 # file has changed in dirstate
2471 if mfentry:
2471 if mfentry:
2472 handle(hitlist, backuphit)
2472 handle(hitlist, backuphit)
2473 elif misslist is not None:
2473 elif misslist is not None:
2474 handle(misslist, backupmiss)
2474 handle(misslist, backupmiss)
2475 else:
2475 else:
2476 if exact: ui.warn(_('file not managed: %s\n') % rel)
2476 if exact: ui.warn(_('file not managed: %s\n') % rel)
2477 break
2477 break
2478 else:
2478 else:
2479 # file has not changed in dirstate
2479 # file has not changed in dirstate
2480 if node == parent:
2480 if node == parent:
2481 if exact: ui.warn(_('no changes needed to %s\n') % rel)
2481 if exact: ui.warn(_('no changes needed to %s\n') % rel)
2482 continue
2482 continue
2483 if pmf is None:
2483 if pmf is None:
2484 # only need parent manifest in this unlikely case,
2484 # only need parent manifest in this unlikely case,
2485 # so do not read by default
2485 # so do not read by default
2486 pmf = repo.manifest.read(repo.changelog.read(parent)[0])
2486 pmf = repo.manifest.read(repo.changelog.read(parent)[0])
2487 if abs in pmf:
2487 if abs in pmf:
2488 if mfentry:
2488 if mfentry:
2489 # if version of file is same in parent and target
2489 # if version of file is same in parent and target
2490 # manifests, do nothing
2490 # manifests, do nothing
2491 if pmf[abs] != mfentry:
2491 if pmf[abs] != mfentry:
2492 handle(revert, False)
2492 handle(revert, False)
2493 else:
2493 else:
2494 handle(remove, False)
2494 handle(remove, False)
2495
2495
2496 if not opts.get('dry_run'):
2496 if not opts.get('dry_run'):
2497 repo.dirstate.forget(forget[0])
2497 repo.dirstate.forget(forget[0])
2498 r = repo.update(node, False, True, update.has_key, False, wlock=wlock,
2498 r = repo.update(node, False, True, update.has_key, False, wlock=wlock,
2499 show_stats=False)
2499 show_stats=False)
2500 repo.dirstate.update(add[0], 'a')
2500 repo.dirstate.update(add[0], 'a')
2501 repo.dirstate.update(undelete[0], 'n')
2501 repo.dirstate.update(undelete[0], 'n')
2502 repo.dirstate.update(remove[0], 'r')
2502 repo.dirstate.update(remove[0], 'r')
2503 return r
2503 return r
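# Example (revision number and file name are illustrative):
#   hg revert --no-backup -r 42 somefile.c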
2504
2504
2505 def rollback(ui, repo):
2505 def rollback(ui, repo):
2506 """roll back the last transaction in this repository
2506 """roll back the last transaction in this repository
2507
2507
2508 Roll back the last transaction in this repository, restoring the
2508 Roll back the last transaction in this repository, restoring the
2509 project to its state prior to the transaction.
2509 project to its state prior to the transaction.
2510
2510
2511 Transactions are used to encapsulate the effects of all commands
2511 Transactions are used to encapsulate the effects of all commands
2512 that create new changesets or propagate existing changesets into a
2512 that create new changesets or propagate existing changesets into a
2513 repository. For example, the following commands are transactional,
2513 repository. For example, the following commands are transactional,
2514 and their effects can be rolled back:
2514 and their effects can be rolled back:
2515
2515
2516 commit
2516 commit
2517 import
2517 import
2518 pull
2518 pull
2519 push (with this repository as destination)
2519 push (with this repository as destination)
2520 unbundle
2520 unbundle
2521
2521
2522 This command should be used with care. There is only one level of
2522 This command should be used with care. There is only one level of
2523 rollback, and there is no way to undo a rollback.
2523 rollback, and there is no way to undo a rollback.
2524
2524
2525 This command is not intended for use on public repositories. Once
2525 This command is not intended for use on public repositories. Once
2526 changes are visible for pull by other users, rolling a transaction
2526 changes are visible for pull by other users, rolling a transaction
2527 back locally is ineffective (someone else may already have pulled
2527 back locally is ineffective (someone else may already have pulled
2528 the changes). Furthermore, a race is possible with readers of the
2528 the changes). Furthermore, a race is possible with readers of the
2529 repository; for example an in-progress pull from the repository
2529 repository; for example an in-progress pull from the repository
2530 may fail if a rollback is performed.
2530 may fail if a rollback is performed.
2531 """
2531 """
2532 repo.rollback()
2532 repo.rollback()
2533
2533
2534 def root(ui, repo):
2534 def root(ui, repo):
2535 """print the root (top) of the current working dir
2535 """print the root (top) of the current working dir
2536
2536
2537 Print the root directory of the current repository.
2537 Print the root directory of the current repository.
2538 """
2538 """
2539 ui.write(repo.root + "\n")
2539 ui.write(repo.root + "\n")
2540
2540
2541 def serve(ui, repo, **opts):
2541 def serve(ui, repo, **opts):
2542 """export the repository via HTTP
2542 """export the repository via HTTP
2543
2543
2544 Start a local HTTP repository browser and pull server.
2544 Start a local HTTP repository browser and pull server.
2545
2545
2546 By default, the server logs accesses to stdout and errors to
2546 By default, the server logs accesses to stdout and errors to
2547 stderr. Use the "-A" and "-E" options to log to files.
2547 stderr. Use the "-A" and "-E" options to log to files.
2548 """
2548 """
2549
2549
2550 if opts["stdio"]:
2550 if opts["stdio"]:
2551 if repo is None:
2551 if repo is None:
2552 raise hg.RepoError(_('no repo found'))
2552 raise hg.RepoError(_('no repo found'))
2553 s = sshserver.sshserver(ui, repo)
2553 s = sshserver.sshserver(ui, repo)
2554 s.serve_forever()
2554 s.serve_forever()
2555
2555
2556 optlist = ("name templates style address port ipv6"
2556 optlist = ("name templates style address port ipv6"
2557 " accesslog errorlog webdir_conf")
2557 " accesslog errorlog webdir_conf")
2558 for o in optlist.split():
2558 for o in optlist.split():
2559 if opts[o]:
2559 if opts[o]:
2560 ui.setconfig("web", o, opts[o])
2560 ui.setconfig("web", o, opts[o])
2561
2561
2562 if repo is None and not ui.config("web", "webdir_conf"):
2562 if repo is None and not ui.config("web", "webdir_conf"):
2563 raise hg.RepoError(_('no repo found'))
2563 raise hg.RepoError(_('no repo found'))
2564
2564
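# Daemon mode handshake: the parent re-spawns itself with --daemon-pipefds
# and blocks on the read end of the pipe; the child writes one byte (the 'y'
# below) once the server socket is set up, so the parent only exits after
# the server is ready.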
2565 if opts['daemon'] and not opts['daemon_pipefds']:
2565 if opts['daemon'] and not opts['daemon_pipefds']:
2566 rfd, wfd = os.pipe()
2566 rfd, wfd = os.pipe()
2567 args = sys.argv[:]
2567 args = sys.argv[:]
2568 args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
2568 args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
2569 pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
2569 pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
2570 args[0], args)
2570 args[0], args)
2571 os.close(wfd)
2571 os.close(wfd)
2572 os.read(rfd, 1)
2572 os.read(rfd, 1)
2573 os._exit(0)
2573 os._exit(0)
2574
2574
2575 try:
2575 try:
2576 httpd = hgweb.server.create_server(ui, repo)
2576 httpd = hgweb.server.create_server(ui, repo)
2577 except socket.error, inst:
2577 except socket.error, inst:
2578 raise util.Abort(_('cannot start server: ') + inst.args[1])
2578 raise util.Abort(_('cannot start server: ') + inst.args[1])
2579
2579
2580 if ui.verbose:
2580 if ui.verbose:
2581 addr, port = httpd.socket.getsockname()
2581 addr, port = httpd.socket.getsockname()
2582 if addr == '0.0.0.0':
2582 if addr == '0.0.0.0':
2583 addr = socket.gethostname()
2583 addr = socket.gethostname()
2584 else:
2584 else:
2585 try:
2585 try:
2586 addr = socket.gethostbyaddr(addr)[0]
2586 addr = socket.gethostbyaddr(addr)[0]
2587 except socket.error:
2587 except socket.error:
2588 pass
2588 pass
2589 if port != 80:
2589 if port != 80:
2590 ui.status(_('listening at http://%s:%d/\n') % (addr, port))
2590 ui.status(_('listening at http://%s:%d/\n') % (addr, port))
2591 else:
2591 else:
2592 ui.status(_('listening at http://%s/\n') % addr)
2592 ui.status(_('listening at http://%s/\n') % addr)
2593
2593
2594 if opts['pid_file']:
2594 if opts['pid_file']:
2595 fp = open(opts['pid_file'], 'w')
2595 fp = open(opts['pid_file'], 'w')
2596 fp.write(str(os.getpid()) + '\n')
2596 fp.write(str(os.getpid()) + '\n')
2597 fp.close()
2597 fp.close()
2598
2598
2599 if opts['daemon_pipefds']:
2599 if opts['daemon_pipefds']:
2600 rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
2600 rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
2601 os.close(rfd)
2601 os.close(rfd)
2602 os.write(wfd, 'y')
2602 os.write(wfd, 'y')
2603 os.close(wfd)
2603 os.close(wfd)
2604 sys.stdout.flush()
2604 sys.stdout.flush()
2605 sys.stderr.flush()
2605 sys.stderr.flush()
2606 fd = os.open(util.nulldev, os.O_RDWR)
2606 fd = os.open(util.nulldev, os.O_RDWR)
2607 if fd != 0: os.dup2(fd, 0)
2607 if fd != 0: os.dup2(fd, 0)
2608 if fd != 1: os.dup2(fd, 1)
2608 if fd != 1: os.dup2(fd, 1)
2609 if fd != 2: os.dup2(fd, 2)
2609 if fd != 2: os.dup2(fd, 2)
2610 if fd not in (0, 1, 2): os.close(fd)
2610 if fd not in (0, 1, 2): os.close(fd)
2611
2611
2612 httpd.serve_forever()
2612 httpd.serve_forever()
2613
2613
2614 def status(ui, repo, *pats, **opts):
2614 def status(ui, repo, *pats, **opts):
2615 """show changed files in the working directory
2615 """show changed files in the working directory
2616
2616
2617 Show status of files in the repository. If names are given, only
2617 Show status of files in the repository. If names are given, only
2618 files that match are shown. Files that are clean or ignored are
2618 files that match are shown. Files that are clean or ignored are
2619 not listed unless -c (clean), -i (ignored) or -A is given.
2619 not listed unless -c (clean), -i (ignored) or -A is given.
2620
2620
2621 The codes used to show the status of files are:
2621 The codes used to show the status of files are:
2622 M = modified
2622 M = modified
2623 A = added
2623 A = added
2624 R = removed
2624 R = removed
2625 C = clean
2625 C = clean
2626 ! = deleted, but still tracked
2626 ! = deleted, but still tracked
2627 ? = not tracked
2627 ? = not tracked
2628 I = ignored (not shown by default)
2628 I = ignored (not shown by default)
2629 = the previous added file was copied from here
2629 = the previous added file was copied from here
2630 """
2630 """
2631
2631
2632 all = opts['all']
2632 all = opts['all']
2633
2633
2634 files, matchfn, anypats = matchpats(repo, pats, opts)
2634 files, matchfn, anypats = matchpats(repo, pats, opts)
2635 cwd = (pats and repo.getcwd()) or ''
2635 cwd = (pats and repo.getcwd()) or ''
2636 modified, added, removed, deleted, unknown, ignored, clean = [
2636 modified, added, removed, deleted, unknown, ignored, clean = [
2637 [util.pathto(cwd, x) for x in n]
2637 [util.pathto(cwd, x) for x in n]
2638 for n in repo.status(files=files, match=matchfn,
2638 for n in repo.status(files=files, match=matchfn,
2639 list_ignored=all or opts['ignored'],
2639 list_ignored=all or opts['ignored'],
2640 list_clean=all or opts['clean'])]
2640 list_clean=all or opts['clean'])]
2641
2641
2642 changetypes = (('modified', 'M', modified),
2642 changetypes = (('modified', 'M', modified),
2643 ('added', 'A', added),
2643 ('added', 'A', added),
2644 ('removed', 'R', removed),
2644 ('removed', 'R', removed),
2645 ('deleted', '!', deleted),
2645 ('deleted', '!', deleted),
2646 ('unknown', '?', unknown),
2646 ('unknown', '?', unknown),
2647 ('ignored', 'I', ignored))
2647 ('ignored', 'I', ignored))
2648
2648
2649 explicit_changetypes = changetypes + (('clean', 'C', clean),)
2649 explicit_changetypes = changetypes + (('clean', 'C', clean),)
2650
2650
2651 end = opts['print0'] and '\0' or '\n'
2651 end = opts['print0'] and '\0' or '\n'
2652
2652
2653 for opt, char, changes in ([ct for ct in explicit_changetypes
2653 for opt, char, changes in ([ct for ct in explicit_changetypes
2654 if all or opts[ct[0]]]
2654 if all or opts[ct[0]]]
2655 or changetypes):
2655 or changetypes):
2656 if opts['no_status']:
2656 if opts['no_status']:
2657 format = "%%s%s" % end
2657 format = "%%s%s" % end
2658 else:
2658 else:
2659 format = "%s %%s%s" % (char, end)
2659 format = "%s %%s%s" % (char, end)
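# With the default options the per-file line below comes out as e.g. "M foo.c"
# plus end (the file name is illustrative); --no-status drops the leading code,
# and --print0 turns end into a NUL byte instead of a newline, for xargs.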
2660
2660
2661 for f in changes:
2661 for f in changes:
2662 ui.write(format % f)
2662 ui.write(format % f)
2663 if ((all or opts.get('copies')) and not opts.get('no_status')
2663 if ((all or opts.get('copies')) and not opts.get('no_status')
2664 and opt == 'added' and repo.dirstate.copies.has_key(f)):
2664 and opt == 'added' and repo.dirstate.copies.has_key(f)):
2665 ui.write(' %s%s' % (repo.dirstate.copies[f], end))
2665 ui.write(' %s%s' % (repo.dirstate.copies[f], end))
2666
2666
2667 def tag(ui, repo, name, rev_=None, **opts):
2667 def tag(ui, repo, name, rev_=None, **opts):
2668 """add a tag for the current tip or a given revision
2668 """add a tag for the current tip or a given revision
2669
2669
2670 Name a particular revision using <name>.
2670 Name a particular revision using <name>.
2671
2671
2672 Tags are used to name particular revisions of the repository and are
2672 Tags are used to name particular revisions of the repository and are
2673 very useful to compare different revisions, to go back to significant
2673 very useful to compare different revisions, to go back to significant
2674 earlier versions or to mark branch points as releases, etc.
2674 earlier versions or to mark branch points as releases, etc.
2675
2675
2676 If no revision is given, the parent of the working directory is used.
2676 If no revision is given, the parent of the working directory is used.
2677
2677
2678 To facilitate version control, distribution, and merging of tags,
2678 To facilitate version control, distribution, and merging of tags,
2679 they are stored as a file named ".hgtags" which is managed
2679 they are stored as a file named ".hgtags" which is managed
2680 similarly to other project files and can be hand-edited if
2680 similarly to other project files and can be hand-edited if
2681 necessary. The file '.hg/localtags' is used for local tags (not
2681 necessary. The file '.hg/localtags' is used for local tags (not
2682 shared among repositories).
2682 shared among repositories).
2683 """
2683 """
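# Format note (assumption from standard Mercurial behaviour, not shown in this
# excerpt): each line of .hgtags pairs the full 40-character changeset hash
# with the tag name, e.g. "0123...cdef v1.0", and repo.tag() below appends
# such a line unless --local is given.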
2684 if name == "tip":
2684 if name == "tip":
2685 raise util.Abort(_("the name 'tip' is reserved"))
2685 raise util.Abort(_("the name 'tip' is reserved"))
2686 if rev_ is not None:
2686 if rev_ is not None:
2687 ui.warn(_("use of 'hg tag NAME [REV]' is deprecated, "
2687 ui.warn(_("use of 'hg tag NAME [REV]' is deprecated, "
2688 "please use 'hg tag [-r REV] NAME' instead\n"))
2688 "please use 'hg tag [-r REV] NAME' instead\n"))
2689 if opts['rev']:
2689 if opts['rev']:
2690 raise util.Abort(_("use only one form to specify the revision"))
2690 raise util.Abort(_("use only one form to specify the revision"))
2691 if opts['rev']:
2691 if opts['rev']:
2692 rev_ = opts['rev']
2692 rev_ = opts['rev']
2693 if rev_:
2693 if rev_:
2694 r = hex(repo.lookup(rev_))
2694 r = hex(repo.lookup(rev_))
2695 else:
2695 else:
2696 p1, p2 = repo.dirstate.parents()
2696 p1, p2 = repo.dirstate.parents()
2697 if p1 == nullid:
2697 if p1 == nullid:
2698 raise util.Abort(_('no revision to tag'))
2698 raise util.Abort(_('no revision to tag'))
2699 if p2 != nullid:
2699 if p2 != nullid:
2700 raise util.Abort(_('outstanding uncommitted merges'))
2700 raise util.Abort(_('outstanding uncommitted merges'))
2701 r = hex(p1)
2701 r = hex(p1)
2702
2702
2703 repo.tag(name, r, opts['local'], opts['message'], opts['user'],
2703 repo.tag(name, r, opts['local'], opts['message'], opts['user'],
2704 opts['date'])
2704 opts['date'])
2705
2705
2706 def tags(ui, repo):
2706 def tags(ui, repo):
2707 """list repository tags
2707 """list repository tags
2708
2708
2709 List the repository tags.
2709 List the repository tags.
2710
2710
2711 This lists both regular and local tags.
2711 This lists both regular and local tags.
2712 """
2712 """
2713
2713
2714 l = repo.tagslist()
2714 l = repo.tagslist()
2715 l.reverse()
2715 l.reverse()
2716 for t, n in l:
2716 for t, n in l:
2717 try:
2717 try:
2718 r = "%5d:%s" % (repo.changelog.rev(n), hex(n))
2718 r = "%5d:%s" % (repo.changelog.rev(n), hex(n))
2719 except KeyError:
2719 except KeyError:
2720 r = " ?:?"
2720 r = " ?:?"
2721 if ui.quiet:
2721 if ui.quiet:
2722 ui.write("%s\n" % t)
2722 ui.write("%s\n" % t)
2723 else:
2723 else:
2724 ui.write("%-30s %s\n" % (t, r))
2724 ui.write("%-30s %s\n" % (t, r))
2725
2725
2726 def tip(ui, repo, **opts):
2726 def tip(ui, repo, **opts):
2727 """show the tip revision
2727 """show the tip revision
2728
2728
2729 Show the tip revision.
2729 Show the tip revision.
2730 """
2730 """
2731 n = repo.changelog.tip()
2731 n = repo.changelog.tip()
2732 br = None
2732 br = None
2733 if opts['branches']:
2733 if opts['branches']:
2734 br = repo.branchlookup([n])
2734 br = repo.branchlookup([n])
2735 show_changeset(ui, repo, opts).show(changenode=n, brinfo=br)
2735 show_changeset(ui, repo, opts).show(changenode=n, brinfo=br)
2736 if opts['patch']:
2736 if opts['patch']:
2737 dodiff(ui, ui, repo, repo.changelog.parents(n)[0], n)
2737 dodiff(ui, ui, repo, repo.changelog.parents(n)[0], n)
2738
2738
2739 def unbundle(ui, repo, fname, **opts):
2739 def unbundle(ui, repo, fname, **opts):
2740 """apply a changegroup file
2740 """apply a changegroup file
2741
2741
2742 Apply a compressed changegroup file generated by the bundle
2742 Apply a compressed changegroup file generated by the bundle
2743 command.
2743 command.
2744 """
2744 """
2745 f = urllib.urlopen(fname)
2745 f = urllib.urlopen(fname)
2746
2746
2747 header = f.read(6)
2747 header = f.read(6)
2748 if not header.startswith("HG"):
2748 if not header.startswith("HG"):
2749 raise util.Abort(_("%s: not a Mercurial bundle file") % fname)
2749 raise util.Abort(_("%s: not a Mercurial bundle file") % fname)
2750 elif not header.startswith("HG10"):
2750 elif not header.startswith("HG10"):
2751 raise util.Abort(_("%s: unknown bundle version") % fname)
2751 raise util.Abort(_("%s: unknown bundle version") % fname)
2752 elif header == "HG10BZ":
2752 elif header == "HG10BZ":
2753 def generator(f):
2753 def generator(f):
2754 zd = bz2.BZ2Decompressor()
2754 zd = bz2.BZ2Decompressor()
2755 zd.decompress("BZ")
2755 zd.decompress("BZ")
2756 for chunk in f:
2756 for chunk in f:
2757 yield zd.decompress(chunk)
2757 yield zd.decompress(chunk)
2758 elif header == "HG10UN":
2758 elif header == "HG10UN":
2759 def generator(f):
2759 def generator(f):
2760 for chunk in f:
2760 for chunk in f:
2761 yield chunk
2761 yield chunk
2762 else:
2762 else:
2763 raise util.Abort(_("%s: unknown bundle compression type")
2763 raise util.Abort(_("%s: unknown bundle compression type")
2764 % fname)
2764 % fname)
2765 gen = generator(util.filechunkiter(f, 4096))
2765 gen = generator(util.filechunkiter(f, 4096))
2766 modheads = repo.addchangegroup(util.chunkbuffer(gen), 'unbundle')
2766 modheads = repo.addchangegroup(util.chunkbuffer(gen), 'unbundle',
2767 'bundle:' + fname)
2767 return postincoming(ui, repo, modheads, opts['update'])
2768 return postincoming(ui, repo, modheads, opts['update'])
2768
2769
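# Illustrative sketch, not part of the original source: a standalone helper
# mirroring the 6-byte header dispatch that unbundle() performs above. The
# helper name and the idea of calling it separately are assumptions.
def _sniff_bundle_compression(path):
    """Return 'bzip2' or 'none' for a bundle file, or raise ValueError."""
    f = open(path, 'rb')
    try:
        header = f.read(6)
    finally:
        f.close()
    if not header.startswith("HG"):
        raise ValueError("%s: not a Mercurial bundle file" % path)
    if not header.startswith("HG10"):
        raise ValueError("%s: unknown bundle version" % path)
    if header == "HG10BZ":
        return 'bzip2'    # payload after the header is bzip2-compressed
    if header == "HG10UN":
        return 'none'     # uncompressed changegroup follows
    raise ValueError("%s: unknown bundle compression type" % path)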
2769 def undo(ui, repo):
2770 def undo(ui, repo):
2770 """undo the last commit or pull (DEPRECATED)
2771 """undo the last commit or pull (DEPRECATED)
2771
2772
2772 (DEPRECATED)
2773 (DEPRECATED)
2773 This command is now deprecated and will be removed in a future
2774 This command is now deprecated and will be removed in a future
2774 release. Please use the rollback command instead. For usage
2775 release. Please use the rollback command instead. For usage
2775 instructions, see the rollback command.
2776 instructions, see the rollback command.
2776 """
2777 """
2777 ui.warn(_('(the undo command is deprecated; use rollback instead)\n'))
2778 ui.warn(_('(the undo command is deprecated; use rollback instead)\n'))
2778 repo.rollback()
2779 repo.rollback()
2779
2780
2780 def update(ui, repo, node=None, merge=False, clean=False, force=None,
2781 def update(ui, repo, node=None, merge=False, clean=False, force=None,
2781 branch=None, **opts):
2782 branch=None, **opts):
2782 """update or merge working directory
2783 """update or merge working directory
2783
2784
2784 Update the working directory to the specified revision.
2785 Update the working directory to the specified revision.
2785
2786
2786 If there are no outstanding changes in the working directory and
2787 If there are no outstanding changes in the working directory and
2787 there is a linear relationship between the current version and the
2788 there is a linear relationship between the current version and the
2788 requested version, the result is the requested version.
2789 requested version, the result is the requested version.
2789
2790
2790 To merge the working directory with another revision, use the
2791 To merge the working directory with another revision, use the
2791 merge command.
2792 merge command.
2792
2793
2793 By default, update will refuse to run if doing so would require
2794 By default, update will refuse to run if doing so would require
2794 merging or discarding local changes.
2795 merging or discarding local changes.
2795 """
2796 """
2796 if merge:
2797 if merge:
2797 ui.warn(_('(the -m/--merge option is deprecated; '
2798 ui.warn(_('(the -m/--merge option is deprecated; '
2798 'use the merge command instead)\n'))
2799 'use the merge command instead)\n'))
2799 return doupdate(ui, repo, node, merge, clean, force, branch, **opts)
2800 return doupdate(ui, repo, node, merge, clean, force, branch, **opts)
2800
2801
2801 def doupdate(ui, repo, node=None, merge=False, clean=False, force=None,
2802 def doupdate(ui, repo, node=None, merge=False, clean=False, force=None,
2802 branch=None, **opts):
2803 branch=None, **opts):
2803 if branch:
2804 if branch:
2804 br = repo.branchlookup(branch=branch)
2805 br = repo.branchlookup(branch=branch)
2805 found = []
2806 found = []
2806 for x in br:
2807 for x in br:
2807 if branch in br[x]:
2808 if branch in br[x]:
2808 found.append(x)
2809 found.append(x)
2809 if len(found) > 1:
2810 if len(found) > 1:
2810 ui.warn(_("Found multiple heads for %s\n") % branch)
2811 ui.warn(_("Found multiple heads for %s\n") % branch)
2811 for x in found:
2812 for x in found:
2812 show_changeset(ui, repo, opts).show(changenode=x, brinfo=br)
2813 show_changeset(ui, repo, opts).show(changenode=x, brinfo=br)
2813 return 1
2814 return 1
2814 if len(found) == 1:
2815 if len(found) == 1:
2815 node = found[0]
2816 node = found[0]
2816 ui.warn(_("Using head %s for branch %s\n") % (short(node), branch))
2817 ui.warn(_("Using head %s for branch %s\n") % (short(node), branch))
2817 else:
2818 else:
2818 ui.warn(_("branch %s not found\n") % (branch))
2819 ui.warn(_("branch %s not found\n") % (branch))
2819 return 1
2820 return 1
2820 else:
2821 else:
2821 node = node and repo.lookup(node) or repo.changelog.tip()
2822 node = node and repo.lookup(node) or repo.changelog.tip()
2822 return repo.update(node, allow=merge, force=clean, forcemerge=force)
2823 return repo.update(node, allow=merge, force=clean, forcemerge=force)
2823
2824
2824 def verify(ui, repo):
2825 def verify(ui, repo):
2825 """verify the integrity of the repository
2826 """verify the integrity of the repository
2826
2827
2827 Verify the integrity of the current repository.
2828 Verify the integrity of the current repository.
2828
2829
2829 This will perform an extensive check of the repository's
2830 This will perform an extensive check of the repository's
2830 integrity, validating the hashes and checksums of each entry in
2831 integrity, validating the hashes and checksums of each entry in
2831 the changelog, manifest, and tracked files, as well as the
2832 the changelog, manifest, and tracked files, as well as the
2832 integrity of their crosslinks and indices.
2833 integrity of their crosslinks and indices.
2833 """
2834 """
2834 return repo.verify()
2835 return repo.verify()
2835
2836
2836 # Command options and aliases are listed here, alphabetically
2837 # Command options and aliases are listed here, alphabetically
2837
2838
2838 table = {
2839 table = {
2839 "^add":
2840 "^add":
2840 (add,
2841 (add,
2841 [('I', 'include', [], _('include names matching the given patterns')),
2842 [('I', 'include', [], _('include names matching the given patterns')),
2842 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2843 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2843 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2844 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2844 _('hg add [OPTION]... [FILE]...')),
2845 _('hg add [OPTION]... [FILE]...')),
2845 "debugaddremove|addremove":
2846 "debugaddremove|addremove":
2846 (addremove,
2847 (addremove,
2847 [('I', 'include', [], _('include names matching the given patterns')),
2848 [('I', 'include', [], _('include names matching the given patterns')),
2848 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2849 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2849 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2850 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2850 _('hg addremove [OPTION]... [FILE]...')),
2851 _('hg addremove [OPTION]... [FILE]...')),
2851 "^annotate":
2852 "^annotate":
2852 (annotate,
2853 (annotate,
2853 [('r', 'rev', '', _('annotate the specified revision')),
2854 [('r', 'rev', '', _('annotate the specified revision')),
2854 ('a', 'text', None, _('treat all files as text')),
2855 ('a', 'text', None, _('treat all files as text')),
2855 ('u', 'user', None, _('list the author')),
2856 ('u', 'user', None, _('list the author')),
2856 ('d', 'date', None, _('list the date')),
2857 ('d', 'date', None, _('list the date')),
2857 ('n', 'number', None, _('list the revision number (default)')),
2858 ('n', 'number', None, _('list the revision number (default)')),
2858 ('c', 'changeset', None, _('list the changeset')),
2859 ('c', 'changeset', None, _('list the changeset')),
2859 ('I', 'include', [], _('include names matching the given patterns')),
2860 ('I', 'include', [], _('include names matching the given patterns')),
2860 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2861 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2861 _('hg annotate [-r REV] [-a] [-u] [-d] [-n] [-c] FILE...')),
2862 _('hg annotate [-r REV] [-a] [-u] [-d] [-n] [-c] FILE...')),
2862 "archive":
2863 "archive":
2863 (archive,
2864 (archive,
2864 [('', 'no-decode', None, _('do not pass files through decoders')),
2865 [('', 'no-decode', None, _('do not pass files through decoders')),
2865 ('p', 'prefix', '', _('directory prefix for files in archive')),
2866 ('p', 'prefix', '', _('directory prefix for files in archive')),
2866 ('r', 'rev', '', _('revision to distribute')),
2867 ('r', 'rev', '', _('revision to distribute')),
2867 ('t', 'type', '', _('type of distribution to create')),
2868 ('t', 'type', '', _('type of distribution to create')),
2868 ('I', 'include', [], _('include names matching the given patterns')),
2869 ('I', 'include', [], _('include names matching the given patterns')),
2869 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2870 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2870 _('hg archive [OPTION]... DEST')),
2871 _('hg archive [OPTION]... DEST')),
2871 "backout":
2872 "backout":
2872 (backout,
2873 (backout,
2873 [('', 'merge', None,
2874 [('', 'merge', None,
2874 _('merge with old dirstate parent after backout')),
2875 _('merge with old dirstate parent after backout')),
2875 ('m', 'message', '', _('use <text> as commit message')),
2876 ('m', 'message', '', _('use <text> as commit message')),
2876 ('l', 'logfile', '', _('read commit message from <file>')),
2877 ('l', 'logfile', '', _('read commit message from <file>')),
2877 ('d', 'date', '', _('record datecode as commit date')),
2878 ('d', 'date', '', _('record datecode as commit date')),
2878 ('', 'parent', '', _('parent to choose when backing out merge')),
2879 ('', 'parent', '', _('parent to choose when backing out merge')),
2879 ('u', 'user', '', _('record user as committer')),
2880 ('u', 'user', '', _('record user as committer')),
2880 ('I', 'include', [], _('include names matching the given patterns')),
2881 ('I', 'include', [], _('include names matching the given patterns')),
2881 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2882 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2882 _('hg backout [OPTION]... REV')),
2883 _('hg backout [OPTION]... REV')),
2883 "bundle":
2884 "bundle":
2884 (bundle,
2885 (bundle,
2885 [('f', 'force', None,
2886 [('f', 'force', None,
2886 _('run even when remote repository is unrelated'))],
2887 _('run even when remote repository is unrelated'))],
2887 _('hg bundle FILE DEST')),
2888 _('hg bundle FILE DEST')),
2888 "cat":
2889 "cat":
2889 (cat,
2890 (cat,
2890 [('o', 'output', '', _('print output to file with formatted name')),
2891 [('o', 'output', '', _('print output to file with formatted name')),
2891 ('r', 'rev', '', _('print the given revision')),
2892 ('r', 'rev', '', _('print the given revision')),
2892 ('I', 'include', [], _('include names matching the given patterns')),
2893 ('I', 'include', [], _('include names matching the given patterns')),
2893 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2894 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2894 _('hg cat [OPTION]... FILE...')),
2895 _('hg cat [OPTION]... FILE...')),
2895 "^clone":
2896 "^clone":
2896 (clone,
2897 (clone,
2897 [('U', 'noupdate', None, _('do not update the new working directory')),
2898 [('U', 'noupdate', None, _('do not update the new working directory')),
2898 ('r', 'rev', [],
2899 ('r', 'rev', [],
2899 _('a changeset you would like to have after cloning')),
2900 _('a changeset you would like to have after cloning')),
2900 ('', 'pull', None, _('use pull protocol to copy metadata')),
2901 ('', 'pull', None, _('use pull protocol to copy metadata')),
2901 ('', 'uncompressed', None,
2902 ('', 'uncompressed', None,
2902 _('use uncompressed transfer (fast over LAN)')),
2903 _('use uncompressed transfer (fast over LAN)')),
2903 ('e', 'ssh', '', _('specify ssh command to use')),
2904 ('e', 'ssh', '', _('specify ssh command to use')),
2904 ('', 'remotecmd', '',
2905 ('', 'remotecmd', '',
2905 _('specify hg command to run on the remote side'))],
2906 _('specify hg command to run on the remote side'))],
2906 _('hg clone [OPTION]... SOURCE [DEST]')),
2907 _('hg clone [OPTION]... SOURCE [DEST]')),
2907 "^commit|ci":
2908 "^commit|ci":
2908 (commit,
2909 (commit,
2909 [('A', 'addremove', None,
2910 [('A', 'addremove', None,
2910 _('mark new/missing files as added/removed before committing')),
2911 _('mark new/missing files as added/removed before committing')),
2911 ('m', 'message', '', _('use <text> as commit message')),
2912 ('m', 'message', '', _('use <text> as commit message')),
2912 ('l', 'logfile', '', _('read the commit message from <file>')),
2913 ('l', 'logfile', '', _('read the commit message from <file>')),
2913 ('d', 'date', '', _('record datecode as commit date')),
2914 ('d', 'date', '', _('record datecode as commit date')),
2914 ('u', 'user', '', _('record user as committer')),
2915 ('u', 'user', '', _('record user as committer')),
2915 ('I', 'include', [], _('include names matching the given patterns')),
2916 ('I', 'include', [], _('include names matching the given patterns')),
2916 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2917 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2917 _('hg commit [OPTION]... [FILE]...')),
2918 _('hg commit [OPTION]... [FILE]...')),
2918 "copy|cp":
2919 "copy|cp":
2919 (copy,
2920 (copy,
2920 [('A', 'after', None, _('record a copy that has already occurred')),
2921 [('A', 'after', None, _('record a copy that has already occurred')),
2921 ('f', 'force', None,
2922 ('f', 'force', None,
2922 _('forcibly copy over an existing managed file')),
2923 _('forcibly copy over an existing managed file')),
2923 ('I', 'include', [], _('include names matching the given patterns')),
2924 ('I', 'include', [], _('include names matching the given patterns')),
2924 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2925 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2925 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2926 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2926 _('hg copy [OPTION]... [SOURCE]... DEST')),
2927 _('hg copy [OPTION]... [SOURCE]... DEST')),
2927 "debugancestor": (debugancestor, [], _('debugancestor INDEX REV1 REV2')),
2928 "debugancestor": (debugancestor, [], _('debugancestor INDEX REV1 REV2')),
2928 "debugcomplete":
2929 "debugcomplete":
2929 (debugcomplete,
2930 (debugcomplete,
2930 [('o', 'options', None, _('show the command options'))],
2931 [('o', 'options', None, _('show the command options'))],
2931 _('debugcomplete [-o] CMD')),
2932 _('debugcomplete [-o] CMD')),
2932 "debugrebuildstate":
2933 "debugrebuildstate":
2933 (debugrebuildstate,
2934 (debugrebuildstate,
2934 [('r', 'rev', '', _('revision to rebuild to'))],
2935 [('r', 'rev', '', _('revision to rebuild to'))],
2935 _('debugrebuildstate [-r REV] [REV]')),
2936 _('debugrebuildstate [-r REV] [REV]')),
2936 "debugcheckstate": (debugcheckstate, [], _('debugcheckstate')),
2937 "debugcheckstate": (debugcheckstate, [], _('debugcheckstate')),
2937 "debugconfig": (debugconfig, [], _('debugconfig [NAME]...')),
2938 "debugconfig": (debugconfig, [], _('debugconfig [NAME]...')),
2938 "debugsetparents": (debugsetparents, [], _('debugsetparents REV1 [REV2]')),
2939 "debugsetparents": (debugsetparents, [], _('debugsetparents REV1 [REV2]')),
2939 "debugstate": (debugstate, [], _('debugstate')),
2940 "debugstate": (debugstate, [], _('debugstate')),
2940 "debugdata": (debugdata, [], _('debugdata FILE REV')),
2941 "debugdata": (debugdata, [], _('debugdata FILE REV')),
2941 "debugindex": (debugindex, [], _('debugindex FILE')),
2942 "debugindex": (debugindex, [], _('debugindex FILE')),
2942 "debugindexdot": (debugindexdot, [], _('debugindexdot FILE')),
2943 "debugindexdot": (debugindexdot, [], _('debugindexdot FILE')),
2943 "debugrename": (debugrename, [], _('debugrename FILE [REV]')),
2944 "debugrename": (debugrename, [], _('debugrename FILE [REV]')),
2944 "debugwalk":
2945 "debugwalk":
2945 (debugwalk,
2946 (debugwalk,
2946 [('I', 'include', [], _('include names matching the given patterns')),
2947 [('I', 'include', [], _('include names matching the given patterns')),
2947 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2948 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2948 _('debugwalk [OPTION]... [FILE]...')),
2949 _('debugwalk [OPTION]... [FILE]...')),
2949 "^diff":
2950 "^diff":
2950 (diff,
2951 (diff,
2951 [('r', 'rev', [], _('revision')),
2952 [('r', 'rev', [], _('revision')),
2952 ('a', 'text', None, _('treat all files as text')),
2953 ('a', 'text', None, _('treat all files as text')),
2953 ('p', 'show-function', None,
2954 ('p', 'show-function', None,
2954 _('show which function each change is in')),
2955 _('show which function each change is in')),
2955 ('w', 'ignore-all-space', None,
2956 ('w', 'ignore-all-space', None,
2956 _('ignore white space when comparing lines')),
2957 _('ignore white space when comparing lines')),
2957 ('b', 'ignore-space-change', None,
2958 ('b', 'ignore-space-change', None,
2958 _('ignore changes in the amount of white space')),
2959 _('ignore changes in the amount of white space')),
2959 ('B', 'ignore-blank-lines', None,
2960 ('B', 'ignore-blank-lines', None,
2960 _('ignore changes whose lines are all blank')),
2961 _('ignore changes whose lines are all blank')),
2961 ('I', 'include', [], _('include names matching the given patterns')),
2962 ('I', 'include', [], _('include names matching the given patterns')),
2962 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2963 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2963 _('hg diff [-a] [-I] [-X] [-r REV1 [-r REV2]] [FILE]...')),
2964 _('hg diff [-a] [-I] [-X] [-r REV1 [-r REV2]] [FILE]...')),
2964 "^export":
2965 "^export":
2965 (export,
2966 (export,
2966 [('o', 'output', '', _('print output to file with formatted name')),
2967 [('o', 'output', '', _('print output to file with formatted name')),
2967 ('a', 'text', None, _('treat all files as text')),
2968 ('a', 'text', None, _('treat all files as text')),
2968 ('', 'switch-parent', None, _('diff against the second parent'))],
2969 ('', 'switch-parent', None, _('diff against the second parent'))],
2969 _('hg export [-a] [-o OUTFILESPEC] REV...')),
2970 _('hg export [-a] [-o OUTFILESPEC] REV...')),
2970 "debugforget|forget":
2971 "debugforget|forget":
2971 (forget,
2972 (forget,
2972 [('I', 'include', [], _('include names matching the given patterns')),
2973 [('I', 'include', [], _('include names matching the given patterns')),
2973 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2974 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2974 _('hg forget [OPTION]... FILE...')),
2975 _('hg forget [OPTION]... FILE...')),
2975 "grep":
2976 "grep":
2976 (grep,
2977 (grep,
2977 [('0', 'print0', None, _('end fields with NUL')),
2978 [('0', 'print0', None, _('end fields with NUL')),
2978 ('', 'all', None, _('print all revisions that match')),
2979 ('', 'all', None, _('print all revisions that match')),
2979 ('i', 'ignore-case', None, _('ignore case when matching')),
2980 ('i', 'ignore-case', None, _('ignore case when matching')),
2980 ('l', 'files-with-matches', None,
2981 ('l', 'files-with-matches', None,
2981 _('print only filenames and revs that match')),
2982 _('print only filenames and revs that match')),
2982 ('n', 'line-number', None, _('print matching line numbers')),
2983 ('n', 'line-number', None, _('print matching line numbers')),
2983 ('r', 'rev', [], _('search in given revision range')),
2984 ('r', 'rev', [], _('search in given revision range')),
2984 ('u', 'user', None, _('print user who committed change')),
2985 ('u', 'user', None, _('print user who committed change')),
2985 ('I', 'include', [], _('include names matching the given patterns')),
2986 ('I', 'include', [], _('include names matching the given patterns')),
2986 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2987 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2987 _('hg grep [OPTION]... PATTERN [FILE]...')),
2988 _('hg grep [OPTION]... PATTERN [FILE]...')),
2988 "heads":
2989 "heads":
2989 (heads,
2990 (heads,
2990 [('b', 'branches', None, _('show branches')),
2991 [('b', 'branches', None, _('show branches')),
2991 ('', 'style', '', _('display using template map file')),
2992 ('', 'style', '', _('display using template map file')),
2992 ('r', 'rev', '', _('show only heads which are descendants of rev')),
2993 ('r', 'rev', '', _('show only heads which are descendants of rev')),
2993 ('', 'template', '', _('display with template'))],
2994 ('', 'template', '', _('display with template'))],
2994 _('hg heads [-b] [-r <rev>]')),
2995 _('hg heads [-b] [-r <rev>]')),
2995 "help": (help_, [], _('hg help [COMMAND]')),
2996 "help": (help_, [], _('hg help [COMMAND]')),
2996 "identify|id": (identify, [], _('hg identify')),
2997 "identify|id": (identify, [], _('hg identify')),
2997 "import|patch":
2998 "import|patch":
2998 (import_,
2999 (import_,
2999 [('p', 'strip', 1,
3000 [('p', 'strip', 1,
3000 _('directory strip option for patch. This has the same\n'
3001 _('directory strip option for patch. This has the same\n'
3001 'meaning as the corresponding patch option')),
3002 'meaning as the corresponding patch option')),
3002 ('m', 'message', '', _('use <text> as commit message')),
3003 ('m', 'message', '', _('use <text> as commit message')),
3003 ('b', 'base', '', _('base path')),
3004 ('b', 'base', '', _('base path')),
3004 ('f', 'force', None,
3005 ('f', 'force', None,
3005 _('skip check for outstanding uncommitted changes'))],
3006 _('skip check for outstanding uncommitted changes'))],
3006 _('hg import [-p NUM] [-b BASE] [-m MESSAGE] [-f] PATCH...')),
3007 _('hg import [-p NUM] [-b BASE] [-m MESSAGE] [-f] PATCH...')),
3007 "incoming|in": (incoming,
3008 "incoming|in": (incoming,
3008 [('M', 'no-merges', None, _('do not show merges')),
3009 [('M', 'no-merges', None, _('do not show merges')),
3009 ('f', 'force', None,
3010 ('f', 'force', None,
3010 _('run even when remote repository is unrelated')),
3011 _('run even when remote repository is unrelated')),
3011 ('', 'style', '', _('display using template map file')),
3012 ('', 'style', '', _('display using template map file')),
3012 ('n', 'newest-first', None, _('show newest record first')),
3013 ('n', 'newest-first', None, _('show newest record first')),
3013 ('', 'bundle', '', _('file to store the bundles into')),
3014 ('', 'bundle', '', _('file to store the bundles into')),
3014 ('p', 'patch', None, _('show patch')),
3015 ('p', 'patch', None, _('show patch')),
3015 ('r', 'rev', [], _('a specific revision you would like to pull')),
3016 ('r', 'rev', [], _('a specific revision you would like to pull')),
3016 ('', 'template', '', _('display with template')),
3017 ('', 'template', '', _('display with template')),
3017 ('e', 'ssh', '', _('specify ssh command to use')),
3018 ('e', 'ssh', '', _('specify ssh command to use')),
3018 ('', 'remotecmd', '',
3019 ('', 'remotecmd', '',
3019 _('specify hg command to run on the remote side'))],
3020 _('specify hg command to run on the remote side'))],
3020 _('hg incoming [-p] [-n] [-M] [-r REV]...'
3021 _('hg incoming [-p] [-n] [-M] [-r REV]...'
3021 ' [--bundle FILENAME] [SOURCE]')),
3022 ' [--bundle FILENAME] [SOURCE]')),
3022 "^init":
3023 "^init":
3023 (init,
3024 (init,
3024 [('e', 'ssh', '', _('specify ssh command to use')),
3025 [('e', 'ssh', '', _('specify ssh command to use')),
3025 ('', 'remotecmd', '',
3026 ('', 'remotecmd', '',
3026 _('specify hg command to run on the remote side'))],
3027 _('specify hg command to run on the remote side'))],
3027 _('hg init [-e FILE] [--remotecmd FILE] [DEST]')),
3028 _('hg init [-e FILE] [--remotecmd FILE] [DEST]')),
3028 "locate":
3029 "locate":
3029 (locate,
3030 (locate,
3030 [('r', 'rev', '', _('search the repository as it stood at rev')),
3031 [('r', 'rev', '', _('search the repository as it stood at rev')),
3031 ('0', 'print0', None,
3032 ('0', 'print0', None,
3032 _('end filenames with NUL, for use with xargs')),
3033 _('end filenames with NUL, for use with xargs')),
3033 ('f', 'fullpath', None,
3034 ('f', 'fullpath', None,
3034 _('print complete paths from the filesystem root')),
3035 _('print complete paths from the filesystem root')),
3035 ('I', 'include', [], _('include names matching the given patterns')),
3036 ('I', 'include', [], _('include names matching the given patterns')),
3036 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3037 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3037 _('hg locate [OPTION]... [PATTERN]...')),
3038 _('hg locate [OPTION]... [PATTERN]...')),
3038 "^log|history":
3039 "^log|history":
3039 (log,
3040 (log,
3040 [('b', 'branches', None, _('show branches')),
3041 [('b', 'branches', None, _('show branches')),
3041 ('k', 'keyword', [], _('search for a keyword')),
3042 ('k', 'keyword', [], _('search for a keyword')),
3042 ('l', 'limit', '', _('limit number of changes displayed')),
3043 ('l', 'limit', '', _('limit number of changes displayed')),
3043 ('r', 'rev', [], _('show the specified revision or range')),
3044 ('r', 'rev', [], _('show the specified revision or range')),
3044 ('M', 'no-merges', None, _('do not show merges')),
3045 ('M', 'no-merges', None, _('do not show merges')),
3045 ('', 'style', '', _('display using template map file')),
3046 ('', 'style', '', _('display using template map file')),
3046 ('m', 'only-merges', None, _('show only merges')),
3047 ('m', 'only-merges', None, _('show only merges')),
3047 ('p', 'patch', None, _('show patch')),
3048 ('p', 'patch', None, _('show patch')),
3048 ('', 'template', '', _('display with template')),
3049 ('', 'template', '', _('display with template')),
3049 ('I', 'include', [], _('include names matching the given patterns')),
3050 ('I', 'include', [], _('include names matching the given patterns')),
3050 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3051 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3051 _('hg log [OPTION]... [FILE]')),
3052 _('hg log [OPTION]... [FILE]')),
3052 "manifest": (manifest, [], _('hg manifest [REV]')),
3053 "manifest": (manifest, [], _('hg manifest [REV]')),
3053 "merge":
3054 "merge":
3054 (merge,
3055 (merge,
3055 [('b', 'branch', '', _('merge with head of a specific branch')),
3056 [('b', 'branch', '', _('merge with head of a specific branch')),
3056 ('f', 'force', None, _('force a merge with outstanding changes'))],
3057 ('f', 'force', None, _('force a merge with outstanding changes'))],
3057 _('hg merge [-b TAG] [-f] [REV]')),
3058 _('hg merge [-b TAG] [-f] [REV]')),
3058 "outgoing|out": (outgoing,
3059 "outgoing|out": (outgoing,
3059 [('M', 'no-merges', None, _('do not show merges')),
3060 [('M', 'no-merges', None, _('do not show merges')),
3060 ('f', 'force', None,
3061 ('f', 'force', None,
3061 _('run even when remote repository is unrelated')),
3062 _('run even when remote repository is unrelated')),
3062 ('p', 'patch', None, _('show patch')),
3063 ('p', 'patch', None, _('show patch')),
3063 ('', 'style', '', _('display using template map file')),
3064 ('', 'style', '', _('display using template map file')),
3064 ('r', 'rev', [], _('a specific revision you would like to push')),
3065 ('r', 'rev', [], _('a specific revision you would like to push')),
3065 ('n', 'newest-first', None, _('show newest record first')),
3066 ('n', 'newest-first', None, _('show newest record first')),
3066 ('', 'template', '', _('display with template')),
3067 ('', 'template', '', _('display with template')),
3067 ('e', 'ssh', '', _('specify ssh command to use')),
3068 ('e', 'ssh', '', _('specify ssh command to use')),
3068 ('', 'remotecmd', '',
3069 ('', 'remotecmd', '',
3069 _('specify hg command to run on the remote side'))],
3070 _('specify hg command to run on the remote side'))],
3070 _('hg outgoing [-M] [-p] [-n] [-r REV]... [DEST]')),
3071 _('hg outgoing [-M] [-p] [-n] [-r REV]... [DEST]')),
3071 "^parents":
3072 "^parents":
3072 (parents,
3073 (parents,
3073 [('b', 'branches', None, _('show branches')),
3074 [('b', 'branches', None, _('show branches')),
3074 ('r', 'rev', '', _('show parents from the specified rev')),
3075 ('r', 'rev', '', _('show parents from the specified rev')),
3075 ('', 'style', '', _('display using template map file')),
3076 ('', 'style', '', _('display using template map file')),
3076 ('', 'template', '', _('display with template'))],
3077 ('', 'template', '', _('display with template'))],
3077 _('hg parents [-b] [-r REV] [FILE]')),
3078 _('hg parents [-b] [-r REV] [FILE]')),
3078 "paths": (paths, [], _('hg paths [NAME]')),
3079 "paths": (paths, [], _('hg paths [NAME]')),
3079 "^pull":
3080 "^pull":
3080 (pull,
3081 (pull,
3081 [('u', 'update', None,
3082 [('u', 'update', None,
3082 _('update the working directory to tip after pull')),
3083 _('update the working directory to tip after pull')),
3083 ('e', 'ssh', '', _('specify ssh command to use')),
3084 ('e', 'ssh', '', _('specify ssh command to use')),
3084 ('f', 'force', None,
3085 ('f', 'force', None,
3085 _('run even when remote repository is unrelated')),
3086 _('run even when remote repository is unrelated')),
3086 ('r', 'rev', [], _('a specific revision you would like to pull')),
3087 ('r', 'rev', [], _('a specific revision you would like to pull')),
3087 ('', 'remotecmd', '',
3088 ('', 'remotecmd', '',
3088 _('specify hg command to run on the remote side'))],
3089 _('specify hg command to run on the remote side'))],
3089 _('hg pull [-u] [-r REV]... [-e FILE] [--remotecmd FILE] [SOURCE]')),
3090 _('hg pull [-u] [-r REV]... [-e FILE] [--remotecmd FILE] [SOURCE]')),
3090 "^push":
3091 "^push":
3091 (push,
3092 (push,
3092 [('f', 'force', None, _('force push')),
3093 [('f', 'force', None, _('force push')),
3093 ('e', 'ssh', '', _('specify ssh command to use')),
3094 ('e', 'ssh', '', _('specify ssh command to use')),
3094 ('r', 'rev', [], _('a specific revision you would like to push')),
3095 ('r', 'rev', [], _('a specific revision you would like to push')),
3095 ('', 'remotecmd', '',
3096 ('', 'remotecmd', '',
3096 _('specify hg command to run on the remote side'))],
3097 _('specify hg command to run on the remote side'))],
3097 _('hg push [-f] [-r REV]... [-e FILE] [--remotecmd FILE] [DEST]')),
3098 _('hg push [-f] [-r REV]... [-e FILE] [--remotecmd FILE] [DEST]')),
3098 "debugrawcommit|rawcommit":
3099 "debugrawcommit|rawcommit":
3099 (rawcommit,
3100 (rawcommit,
3100 [('p', 'parent', [], _('parent')),
3101 [('p', 'parent', [], _('parent')),
3101 ('d', 'date', '', _('date code')),
3102 ('d', 'date', '', _('date code')),
3102 ('u', 'user', '', _('user')),
3103 ('u', 'user', '', _('user')),
3103 ('F', 'files', '', _('file list')),
3104 ('F', 'files', '', _('file list')),
3104 ('m', 'message', '', _('commit message')),
3105 ('m', 'message', '', _('commit message')),
3105 ('l', 'logfile', '', _('commit message file'))],
3106 ('l', 'logfile', '', _('commit message file'))],
3106 _('hg debugrawcommit [OPTION]... [FILE]...')),
3107 _('hg debugrawcommit [OPTION]... [FILE]...')),
3107 "recover": (recover, [], _('hg recover')),
3108 "recover": (recover, [], _('hg recover')),
3108 "^remove|rm":
3109 "^remove|rm":
3109 (remove,
3110 (remove,
3110 [('A', 'after', None, _('record remove that has already occurred')),
3111 [('A', 'after', None, _('record remove that has already occurred')),
3111 ('f', 'force', None, _('remove file even if modified')),
3112 ('f', 'force', None, _('remove file even if modified')),
3112 ('I', 'include', [], _('include names matching the given patterns')),
3113 ('I', 'include', [], _('include names matching the given patterns')),
3113 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3114 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3114 _('hg remove [OPTION]... FILE...')),
3115 _('hg remove [OPTION]... FILE...')),
3115 "rename|mv":
3116 "rename|mv":
3116 (rename,
3117 (rename,
3117 [('A', 'after', None, _('record a rename that has already occurred')),
3118 [('A', 'after', None, _('record a rename that has already occurred')),
3118 ('f', 'force', None,
3119 ('f', 'force', None,
3119 _('forcibly copy over an existing managed file')),
3120 _('forcibly copy over an existing managed file')),
3120 ('I', 'include', [], _('include names matching the given patterns')),
3121 ('I', 'include', [], _('include names matching the given patterns')),
3121 ('X', 'exclude', [], _('exclude names matching the given patterns')),
3122 ('X', 'exclude', [], _('exclude names matching the given patterns')),
3122 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3123 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3123 _('hg rename [OPTION]... SOURCE... DEST')),
3124 _('hg rename [OPTION]... SOURCE... DEST')),
3124 "^revert":
3125 "^revert":
3125 (revert,
3126 (revert,
3126 [('r', 'rev', '', _('revision to revert to')),
3127 [('r', 'rev', '', _('revision to revert to')),
3127 ('', 'no-backup', None, _('do not save backup copies of files')),
3128 ('', 'no-backup', None, _('do not save backup copies of files')),
3128 ('I', 'include', [], _('include names matching given patterns')),
3129 ('I', 'include', [], _('include names matching given patterns')),
3129 ('X', 'exclude', [], _('exclude names matching given patterns')),
3130 ('X', 'exclude', [], _('exclude names matching given patterns')),
3130 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3131 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3131 _('hg revert [-r REV] [NAME]...')),
3132 _('hg revert [-r REV] [NAME]...')),
3132 "rollback": (rollback, [], _('hg rollback')),
3133 "rollback": (rollback, [], _('hg rollback')),
3133 "root": (root, [], _('hg root')),
3134 "root": (root, [], _('hg root')),
3134 "^serve":
3135 "^serve":
3135 (serve,
3136 (serve,
3136 [('A', 'accesslog', '', _('name of access log file to write to')),
3137 [('A', 'accesslog', '', _('name of access log file to write to')),
3137 ('d', 'daemon', None, _('run server in background')),
3138 ('d', 'daemon', None, _('run server in background')),
3138 ('', 'daemon-pipefds', '', _('used internally by daemon mode')),
3139 ('', 'daemon-pipefds', '', _('used internally by daemon mode')),
3139 ('E', 'errorlog', '', _('name of error log file to write to')),
3140 ('E', 'errorlog', '', _('name of error log file to write to')),
3140 ('p', 'port', 0, _('port to use (default: 8000)')),
3141 ('p', 'port', 0, _('port to use (default: 8000)')),
3141 ('a', 'address', '', _('address to use')),
3142 ('a', 'address', '', _('address to use')),
3142 ('n', 'name', '',
3143 ('n', 'name', '',
3143 _('name to show in web pages (default: working dir)')),
3144 _('name to show in web pages (default: working dir)')),
3144 ('', 'webdir-conf', '', _('name of the webdir config file'
3145 ('', 'webdir-conf', '', _('name of the webdir config file'
3145 ' (serve more than one repo)')),
3146 ' (serve more than one repo)')),
3146 ('', 'pid-file', '', _('name of file to write process ID to')),
3147 ('', 'pid-file', '', _('name of file to write process ID to')),
3147 ('', 'stdio', None, _('for remote clients')),
3148 ('', 'stdio', None, _('for remote clients')),
3148 ('t', 'templates', '', _('web templates to use')),
3149 ('t', 'templates', '', _('web templates to use')),
3149 ('', 'style', '', _('template style to use')),
3150 ('', 'style', '', _('template style to use')),
3150 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4'))],
3151 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4'))],
3151 _('hg serve [OPTION]...')),
3152 _('hg serve [OPTION]...')),
3152 "^status|st":
3153 "^status|st":
3153 (status,
3154 (status,
3154 [('A', 'all', None, _('show status of all files')),
3155 [('A', 'all', None, _('show status of all files')),
3155 ('m', 'modified', None, _('show only modified files')),
3156 ('m', 'modified', None, _('show only modified files')),
3156 ('a', 'added', None, _('show only added files')),
3157 ('a', 'added', None, _('show only added files')),
3157 ('r', 'removed', None, _('show only removed files')),
3158 ('r', 'removed', None, _('show only removed files')),
3158 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
3159 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
3159 ('c', 'clean', None, _('show only files without changes')),
3160 ('c', 'clean', None, _('show only files without changes')),
3160 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
3161 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
3161 ('i', 'ignored', None, _('show ignored files')),
3162 ('i', 'ignored', None, _('show ignored files')),
3162 ('n', 'no-status', None, _('hide status prefix')),
3163 ('n', 'no-status', None, _('hide status prefix')),
3163 ('C', 'copies', None, _('show source of copied files')),
3164 ('C', 'copies', None, _('show source of copied files')),
3164 ('0', 'print0', None,
3165 ('0', 'print0', None,
3165 _('end filenames with NUL, for use with xargs')),
3166 _('end filenames with NUL, for use with xargs')),
3166 ('I', 'include', [], _('include names matching the given patterns')),
3167 ('I', 'include', [], _('include names matching the given patterns')),
3167 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3168 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3168 _('hg status [OPTION]... [FILE]...')),
3169 _('hg status [OPTION]... [FILE]...')),
3169 "tag":
3170 "tag":
3170 (tag,
3171 (tag,
3171 [('l', 'local', None, _('make the tag local')),
3172 [('l', 'local', None, _('make the tag local')),
3172 ('m', 'message', '', _('message for tag commit log entry')),
3173 ('m', 'message', '', _('message for tag commit log entry')),
3173 ('d', 'date', '', _('record datecode as commit date')),
3174 ('d', 'date', '', _('record datecode as commit date')),
3174 ('u', 'user', '', _('record user as committer')),
3175 ('u', 'user', '', _('record user as committer')),
3175 ('r', 'rev', '', _('revision to tag'))],
3176 ('r', 'rev', '', _('revision to tag'))],
3176 _('hg tag [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME')),
3177 _('hg tag [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME')),
3177 "tags": (tags, [], _('hg tags')),
3178 "tags": (tags, [], _('hg tags')),
3178 "tip":
3179 "tip":
3179 (tip,
3180 (tip,
3180 [('b', 'branches', None, _('show branches')),
3181 [('b', 'branches', None, _('show branches')),
3181 ('', 'style', '', _('display using template map file')),
3182 ('', 'style', '', _('display using template map file')),
3182 ('p', 'patch', None, _('show patch')),
3183 ('p', 'patch', None, _('show patch')),
3183 ('', 'template', '', _('display with template'))],
3184 ('', 'template', '', _('display with template'))],
3184 _('hg tip [-b] [-p]')),
3185 _('hg tip [-b] [-p]')),
3185 "unbundle":
3186 "unbundle":
3186 (unbundle,
3187 (unbundle,
3187 [('u', 'update', None,
3188 [('u', 'update', None,
3188 _('update the working directory to tip after unbundle'))],
3189 _('update the working directory to tip after unbundle'))],
3189 _('hg unbundle [-u] FILE')),
3190 _('hg unbundle [-u] FILE')),
3190 "debugundo|undo": (undo, [], _('hg undo')),
3191 "debugundo|undo": (undo, [], _('hg undo')),
3191 "^update|up|checkout|co":
3192 "^update|up|checkout|co":
3192 (update,
3193 (update,
3193 [('b', 'branch', '', _('checkout the head of a specific branch')),
3194 [('b', 'branch', '', _('checkout the head of a specific branch')),
3194 ('m', 'merge', None, _('allow merging of branches (DEPRECATED)')),
3195 ('m', 'merge', None, _('allow merging of branches (DEPRECATED)')),
3195 ('C', 'clean', None, _('overwrite locally modified files')),
3196 ('C', 'clean', None, _('overwrite locally modified files')),
3196 ('f', 'force', None, _('force a merge with outstanding changes'))],
3197 ('f', 'force', None, _('force a merge with outstanding changes'))],
3197 _('hg update [-b TAG] [-m] [-C] [-f] [REV]')),
3198 _('hg update [-b TAG] [-m] [-C] [-f] [REV]')),
3198 "verify": (verify, [], _('hg verify')),
3199 "verify": (verify, [], _('hg verify')),
3199 "version": (show_version, [], _('hg version')),
3200 "version": (show_version, [], _('hg version')),
3200 }
3201 }
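# For reference, each entry above maps "name|alias|..." (a leading "^"
# apparently flags the command for the short help listing; findpossible()
# strips it below) to a tuple of (function, option list, usage synopsis),
# where every option is (short flag, long flag, default value, help text).
# A minimal, purely hypothetical entry would look like:
#
#   "hello": (hello,
#             [('g', 'greeting', '', _('text to print'))],
#             _('hg hello [-g TEXT]')),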
3201
3202
3202 globalopts = [
3203 globalopts = [
3203 ('R', 'repository', '',
3204 ('R', 'repository', '',
3204 _('repository root directory or symbolic path name')),
3205 _('repository root directory or symbolic path name')),
3205 ('', 'cwd', '', _('change working directory')),
3206 ('', 'cwd', '', _('change working directory')),
3206 ('y', 'noninteractive', None,
3207 ('y', 'noninteractive', None,
3207 _('do not prompt, assume \'yes\' for any required answers')),
3208 _('do not prompt, assume \'yes\' for any required answers')),
3208 ('q', 'quiet', None, _('suppress output')),
3209 ('q', 'quiet', None, _('suppress output')),
3209 ('v', 'verbose', None, _('enable additional output')),
3210 ('v', 'verbose', None, _('enable additional output')),
3210 ('', 'config', [], _('set/override config option')),
3211 ('', 'config', [], _('set/override config option')),
3211 ('', 'debug', None, _('enable debugging output')),
3212 ('', 'debug', None, _('enable debugging output')),
3212 ('', 'debugger', None, _('start debugger')),
3213 ('', 'debugger', None, _('start debugger')),
3213 ('', 'lsprof', None, _('print improved command execution profile')),
3214 ('', 'lsprof', None, _('print improved command execution profile')),
3214 ('', 'traceback', None, _('print traceback on exception')),
3215 ('', 'traceback', None, _('print traceback on exception')),
3215 ('', 'time', None, _('time how long the command takes')),
3216 ('', 'time', None, _('time how long the command takes')),
3216 ('', 'profile', None, _('print command execution profile')),
3217 ('', 'profile', None, _('print command execution profile')),
3217 ('', 'version', None, _('output version information and exit')),
3218 ('', 'version', None, _('output version information and exit')),
3218 ('h', 'help', None, _('display help and exit')),
3219 ('h', 'help', None, _('display help and exit')),
3219 ]
3220 ]
3220
3221
3221 norepo = ("clone init version help debugancestor debugcomplete debugdata"
3222 norepo = ("clone init version help debugancestor debugcomplete debugdata"
3222 " debugindex debugindexdot")
3223 " debugindex debugindexdot")
3223 optionalrepo = ("paths serve debugconfig")
3224 optionalrepo = ("paths serve debugconfig")
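# These whitespace-separated strings are presumably consulted by the part of
# the dispatcher not shown in this excerpt: commands listed in norepo run
# without opening a repository, while those in optionalrepo accept one if it
# is available.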
3224
3225
3225 def findpossible(cmd):
3226 def findpossible(cmd):
3226 """
3227 """
3227 Return cmd -> (aliases, command table entry)
3228 Return cmd -> (aliases, command table entry)
3228 for each matching command.
3229 for each matching command.
3229 Return debug commands (or their aliases) only if no normal command matches.
3230 Return debug commands (or their aliases) only if no normal command matches.
3230 """
3231 """
3231 choice = {}
3232 choice = {}
3232 debugchoice = {}
3233 debugchoice = {}
3233 for e in table.keys():
3234 for e in table.keys():
3234 aliases = e.lstrip("^").split("|")
3235 aliases = e.lstrip("^").split("|")
3235 found = None
3236 found = None
3236 if cmd in aliases:
3237 if cmd in aliases:
3237 found = cmd
3238 found = cmd
3238 else:
3239 else:
3239 for a in aliases:
3240 for a in aliases:
3240 if a.startswith(cmd):
3241 if a.startswith(cmd):
3241 found = a
3242 found = a
3242 break
3243 break
3243 if found is not None:
3244 if found is not None:
3244 if aliases[0].startswith("debug"):
3245 if aliases[0].startswith("debug"):
3245 debugchoice[found] = (aliases, table[e])
3246 debugchoice[found] = (aliases, table[e])
3246 else:
3247 else:
3247 choice[found] = (aliases, table[e])
3248 choice[found] = (aliases, table[e])
3248
3249
3249 if not choice and debugchoice:
3250 if not choice and debugchoice:
3250 choice = debugchoice
3251 choice = debugchoice
3251
3252
3252 return choice
3253 return choice
3253
3254
3254 def findcmd(cmd):
3255 def findcmd(cmd):
3255 """Return (aliases, command table entry) for command string."""
3256 """Return (aliases, command table entry) for command string."""
3256 choice = findpossible(cmd)
3257 choice = findpossible(cmd)
3257
3258
3258 if choice.has_key(cmd):
3259 if choice.has_key(cmd):
3259 return choice[cmd]
3260 return choice[cmd]
3260
3261
3261 if len(choice) > 1:
3262 if len(choice) > 1:
3262 clist = choice.keys()
3263 clist = choice.keys()
3263 clist.sort()
3264 clist.sort()
3264 raise AmbiguousCommand(cmd, clist)
3265 raise AmbiguousCommand(cmd, clist)
3265
3266
3266 if choice:
3267 if choice:
3267 return choice.values()[0]
3268 return choice.values()[0]
3268
3269
3269 raise UnknownCommand(cmd)
3270 raise UnknownCommand(cmd)
3270
3271
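# Examples of the prefix matching above: "sta" resolves to status because only
# one alias starts with it; "he" raises AmbiguousCommand since both heads and
# help match; and "debugd" reaches debugdata only because no regular command
# shares that prefix, so the debug table is used as the fallback.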
3271 def catchterm(*args):
3272 def catchterm(*args):
3272 raise util.SignalInterrupt
3273 raise util.SignalInterrupt
3273
3274
3274 def run():
3275 def run():
3275 sys.exit(dispatch(sys.argv[1:]))
3276 sys.exit(dispatch(sys.argv[1:]))
3276
3277
3277 class ParseError(Exception):
3278 class ParseError(Exception):
3278 """Exception raised on errors in parsing the command line."""
3279 """Exception raised on errors in parsing the command line."""
3279
3280
3280 def parse(ui, args):
3281 def parse(ui, args):
3281 options = {}
3282 options = {}
3282 cmdoptions = {}
3283 cmdoptions = {}
3283
3284
3284 try:
3285 try:
3285 args = fancyopts.fancyopts(args, globalopts, options)
3286 args = fancyopts.fancyopts(args, globalopts, options)
3286 except fancyopts.getopt.GetoptError, inst:
3287 except fancyopts.getopt.GetoptError, inst:
3287 raise ParseError(None, inst)
3288 raise ParseError(None, inst)
3288
3289
3289 if args:
3290 if args:
3290 cmd, args = args[0], args[1:]
3291 cmd, args = args[0], args[1:]
3291 aliases, i = findcmd(cmd)
3292 aliases, i = findcmd(cmd)
3292 cmd = aliases[0]
3293 cmd = aliases[0]
3293 defaults = ui.config("defaults", cmd)
3294 defaults = ui.config("defaults", cmd)
3294 if defaults:
3295 if defaults:
3295 args = defaults.split() + args
3296 args = defaults.split() + args
3296 c = list(i[1])
3297 c = list(i[1])
3297 else:
3298 else:
3298 cmd = None
3299 cmd = None
3299 c = []
3300 c = []
3300
3301
3301 # combine global options into local
3302 # combine global options into local
3302 for o in globalopts:
3303 for o in globalopts:
3303 c.append((o[0], o[1], options[o[1]], o[3]))
3304 c.append((o[0], o[1], options[o[1]], o[3]))
3304
3305
3305 try:
3306 try:
3306 args = fancyopts.fancyopts(args, c, cmdoptions)
3307 args = fancyopts.fancyopts(args, c, cmdoptions)
3307 except fancyopts.getopt.GetoptError, inst:
3308 except fancyopts.getopt.GetoptError, inst:
3308 raise ParseError(cmd, inst)
3309 raise ParseError(cmd, inst)
3309
3310
3310 # separate global options back out
3311 # separate global options back out
3311 for o in globalopts:
3312 for o in globalopts:
3312 n = o[1]
3313 n = o[1]
3313 options[n] = cmdoptions[n]
3314 options[n] = cmdoptions[n]
3314 del cmdoptions[n]
3315 del cmdoptions[n]
3315
3316
3316 return (cmd, cmd and i[0] or None, args, options, cmdoptions)
3317 return (cmd, cmd and i[0] or None, args, options, cmdoptions)
3317
3318
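# The ui.config("defaults", cmd) lookup above lets an hgrc prepend arguments
# to a command before option parsing, e.g. (illustrative):
#
#   [defaults]
#   log = -v --limit 5
#
# which makes every "hg log" run as if "-v --limit 5" had been typed first.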
3318 external = {}
3319 external = {}
3319
3320
3320 def findext(name):
3321 def findext(name):
3321 '''return module with given extension name'''
3322 '''return module with given extension name'''
3322 try:
3323 try:
3323 return sys.modules[external[name]]
3324 return sys.modules[external[name]]
3324 except KeyError:
3325 except KeyError:
3325 dotname = '.' + name
3326 dotname = '.' + name
3326 for k, v in external.iteritems():
3327 for k, v in external.iteritems():
3327 if k.endswith('.' + name) or v == name:
3328 if k.endswith('.' + name) or v == name:
3328 return sys.modules[v]
3329 return sys.modules[v]
3329 raise KeyError(name)
3330 raise KeyError(name)
3330
3331
3331 def dispatch(args):
3332 def dispatch(args):
3332 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
3333 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
3333 num = getattr(signal, name, None)
3334 num = getattr(signal, name, None)
3334 if num: signal.signal(num, catchterm)
3335 if num: signal.signal(num, catchterm)
3335
3336
3336 try:
3337 try:
3337 u = ui.ui(traceback='--traceback' in sys.argv[1:])
3338 u = ui.ui(traceback='--traceback' in sys.argv[1:])
3338 except util.Abort, inst:
3339 except util.Abort, inst:
3339 sys.stderr.write(_("abort: %s\n") % inst)
3340 sys.stderr.write(_("abort: %s\n") % inst)
3340 return -1
3341 return -1
3341
3342
3342 for ext_name, load_from_name in u.extensions():
3343 for ext_name, load_from_name in u.extensions():
3343 try:
3344 try:
3344 if load_from_name:
3345 if load_from_name:
3345 # the module will be loaded in sys.modules
3346 # the module will be loaded in sys.modules
3346 # choose a unique name so that it doesn't
3347 # choose a unique name so that it doesn't
3347 # conflict with other modules
3348 # conflict with other modules
3348 module_name = "hgext_%s" % ext_name.replace('.', '_')
3349 module_name = "hgext_%s" % ext_name.replace('.', '_')
3349 mod = imp.load_source(module_name, load_from_name)
3350 mod = imp.load_source(module_name, load_from_name)
3350 else:
3351 else:
3351 def importh(name):
3352 def importh(name):
3352 mod = __import__(name)
3353 mod = __import__(name)
3353 components = name.split('.')
3354 components = name.split('.')
3354 for comp in components[1:]:
3355 for comp in components[1:]:
3355 mod = getattr(mod, comp)
3356 mod = getattr(mod, comp)
3356 return mod
3357 return mod
3357 try:
3358 try:
3358 mod = importh("hgext.%s" % ext_name)
3359 mod = importh("hgext.%s" % ext_name)
3359 except ImportError:
3360 except ImportError:
3360 mod = importh(ext_name)
3361 mod = importh(ext_name)
3361 external[ext_name] = mod.__name__
3362 external[ext_name] = mod.__name__
3362 except (util.SignalInterrupt, KeyboardInterrupt):
3363 except (util.SignalInterrupt, KeyboardInterrupt):
3363 raise
3364 raise
3364 except Exception, inst:
3365 except Exception, inst:
3365 u.warn(_("*** failed to import extension %s: %s\n") % (ext_name, inst))
3366 u.warn(_("*** failed to import extension %s: %s\n") % (ext_name, inst))
3366 if u.print_exc():
3367 if u.print_exc():
3367 return 1
3368 return 1
3368
3369
3369 for name in external.itervalues():
3370 for name in external.itervalues():
3370 mod = sys.modules[name]
3371 mod = sys.modules[name]
3371 uisetup = getattr(mod, 'uisetup', None)
3372 uisetup = getattr(mod, 'uisetup', None)
3372 if uisetup:
3373 if uisetup:
3373 uisetup(u)
3374 uisetup(u)
3374 cmdtable = getattr(mod, 'cmdtable', {})
3375 cmdtable = getattr(mod, 'cmdtable', {})
3375 for t in cmdtable:
3376 for t in cmdtable:
3376 if t in table:
3377 if t in table:
3377 u.warn(_("module %s overrides %s\n") % (name, t))
3378 u.warn(_("module %s overrides %s\n") % (name, t))
3378 table.update(cmdtable)
3379 table.update(cmdtable)
3379
3380
3380 try:
3381 try:
3381 cmd, func, args, options, cmdoptions = parse(u, args)
3382 cmd, func, args, options, cmdoptions = parse(u, args)
3382 if options["time"]:
3383 if options["time"]:
3383 def get_times():
3384 def get_times():
3384 t = os.times()
3385 t = os.times()
3385 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
3386 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
3386 t = (t[0], t[1], t[2], t[3], time.clock())
3387 t = (t[0], t[1], t[2], t[3], time.clock())
3387 return t
3388 return t
3388 s = get_times()
3389 s = get_times()
3389 def print_time():
3390 def print_time():
3390 t = get_times()
3391 t = get_times()
3391 u.warn(_("Time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
3392 u.warn(_("Time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
3392 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
3393 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
3393 atexit.register(print_time)
3394 atexit.register(print_time)
3394
3395
3395 u.updateopts(options["verbose"], options["debug"], options["quiet"],
3396 u.updateopts(options["verbose"], options["debug"], options["quiet"],
3396 not options["noninteractive"], options["traceback"],
3397 not options["noninteractive"], options["traceback"],
3397 options["config"])
3398 options["config"])
3398
3399
3399 # enter the debugger before command execution
3400 # enter the debugger before command execution
3400 if options['debugger']:
3401 if options['debugger']:
3401 pdb.set_trace()
3402 pdb.set_trace()
3402
3403
3403 try:
3404 try:
3404 if options['cwd']:
3405 if options['cwd']:
3405 try:
3406 try:
3406 os.chdir(options['cwd'])
3407 os.chdir(options['cwd'])
3407 except OSError, inst:
3408 except OSError, inst:
3408 raise util.Abort('%s: %s' %
3409 raise util.Abort('%s: %s' %
3409 (options['cwd'], inst.strerror))
3410 (options['cwd'], inst.strerror))
3410
3411
3411 path = u.expandpath(options["repository"]) or ""
3412 path = u.expandpath(options["repository"]) or ""
3412 repo = path and hg.repository(u, path=path) or None
3413 repo = path and hg.repository(u, path=path) or None
3413
3414
3414 if options['help']:
3415 if options['help']:
3415 return help_(u, cmd, options['version'])
3416 return help_(u, cmd, options['version'])
3416 elif options['version']:
3417 elif options['version']:
3417 return show_version(u)
3418 return show_version(u)
3418 elif not cmd:
3419 elif not cmd:
3419 return help_(u, 'shortlist')
3420 return help_(u, 'shortlist')
3420
3421
3421 if cmd not in norepo.split():
3422 if cmd not in norepo.split():
3422 try:
3423 try:
3423 if not repo:
3424 if not repo:
3424 repo = hg.repository(u, path=path)
3425 repo = hg.repository(u, path=path)
3425 u = repo.ui
3426 u = repo.ui
3426 for name in external.itervalues():
3427 for name in external.itervalues():
3427 mod = sys.modules[name]
3428 mod = sys.modules[name]
3428 if hasattr(mod, 'reposetup'):
3429 if hasattr(mod, 'reposetup'):
3429 mod.reposetup(u, repo)
3430 mod.reposetup(u, repo)
3430 except hg.RepoError:
3431 except hg.RepoError:
3431 if cmd not in optionalrepo.split():
3432 if cmd not in optionalrepo.split():
3432 raise
3433 raise
3433 d = lambda: func(u, repo, *args, **cmdoptions)
3434 d = lambda: func(u, repo, *args, **cmdoptions)
3434 else:
3435 else:
3435 d = lambda: func(u, *args, **cmdoptions)
3436 d = lambda: func(u, *args, **cmdoptions)
3436
3437
3437 try:
3438 try:
3438 if options['profile']:
3439 if options['profile']:
3439 import hotshot, hotshot.stats
3440 import hotshot, hotshot.stats
3440 prof = hotshot.Profile("hg.prof")
3441 prof = hotshot.Profile("hg.prof")
3441 try:
3442 try:
3442 try:
3443 try:
3443 return prof.runcall(d)
3444 return prof.runcall(d)
3444 except:
3445 except:
3445 try:
3446 try:
3446 u.warn(_('exception raised - generating '
3447 u.warn(_('exception raised - generating '
3447 'profile anyway\n'))
3448 'profile anyway\n'))
3448 except:
3449 except:
3449 pass
3450 pass
3450 raise
3451 raise
3451 finally:
3452 finally:
3452 prof.close()
3453 prof.close()
3453 stats = hotshot.stats.load("hg.prof")
3454 stats = hotshot.stats.load("hg.prof")
3454 stats.strip_dirs()
3455 stats.strip_dirs()
3455 stats.sort_stats('time', 'calls')
3456 stats.sort_stats('time', 'calls')
3456 stats.print_stats(40)
3457 stats.print_stats(40)
3457 elif options['lsprof']:
3458 elif options['lsprof']:
3458 try:
3459 try:
3459 from mercurial import lsprof
3460 from mercurial import lsprof
3460 except ImportError:
3461 except ImportError:
3461 raise util.Abort(_(
3462 raise util.Abort(_(
3462 'lsprof not available - install from '
3463 'lsprof not available - install from '
3463 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
3464 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
3464 p = lsprof.Profiler()
3465 p = lsprof.Profiler()
3465 p.enable(subcalls=True)
3466 p.enable(subcalls=True)
3466 try:
3467 try:
3467 return d()
3468 return d()
3468 finally:
3469 finally:
3469 p.disable()
3470 p.disable()
3470 stats = lsprof.Stats(p.getstats())
3471 stats = lsprof.Stats(p.getstats())
3471 stats.sort()
3472 stats.sort()
3472 stats.pprint(top=10, file=sys.stderr, climit=5)
3473 stats.pprint(top=10, file=sys.stderr, climit=5)
3473 else:
3474 else:
3474 return d()
3475 return d()
3475 finally:
3476 finally:
3476 u.flush()
3477 u.flush()
3477 except:
3478 except:
3478 # enter the debugger when we hit an exception
3479 # enter the debugger when we hit an exception
3479 if options['debugger']:
3480 if options['debugger']:
3480 pdb.post_mortem(sys.exc_info()[2])
3481 pdb.post_mortem(sys.exc_info()[2])
3481 u.print_exc()
3482 u.print_exc()
3482 raise
3483 raise
3483 except ParseError, inst:
3484 except ParseError, inst:
3484 if inst.args[0]:
3485 if inst.args[0]:
3485 u.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
3486 u.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
3486 help_(u, inst.args[0])
3487 help_(u, inst.args[0])
3487 else:
3488 else:
3488 u.warn(_("hg: %s\n") % inst.args[1])
3489 u.warn(_("hg: %s\n") % inst.args[1])
3489 help_(u, 'shortlist')
3490 help_(u, 'shortlist')
3490 except AmbiguousCommand, inst:
3491 except AmbiguousCommand, inst:
3491 u.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
3492 u.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
3492 (inst.args[0], " ".join(inst.args[1])))
3493 (inst.args[0], " ".join(inst.args[1])))
3493 except UnknownCommand, inst:
3494 except UnknownCommand, inst:
3494 u.warn(_("hg: unknown command '%s'\n") % inst.args[0])
3495 u.warn(_("hg: unknown command '%s'\n") % inst.args[0])
3495 help_(u, 'shortlist')
3496 help_(u, 'shortlist')
3496 except hg.RepoError, inst:
3497 except hg.RepoError, inst:
3497 u.warn(_("abort: %s!\n") % inst)
3498 u.warn(_("abort: %s!\n") % inst)
3498 except lock.LockHeld, inst:
3499 except lock.LockHeld, inst:
3499 if inst.errno == errno.ETIMEDOUT:
3500 if inst.errno == errno.ETIMEDOUT:
3500 reason = _('timed out waiting for lock held by %s') % inst.locker
3501 reason = _('timed out waiting for lock held by %s') % inst.locker
3501 else:
3502 else:
3502 reason = _('lock held by %s') % inst.locker
3503 reason = _('lock held by %s') % inst.locker
3503 u.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
3504 u.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
3504 except lock.LockUnavailable, inst:
3505 except lock.LockUnavailable, inst:
3505 u.warn(_("abort: could not lock %s: %s\n") %
3506 u.warn(_("abort: could not lock %s: %s\n") %
3506 (inst.desc or inst.filename, inst.strerror))
3507 (inst.desc or inst.filename, inst.strerror))
3507 except revlog.RevlogError, inst:
3508 except revlog.RevlogError, inst:
3508 u.warn(_("abort: "), inst, "!\n")
3509 u.warn(_("abort: "), inst, "!\n")
3509 except util.SignalInterrupt:
3510 except util.SignalInterrupt:
3510 u.warn(_("killed!\n"))
3511 u.warn(_("killed!\n"))
3511 except KeyboardInterrupt:
3512 except KeyboardInterrupt:
3512 try:
3513 try:
3513 u.warn(_("interrupted!\n"))
3514 u.warn(_("interrupted!\n"))
3514 except IOError, inst:
3515 except IOError, inst:
3515 if inst.errno == errno.EPIPE:
3516 if inst.errno == errno.EPIPE:
3516 if u.debugflag:
3517 if u.debugflag:
3517 u.warn(_("\nbroken pipe\n"))
3518 u.warn(_("\nbroken pipe\n"))
3518 else:
3519 else:
3519 raise
3520 raise
3520 except IOError, inst:
3521 except IOError, inst:
3521 if hasattr(inst, "code"):
3522 if hasattr(inst, "code"):
3522 u.warn(_("abort: %s\n") % inst)
3523 u.warn(_("abort: %s\n") % inst)
3523 elif hasattr(inst, "reason"):
3524 elif hasattr(inst, "reason"):
3524 u.warn(_("abort: error: %s\n") % inst.reason[1])
3525 u.warn(_("abort: error: %s\n") % inst.reason[1])
3525 elif hasattr(inst, "args") and inst[0] == errno.EPIPE:
3526 elif hasattr(inst, "args") and inst[0] == errno.EPIPE:
3526 if u.debugflag:
3527 if u.debugflag:
3527 u.warn(_("broken pipe\n"))
3528 u.warn(_("broken pipe\n"))
3528 elif getattr(inst, "strerror", None):
3529 elif getattr(inst, "strerror", None):
3529 if getattr(inst, "filename", None):
3530 if getattr(inst, "filename", None):
3530 u.warn(_("abort: %s - %s\n") % (inst.strerror, inst.filename))
3531 u.warn(_("abort: %s - %s\n") % (inst.strerror, inst.filename))
3531 else:
3532 else:
3532 u.warn(_("abort: %s\n") % inst.strerror)
3533 u.warn(_("abort: %s\n") % inst.strerror)
3533 else:
3534 else:
3534 raise
3535 raise
3535 except OSError, inst:
3536 except OSError, inst:
3536 if hasattr(inst, "filename"):
3537 if hasattr(inst, "filename"):
3537 u.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
3538 u.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
3538 else:
3539 else:
3539 u.warn(_("abort: %s\n") % inst.strerror)
3540 u.warn(_("abort: %s\n") % inst.strerror)
3540 except util.Abort, inst:
3541 except util.Abort, inst:
3541 u.warn(_('abort: '), inst.args[0] % inst.args[1:], '\n')
3542 u.warn(_('abort: '), inst.args[0] % inst.args[1:], '\n')
3542 except TypeError, inst:
3543 except TypeError, inst:
3543 # was this an argument error?
3544 # was this an argument error?
3544 tb = traceback.extract_tb(sys.exc_info()[2])
3545 tb = traceback.extract_tb(sys.exc_info()[2])
3545 if len(tb) > 2: # no
3546 if len(tb) > 2: # no
3546 raise
3547 raise
3547 u.debug(inst, "\n")
3548 u.debug(inst, "\n")
3548 u.warn(_("%s: invalid arguments\n") % cmd)
3549 u.warn(_("%s: invalid arguments\n") % cmd)
3549 help_(u, cmd)
3550 help_(u, cmd)
3550 except SystemExit, inst:
3551 except SystemExit, inst:
3551 # Commands shouldn't sys.exit directly, but give a return code.
3552 # Commands shouldn't sys.exit directly, but give a return code.
3552 # Just in case, catch this and pass the exit code to the caller.
3553 # Just in case, catch this and pass the exit code to the caller.
3553 return inst.code
3554 return inst.code
3554 except:
3555 except:
3555 u.warn(_("** unknown exception encountered, details follow\n"))
3556 u.warn(_("** unknown exception encountered, details follow\n"))
3556 u.warn(_("** report bug details to "
3557 u.warn(_("** report bug details to "
3557 "http://www.selenic.com/mercurial/bts\n"))
3558 "http://www.selenic.com/mercurial/bts\n"))
3558 u.warn(_("** or mercurial@selenic.com\n"))
3559 u.warn(_("** or mercurial@selenic.com\n"))
3559 u.warn(_("** Mercurial Distributed SCM (version %s)\n")
3560 u.warn(_("** Mercurial Distributed SCM (version %s)\n")
3560 % version.get_version())
3561 % version.get_version())
3561 raise
3562 raise
3562
3563
3563 return -1
3564 return -1
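
The extension loading in dispatch() above tries the bundled hgext package first and only then imports the module under its plain name. A minimal stand-alone sketch of that fallback, assuming a hypothetical extension called demo_ext (not part of this changeset):

def import_extension(name):
    # mirrors importh() above: __import__ returns the top-level package,
    # so walk down the dotted path to reach the requested submodule
    def importh(modname):
        mod = __import__(modname)
        for comp in modname.split('.')[1:]:
            mod = getattr(mod, comp)
        return mod
    try:
        # preferred location: the hgext package shipped with Mercurial
        return importh("hgext.%s" % name)
    except ImportError:
        # fall back to any module reachable on sys.path under its own name
        return importh(name)

On an installation that ships the mq extension, import_extension("mq") would return the hgext.mq module; import_extension("demo_ext") would raise ImportError unless such a module actually exists on the path.
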
@@ -1,969 +1,975 b''
1 # hgweb/hgweb_mod.py - Web interface for a repository.
1 # hgweb/hgweb_mod.py - Web interface for a repository.
2 #
2 #
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 #
5 #
6 # This software may be used and distributed according to the terms
6 # This software may be used and distributed according to the terms
7 # of the GNU General Public License, incorporated herein by reference.
7 # of the GNU General Public License, incorporated herein by reference.
8
8
9 import os
9 import os
10 import os.path
10 import os.path
11 import mimetypes
11 import mimetypes
12 from mercurial.demandload import demandload
12 from mercurial.demandload import demandload
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
15 demandload(globals(), "mercurial:templater")
15 demandload(globals(), "mercurial:templater")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
17 from mercurial.node import *
17 from mercurial.node import *
18 from mercurial.i18n import gettext as _
18 from mercurial.i18n import gettext as _
19
19
20 def _up(p):
20 def _up(p):
21 if p[0] != "/":
21 if p[0] != "/":
22 p = "/" + p
22 p = "/" + p
23 if p[-1] == "/":
23 if p[-1] == "/":
24 p = p[:-1]
24 p = p[:-1]
25 up = os.path.dirname(p)
25 up = os.path.dirname(p)
26 if up == "/":
26 if up == "/":
27 return "/"
27 return "/"
28 return up + "/"
28 return up + "/"
29
29
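
_up() above turns a file or directory path into the URL fragment of its parent directory, always with a leading and trailing slash. A quick illustration (a hypothetical interactive session, not part of the changeset):

>>> _up("foo/bar/baz")
'/foo/bar/'
>>> _up("/foo/bar/")
'/foo/'
>>> _up("/foo")
'/'
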
30 class hgweb(object):
30 class hgweb(object):
31 def __init__(self, repo, name=None):
31 def __init__(self, repo, name=None):
32 if type(repo) == type(""):
32 if type(repo) == type(""):
33 self.repo = hg.repository(ui.ui(), repo)
33 self.repo = hg.repository(ui.ui(), repo)
34 else:
34 else:
35 self.repo = repo
35 self.repo = repo
36
36
37 self.mtime = -1
37 self.mtime = -1
38 self.reponame = name
38 self.reponame = name
39 self.archives = 'zip', 'gz', 'bz2'
39 self.archives = 'zip', 'gz', 'bz2'
40 self.stripecount = 1
40 self.stripecount = 1
41 self.templatepath = self.repo.ui.config("web", "templates",
41 self.templatepath = self.repo.ui.config("web", "templates",
42 templater.templatepath())
42 templater.templatepath())
43
43
44 def refresh(self):
44 def refresh(self):
45 mtime = get_mtime(self.repo.root)
45 mtime = get_mtime(self.repo.root)
46 if mtime != self.mtime:
46 if mtime != self.mtime:
47 self.mtime = mtime
47 self.mtime = mtime
48 self.repo = hg.repository(self.repo.ui, self.repo.root)
48 self.repo = hg.repository(self.repo.ui, self.repo.root)
49 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
49 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
50 self.stripecount = int(self.repo.ui.config("web", "stripes", 1))
50 self.stripecount = int(self.repo.ui.config("web", "stripes", 1))
51 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
51 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
52 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
52 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
53
53
54 def archivelist(self, nodeid):
54 def archivelist(self, nodeid):
55 allowed = self.repo.ui.configlist("web", "allow_archive")
55 allowed = self.repo.ui.configlist("web", "allow_archive")
56 for i in self.archives:
56 for i in self.archives:
57 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
57 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
58 yield {"type" : i, "node" : nodeid, "url": ""}
58 yield {"type" : i, "node" : nodeid, "url": ""}
59
59
60 def listfiles(self, files, mf):
60 def listfiles(self, files, mf):
61 for f in files[:self.maxfiles]:
61 for f in files[:self.maxfiles]:
62 yield self.t("filenodelink", node=hex(mf[f]), file=f)
62 yield self.t("filenodelink", node=hex(mf[f]), file=f)
63 if len(files) > self.maxfiles:
63 if len(files) > self.maxfiles:
64 yield self.t("fileellipses")
64 yield self.t("fileellipses")
65
65
66 def listfilediffs(self, files, changeset):
66 def listfilediffs(self, files, changeset):
67 for f in files[:self.maxfiles]:
67 for f in files[:self.maxfiles]:
68 yield self.t("filedifflink", node=hex(changeset), file=f)
68 yield self.t("filedifflink", node=hex(changeset), file=f)
69 if len(files) > self.maxfiles:
69 if len(files) > self.maxfiles:
70 yield self.t("fileellipses")
70 yield self.t("fileellipses")
71
71
72 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
72 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
73 if not rev:
73 if not rev:
74 rev = lambda x: ""
74 rev = lambda x: ""
75 siblings = [s for s in siblings if s != nullid]
75 siblings = [s for s in siblings if s != nullid]
76 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
76 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
77 return
77 return
78 for s in siblings:
78 for s in siblings:
79 yield dict(node=hex(s), rev=rev(s), **args)
79 yield dict(node=hex(s), rev=rev(s), **args)
80
80
81 def renamelink(self, fl, node):
81 def renamelink(self, fl, node):
82 r = fl.renamed(node)
82 r = fl.renamed(node)
83 if r:
83 if r:
84 return [dict(file=r[0], node=hex(r[1]))]
84 return [dict(file=r[0], node=hex(r[1]))]
85 return []
85 return []
86
86
87 def showtag(self, t1, node=nullid, **args):
87 def showtag(self, t1, node=nullid, **args):
88 for t in self.repo.nodetags(node):
88 for t in self.repo.nodetags(node):
89 yield self.t(t1, tag=t, **args)
89 yield self.t(t1, tag=t, **args)
90
90
91 def diff(self, node1, node2, files):
91 def diff(self, node1, node2, files):
92 def filterfiles(filters, files):
92 def filterfiles(filters, files):
93 l = [x for x in files if x in filters]
93 l = [x for x in files if x in filters]
94
94
95 for t in filters:
95 for t in filters:
96 if t and t[-1] != os.sep:
96 if t and t[-1] != os.sep:
97 t += os.sep
97 t += os.sep
98 l += [x for x in files if x.startswith(t)]
98 l += [x for x in files if x.startswith(t)]
99 return l
99 return l
100
100
101 parity = [0]
101 parity = [0]
102 def diffblock(diff, f, fn):
102 def diffblock(diff, f, fn):
103 yield self.t("diffblock",
103 yield self.t("diffblock",
104 lines=prettyprintlines(diff),
104 lines=prettyprintlines(diff),
105 parity=parity[0],
105 parity=parity[0],
106 file=f,
106 file=f,
107 filenode=hex(fn or nullid))
107 filenode=hex(fn or nullid))
108 parity[0] = 1 - parity[0]
108 parity[0] = 1 - parity[0]
109
109
110 def prettyprintlines(diff):
110 def prettyprintlines(diff):
111 for l in diff.splitlines(1):
111 for l in diff.splitlines(1):
112 if l.startswith('+'):
112 if l.startswith('+'):
113 yield self.t("difflineplus", line=l)
113 yield self.t("difflineplus", line=l)
114 elif l.startswith('-'):
114 elif l.startswith('-'):
115 yield self.t("difflineminus", line=l)
115 yield self.t("difflineminus", line=l)
116 elif l.startswith('@'):
116 elif l.startswith('@'):
117 yield self.t("difflineat", line=l)
117 yield self.t("difflineat", line=l)
118 else:
118 else:
119 yield self.t("diffline", line=l)
119 yield self.t("diffline", line=l)
120
120
121 r = self.repo
121 r = self.repo
122 cl = r.changelog
122 cl = r.changelog
123 mf = r.manifest
123 mf = r.manifest
124 change1 = cl.read(node1)
124 change1 = cl.read(node1)
125 change2 = cl.read(node2)
125 change2 = cl.read(node2)
126 mmap1 = mf.read(change1[0])
126 mmap1 = mf.read(change1[0])
127 mmap2 = mf.read(change2[0])
127 mmap2 = mf.read(change2[0])
128 date1 = util.datestr(change1[2])
128 date1 = util.datestr(change1[2])
129 date2 = util.datestr(change2[2])
129 date2 = util.datestr(change2[2])
130
130
131 modified, added, removed, deleted, unknown = r.changes(node1, node2)
131 modified, added, removed, deleted, unknown = r.changes(node1, node2)
132 if files:
132 if files:
133 modified, added, removed = map(lambda x: filterfiles(files, x),
133 modified, added, removed = map(lambda x: filterfiles(files, x),
134 (modified, added, removed))
134 (modified, added, removed))
135
135
136 diffopts = self.repo.ui.diffopts()
136 diffopts = self.repo.ui.diffopts()
137 showfunc = diffopts['showfunc']
137 showfunc = diffopts['showfunc']
138 ignorews = diffopts['ignorews']
138 ignorews = diffopts['ignorews']
139 ignorewsamount = diffopts['ignorewsamount']
139 ignorewsamount = diffopts['ignorewsamount']
140 ignoreblanklines = diffopts['ignoreblanklines']
140 ignoreblanklines = diffopts['ignoreblanklines']
141 for f in modified:
141 for f in modified:
142 to = r.file(f).read(mmap1[f])
142 to = r.file(f).read(mmap1[f])
143 tn = r.file(f).read(mmap2[f])
143 tn = r.file(f).read(mmap2[f])
144 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
144 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
145 showfunc=showfunc, ignorews=ignorews,
145 showfunc=showfunc, ignorews=ignorews,
146 ignorewsamount=ignorewsamount,
146 ignorewsamount=ignorewsamount,
147 ignoreblanklines=ignoreblanklines), f, tn)
147 ignoreblanklines=ignoreblanklines), f, tn)
148 for f in added:
148 for f in added:
149 to = None
149 to = None
150 tn = r.file(f).read(mmap2[f])
150 tn = r.file(f).read(mmap2[f])
151 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
151 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
152 showfunc=showfunc, ignorews=ignorews,
152 showfunc=showfunc, ignorews=ignorews,
153 ignorewsamount=ignorewsamount,
153 ignorewsamount=ignorewsamount,
154 ignoreblanklines=ignoreblanklines), f, tn)
154 ignoreblanklines=ignoreblanklines), f, tn)
155 for f in removed:
155 for f in removed:
156 to = r.file(f).read(mmap1[f])
156 to = r.file(f).read(mmap1[f])
157 tn = None
157 tn = None
158 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
158 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
159 showfunc=showfunc, ignorews=ignorews,
159 showfunc=showfunc, ignorews=ignorews,
160 ignorewsamount=ignorewsamount,
160 ignorewsamount=ignorewsamount,
161 ignoreblanklines=ignoreblanklines), f, tn)
161 ignoreblanklines=ignoreblanklines), f, tn)
162
162
163 def changelog(self, pos):
163 def changelog(self, pos):
164 def changenav(**map):
164 def changenav(**map):
165 def seq(factor, maxchanges=None):
165 def seq(factor, maxchanges=None):
166 if maxchanges:
166 if maxchanges:
167 yield maxchanges
167 yield maxchanges
168 if maxchanges >= 20 and maxchanges <= 40:
168 if maxchanges >= 20 and maxchanges <= 40:
169 yield 50
169 yield 50
170 else:
170 else:
171 yield 1 * factor
171 yield 1 * factor
172 yield 3 * factor
172 yield 3 * factor
173 for f in seq(factor * 10):
173 for f in seq(factor * 10):
174 yield f
174 yield f
175
175
176 l = []
176 l = []
177 last = 0
177 last = 0
178 for f in seq(1, self.maxchanges):
178 for f in seq(1, self.maxchanges):
179 if f < self.maxchanges or f <= last:
179 if f < self.maxchanges or f <= last:
180 continue
180 continue
181 if f > count:
181 if f > count:
182 break
182 break
183 last = f
183 last = f
184 r = "%d" % f
184 r = "%d" % f
185 if pos + f < count:
185 if pos + f < count:
186 l.append(("+" + r, pos + f))
186 l.append(("+" + r, pos + f))
187 if pos - f >= 0:
187 if pos - f >= 0:
188 l.insert(0, ("-" + r, pos - f))
188 l.insert(0, ("-" + r, pos - f))
189
189
190 yield {"rev": 0, "label": "(0)"}
190 yield {"rev": 0, "label": "(0)"}
191
191
192 for label, rev in l:
192 for label, rev in l:
193 yield {"label": label, "rev": rev}
193 yield {"label": label, "rev": rev}
194
194
195 yield {"label": "tip", "rev": "tip"}
195 yield {"label": "tip", "rev": "tip"}
196
196
197 def changelist(**map):
197 def changelist(**map):
198 parity = (start - end) & 1
198 parity = (start - end) & 1
199 cl = self.repo.changelog
199 cl = self.repo.changelog
200 l = [] # build a list in forward order for efficiency
200 l = [] # build a list in forward order for efficiency
201 for i in range(start, end):
201 for i in range(start, end):
202 n = cl.node(i)
202 n = cl.node(i)
203 changes = cl.read(n)
203 changes = cl.read(n)
204 hn = hex(n)
204 hn = hex(n)
205
205
206 l.insert(0, {"parity": parity,
206 l.insert(0, {"parity": parity,
207 "author": changes[1],
207 "author": changes[1],
208 "parent": self.siblings(cl.parents(n), cl.rev,
208 "parent": self.siblings(cl.parents(n), cl.rev,
209 cl.rev(n) - 1),
209 cl.rev(n) - 1),
210 "child": self.siblings(cl.children(n), cl.rev,
210 "child": self.siblings(cl.children(n), cl.rev,
211 cl.rev(n) + 1),
211 cl.rev(n) + 1),
212 "changelogtag": self.showtag("changelogtag",n),
212 "changelogtag": self.showtag("changelogtag",n),
213 "manifest": hex(changes[0]),
213 "manifest": hex(changes[0]),
214 "desc": changes[4],
214 "desc": changes[4],
215 "date": changes[2],
215 "date": changes[2],
216 "files": self.listfilediffs(changes[3], n),
216 "files": self.listfilediffs(changes[3], n),
217 "rev": i,
217 "rev": i,
218 "node": hn})
218 "node": hn})
219 parity = 1 - parity
219 parity = 1 - parity
220
220
221 for e in l:
221 for e in l:
222 yield e
222 yield e
223
223
224 cl = self.repo.changelog
224 cl = self.repo.changelog
225 mf = cl.read(cl.tip())[0]
225 mf = cl.read(cl.tip())[0]
226 count = cl.count()
226 count = cl.count()
227 start = max(0, pos - self.maxchanges + 1)
227 start = max(0, pos - self.maxchanges + 1)
228 end = min(count, start + self.maxchanges)
228 end = min(count, start + self.maxchanges)
229 pos = end - 1
229 pos = end - 1
230
230
231 yield self.t('changelog',
231 yield self.t('changelog',
232 changenav=changenav,
232 changenav=changenav,
233 manifest=hex(mf),
233 manifest=hex(mf),
234 rev=pos, changesets=count, entries=changelist,
234 rev=pos, changesets=count, entries=changelist,
235 archives=self.archivelist("tip"))
235 archives=self.archivelist("tip"))
236
236
237 def search(self, query):
237 def search(self, query):
238
238
239 def changelist(**map):
239 def changelist(**map):
240 cl = self.repo.changelog
240 cl = self.repo.changelog
241 count = 0
241 count = 0
242 qw = query.lower().split()
242 qw = query.lower().split()
243
243
244 def revgen():
244 def revgen():
245 for i in range(cl.count() - 1, 0, -100):
245 for i in range(cl.count() - 1, 0, -100):
246 l = []
246 l = []
247 for j in range(max(0, i - 100), i):
247 for j in range(max(0, i - 100), i):
248 n = cl.node(j)
248 n = cl.node(j)
249 changes = cl.read(n)
249 changes = cl.read(n)
250 l.append((n, j, changes))
250 l.append((n, j, changes))
251 l.reverse()
251 l.reverse()
252 for e in l:
252 for e in l:
253 yield e
253 yield e
254
254
255 for n, i, changes in revgen():
255 for n, i, changes in revgen():
256 miss = 0
256 miss = 0
257 for q in qw:
257 for q in qw:
258 if not (q in changes[1].lower() or
258 if not (q in changes[1].lower() or
259 q in changes[4].lower() or
259 q in changes[4].lower() or
260 q in " ".join(changes[3][:20]).lower()):
260 q in " ".join(changes[3][:20]).lower()):
261 miss = 1
261 miss = 1
262 break
262 break
263 if miss:
263 if miss:
264 continue
264 continue
265
265
266 count += 1
266 count += 1
267 hn = hex(n)
267 hn = hex(n)
268
268
269 yield self.t('searchentry',
269 yield self.t('searchentry',
270 parity=self.stripes(count),
270 parity=self.stripes(count),
271 author=changes[1],
271 author=changes[1],
272 parent=self.siblings(cl.parents(n), cl.rev),
272 parent=self.siblings(cl.parents(n), cl.rev),
273 child=self.siblings(cl.children(n), cl.rev),
273 child=self.siblings(cl.children(n), cl.rev),
274 changelogtag=self.showtag("changelogtag",n),
274 changelogtag=self.showtag("changelogtag",n),
275 manifest=hex(changes[0]),
275 manifest=hex(changes[0]),
276 desc=changes[4],
276 desc=changes[4],
277 date=changes[2],
277 date=changes[2],
278 files=self.listfilediffs(changes[3], n),
278 files=self.listfilediffs(changes[3], n),
279 rev=i,
279 rev=i,
280 node=hn)
280 node=hn)
281
281
282 if count >= self.maxchanges:
282 if count >= self.maxchanges:
283 break
283 break
284
284
285 cl = self.repo.changelog
285 cl = self.repo.changelog
286 mf = cl.read(cl.tip())[0]
286 mf = cl.read(cl.tip())[0]
287
287
288 yield self.t('search',
288 yield self.t('search',
289 query=query,
289 query=query,
290 manifest=hex(mf),
290 manifest=hex(mf),
291 entries=changelist)
291 entries=changelist)
292
292
293 def changeset(self, nodeid):
293 def changeset(self, nodeid):
294 cl = self.repo.changelog
294 cl = self.repo.changelog
295 n = self.repo.lookup(nodeid)
295 n = self.repo.lookup(nodeid)
296 nodeid = hex(n)
296 nodeid = hex(n)
297 changes = cl.read(n)
297 changes = cl.read(n)
298 p1 = cl.parents(n)[0]
298 p1 = cl.parents(n)[0]
299
299
300 files = []
300 files = []
301 mf = self.repo.manifest.read(changes[0])
301 mf = self.repo.manifest.read(changes[0])
302 for f in changes[3]:
302 for f in changes[3]:
303 files.append(self.t("filenodelink",
303 files.append(self.t("filenodelink",
304 filenode=hex(mf.get(f, nullid)), file=f))
304 filenode=hex(mf.get(f, nullid)), file=f))
305
305
306 def diff(**map):
306 def diff(**map):
307 yield self.diff(p1, n, None)
307 yield self.diff(p1, n, None)
308
308
309 yield self.t('changeset',
309 yield self.t('changeset',
310 diff=diff,
310 diff=diff,
311 rev=cl.rev(n),
311 rev=cl.rev(n),
312 node=nodeid,
312 node=nodeid,
313 parent=self.siblings(cl.parents(n), cl.rev),
313 parent=self.siblings(cl.parents(n), cl.rev),
314 child=self.siblings(cl.children(n), cl.rev),
314 child=self.siblings(cl.children(n), cl.rev),
315 changesettag=self.showtag("changesettag",n),
315 changesettag=self.showtag("changesettag",n),
316 manifest=hex(changes[0]),
316 manifest=hex(changes[0]),
317 author=changes[1],
317 author=changes[1],
318 desc=changes[4],
318 desc=changes[4],
319 date=changes[2],
319 date=changes[2],
320 files=files,
320 files=files,
321 archives=self.archivelist(nodeid))
321 archives=self.archivelist(nodeid))
322
322
323 def filelog(self, f, filenode):
323 def filelog(self, f, filenode):
324 cl = self.repo.changelog
324 cl = self.repo.changelog
325 fl = self.repo.file(f)
325 fl = self.repo.file(f)
326 filenode = hex(fl.lookup(filenode))
326 filenode = hex(fl.lookup(filenode))
327 count = fl.count()
327 count = fl.count()
328
328
329 def entries(**map):
329 def entries(**map):
330 l = []
330 l = []
331 parity = (count - 1) & 1
331 parity = (count - 1) & 1
332
332
333 for i in range(count):
333 for i in range(count):
334 n = fl.node(i)
334 n = fl.node(i)
335 lr = fl.linkrev(n)
335 lr = fl.linkrev(n)
336 cn = cl.node(lr)
336 cn = cl.node(lr)
337 cs = cl.read(cl.node(lr))
337 cs = cl.read(cl.node(lr))
338
338
339 l.insert(0, {"parity": parity,
339 l.insert(0, {"parity": parity,
340 "filenode": hex(n),
340 "filenode": hex(n),
341 "filerev": i,
341 "filerev": i,
342 "file": f,
342 "file": f,
343 "node": hex(cn),
343 "node": hex(cn),
344 "author": cs[1],
344 "author": cs[1],
345 "date": cs[2],
345 "date": cs[2],
346 "rename": self.renamelink(fl, n),
346 "rename": self.renamelink(fl, n),
347 "parent": self.siblings(fl.parents(n),
347 "parent": self.siblings(fl.parents(n),
348 fl.rev, file=f),
348 fl.rev, file=f),
349 "child": self.siblings(fl.children(n),
349 "child": self.siblings(fl.children(n),
350 fl.rev, file=f),
350 fl.rev, file=f),
351 "desc": cs[4]})
351 "desc": cs[4]})
352 parity = 1 - parity
352 parity = 1 - parity
353
353
354 for e in l:
354 for e in l:
355 yield e
355 yield e
356
356
357 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
357 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
358
358
359 def filerevision(self, f, node):
359 def filerevision(self, f, node):
360 fl = self.repo.file(f)
360 fl = self.repo.file(f)
361 n = fl.lookup(node)
361 n = fl.lookup(node)
362 node = hex(n)
362 node = hex(n)
363 text = fl.read(n)
363 text = fl.read(n)
364 changerev = fl.linkrev(n)
364 changerev = fl.linkrev(n)
365 cl = self.repo.changelog
365 cl = self.repo.changelog
366 cn = cl.node(changerev)
366 cn = cl.node(changerev)
367 cs = cl.read(cn)
367 cs = cl.read(cn)
368 mfn = cs[0]
368 mfn = cs[0]
369
369
370 mt = mimetypes.guess_type(f)[0]
370 mt = mimetypes.guess_type(f)[0]
371 rawtext = text
371 rawtext = text
372 if util.binary(text):
372 if util.binary(text):
373 mt = mt or 'application/octet-stream'
373 mt = mt or 'application/octet-stream'
374 text = "(binary:%s)" % mt
374 text = "(binary:%s)" % mt
375 mt = mt or 'text/plain'
375 mt = mt or 'text/plain'
376
376
377 def lines():
377 def lines():
378 for l, t in enumerate(text.splitlines(1)):
378 for l, t in enumerate(text.splitlines(1)):
379 yield {"line": t,
379 yield {"line": t,
380 "linenumber": "% 6d" % (l + 1),
380 "linenumber": "% 6d" % (l + 1),
381 "parity": self.stripes(l)}
381 "parity": self.stripes(l)}
382
382
383 yield self.t("filerevision",
383 yield self.t("filerevision",
384 file=f,
384 file=f,
385 filenode=node,
385 filenode=node,
386 path=_up(f),
386 path=_up(f),
387 text=lines(),
387 text=lines(),
388 raw=rawtext,
388 raw=rawtext,
389 mimetype=mt,
389 mimetype=mt,
390 rev=changerev,
390 rev=changerev,
391 node=hex(cn),
391 node=hex(cn),
392 manifest=hex(mfn),
392 manifest=hex(mfn),
393 author=cs[1],
393 author=cs[1],
394 date=cs[2],
394 date=cs[2],
395 parent=self.siblings(fl.parents(n), fl.rev, file=f),
395 parent=self.siblings(fl.parents(n), fl.rev, file=f),
396 child=self.siblings(fl.children(n), fl.rev, file=f),
396 child=self.siblings(fl.children(n), fl.rev, file=f),
397 rename=self.renamelink(fl, n),
397 rename=self.renamelink(fl, n),
398 permissions=self.repo.manifest.readflags(mfn)[f])
398 permissions=self.repo.manifest.readflags(mfn)[f])
399
399
400 def fileannotate(self, f, node):
400 def fileannotate(self, f, node):
401 bcache = {}
401 bcache = {}
402 ncache = {}
402 ncache = {}
403 fl = self.repo.file(f)
403 fl = self.repo.file(f)
404 n = fl.lookup(node)
404 n = fl.lookup(node)
405 node = hex(n)
405 node = hex(n)
406 changerev = fl.linkrev(n)
406 changerev = fl.linkrev(n)
407
407
408 cl = self.repo.changelog
408 cl = self.repo.changelog
409 cn = cl.node(changerev)
409 cn = cl.node(changerev)
410 cs = cl.read(cn)
410 cs = cl.read(cn)
411 mfn = cs[0]
411 mfn = cs[0]
412
412
413 def annotate(**map):
413 def annotate(**map):
414 parity = 0
414 parity = 0
415 last = None
415 last = None
416 for r, l in fl.annotate(n):
416 for r, l in fl.annotate(n):
417 try:
417 try:
418 cnode = ncache[r]
418 cnode = ncache[r]
419 except KeyError:
419 except KeyError:
420 cnode = ncache[r] = self.repo.changelog.node(r)
420 cnode = ncache[r] = self.repo.changelog.node(r)
421
421
422 try:
422 try:
423 name = bcache[r]
423 name = bcache[r]
424 except KeyError:
424 except KeyError:
425 cl = self.repo.changelog.read(cnode)
425 cl = self.repo.changelog.read(cnode)
426 bcache[r] = name = self.repo.ui.shortuser(cl[1])
426 bcache[r] = name = self.repo.ui.shortuser(cl[1])
427
427
428 if last != cnode:
428 if last != cnode:
429 parity = 1 - parity
429 parity = 1 - parity
430 last = cnode
430 last = cnode
431
431
432 yield {"parity": parity,
432 yield {"parity": parity,
433 "node": hex(cnode),
433 "node": hex(cnode),
434 "rev": r,
434 "rev": r,
435 "author": name,
435 "author": name,
436 "file": f,
436 "file": f,
437 "line": l}
437 "line": l}
438
438
439 yield self.t("fileannotate",
439 yield self.t("fileannotate",
440 file=f,
440 file=f,
441 filenode=node,
441 filenode=node,
442 annotate=annotate,
442 annotate=annotate,
443 path=_up(f),
443 path=_up(f),
444 rev=changerev,
444 rev=changerev,
445 node=hex(cn),
445 node=hex(cn),
446 manifest=hex(mfn),
446 manifest=hex(mfn),
447 author=cs[1],
447 author=cs[1],
448 date=cs[2],
448 date=cs[2],
449 rename=self.renamelink(fl, n),
449 rename=self.renamelink(fl, n),
450 parent=self.siblings(fl.parents(n), fl.rev, file=f),
450 parent=self.siblings(fl.parents(n), fl.rev, file=f),
451 child=self.siblings(fl.children(n), fl.rev, file=f),
451 child=self.siblings(fl.children(n), fl.rev, file=f),
452 permissions=self.repo.manifest.readflags(mfn)[f])
452 permissions=self.repo.manifest.readflags(mfn)[f])
453
453
454 def manifest(self, mnode, path):
454 def manifest(self, mnode, path):
455 man = self.repo.manifest
455 man = self.repo.manifest
456 mn = man.lookup(mnode)
456 mn = man.lookup(mnode)
457 mnode = hex(mn)
457 mnode = hex(mn)
458 mf = man.read(mn)
458 mf = man.read(mn)
459 rev = man.rev(mn)
459 rev = man.rev(mn)
460 changerev = man.linkrev(mn)
460 changerev = man.linkrev(mn)
461 node = self.repo.changelog.node(changerev)
461 node = self.repo.changelog.node(changerev)
462 mff = man.readflags(mn)
462 mff = man.readflags(mn)
463
463
464 files = {}
464 files = {}
465
465
466 p = path[1:]
466 p = path[1:]
467 if p and p[-1] != "/":
467 if p and p[-1] != "/":
468 p += "/"
468 p += "/"
469 l = len(p)
469 l = len(p)
470
470
471 for f,n in mf.items():
471 for f,n in mf.items():
472 if f[:l] != p:
472 if f[:l] != p:
473 continue
473 continue
474 remain = f[l:]
474 remain = f[l:]
475 if "/" in remain:
475 if "/" in remain:
476 short = remain[:remain.index("/") + 1] # bleah
476 short = remain[:remain.index("/") + 1] # bleah
477 files[short] = (f, None)
477 files[short] = (f, None)
478 else:
478 else:
479 short = os.path.basename(remain)
479 short = os.path.basename(remain)
480 files[short] = (f, n)
480 files[short] = (f, n)
481
481
482 def filelist(**map):
482 def filelist(**map):
483 parity = 0
483 parity = 0
484 fl = files.keys()
484 fl = files.keys()
485 fl.sort()
485 fl.sort()
486 for f in fl:
486 for f in fl:
487 full, fnode = files[f]
487 full, fnode = files[f]
488 if not fnode:
488 if not fnode:
489 continue
489 continue
490
490
491 yield {"file": full,
491 yield {"file": full,
492 "manifest": mnode,
492 "manifest": mnode,
493 "filenode": hex(fnode),
493 "filenode": hex(fnode),
494 "parity": self.stripes(parity),
494 "parity": self.stripes(parity),
495 "basename": f,
495 "basename": f,
496 "permissions": mff[full]}
496 "permissions": mff[full]}
497 parity += 1
497 parity += 1
498
498
499 def dirlist(**map):
499 def dirlist(**map):
500 parity = 0
500 parity = 0
501 fl = files.keys()
501 fl = files.keys()
502 fl.sort()
502 fl.sort()
503 for f in fl:
503 for f in fl:
504 full, fnode = files[f]
504 full, fnode = files[f]
505 if fnode:
505 if fnode:
506 continue
506 continue
507
507
508 yield {"parity": self.stripes(parity),
508 yield {"parity": self.stripes(parity),
509 "path": os.path.join(path, f),
509 "path": os.path.join(path, f),
510 "manifest": mnode,
510 "manifest": mnode,
511 "basename": f[:-1]}
511 "basename": f[:-1]}
512 parity += 1
512 parity += 1
513
513
514 yield self.t("manifest",
514 yield self.t("manifest",
515 manifest=mnode,
515 manifest=mnode,
516 rev=rev,
516 rev=rev,
517 node=hex(node),
517 node=hex(node),
518 path=path,
518 path=path,
519 up=_up(path),
519 up=_up(path),
520 fentries=filelist,
520 fentries=filelist,
521 dentries=dirlist,
521 dentries=dirlist,
522 archives=self.archivelist(hex(node)))
522 archives=self.archivelist(hex(node)))
523
523
524 def tags(self):
524 def tags(self):
525 cl = self.repo.changelog
525 cl = self.repo.changelog
526 mf = cl.read(cl.tip())[0]
526 mf = cl.read(cl.tip())[0]
527
527
528 i = self.repo.tagslist()
528 i = self.repo.tagslist()
529 i.reverse()
529 i.reverse()
530
530
531 def entries(notip=False, **map):
531 def entries(notip=False, **map):
532 parity = 0
532 parity = 0
533 for k,n in i:
533 for k,n in i:
534 if notip and k == "tip": continue
534 if notip and k == "tip": continue
535 yield {"parity": self.stripes(parity),
535 yield {"parity": self.stripes(parity),
536 "tag": k,
536 "tag": k,
537 "tagmanifest": hex(cl.read(n)[0]),
537 "tagmanifest": hex(cl.read(n)[0]),
538 "date": cl.read(n)[2],
538 "date": cl.read(n)[2],
539 "node": hex(n)}
539 "node": hex(n)}
540 parity += 1
540 parity += 1
541
541
542 yield self.t("tags",
542 yield self.t("tags",
543 manifest=hex(mf),
543 manifest=hex(mf),
544 entries=lambda **x: entries(False, **x),
544 entries=lambda **x: entries(False, **x),
545 entriesnotip=lambda **x: entries(True, **x))
545 entriesnotip=lambda **x: entries(True, **x))
546
546
547 def summary(self):
547 def summary(self):
548 cl = self.repo.changelog
548 cl = self.repo.changelog
549 mf = cl.read(cl.tip())[0]
549 mf = cl.read(cl.tip())[0]
550
550
551 i = self.repo.tagslist()
551 i = self.repo.tagslist()
552 i.reverse()
552 i.reverse()
553
553
554 def tagentries(**map):
554 def tagentries(**map):
555 parity = 0
555 parity = 0
556 count = 0
556 count = 0
557 for k,n in i:
557 for k,n in i:
558 if k == "tip": # skip tip
558 if k == "tip": # skip tip
559 continue
559 continue
560
560
561 count += 1
561 count += 1
562 if count > 10: # limit to 10 tags
562 if count > 10: # limit to 10 tags
563 break
563 break
564
564
565 c = cl.read(n)
565 c = cl.read(n)
566 m = c[0]
566 m = c[0]
567 t = c[2]
567 t = c[2]
568
568
569 yield self.t("tagentry",
569 yield self.t("tagentry",
570 parity = self.stripes(parity),
570 parity = self.stripes(parity),
571 tag = k,
571 tag = k,
572 node = hex(n),
572 node = hex(n),
573 date = t,
573 date = t,
574 tagmanifest = hex(m))
574 tagmanifest = hex(m))
575 parity += 1
575 parity += 1
576
576
577 def changelist(**map):
577 def changelist(**map):
578 parity = 0
578 parity = 0
579 cl = self.repo.changelog
579 cl = self.repo.changelog
580 l = [] # build a list in forward order for efficiency
580 l = [] # build a list in forward order for efficiency
581 for i in range(start, end):
581 for i in range(start, end):
582 n = cl.node(i)
582 n = cl.node(i)
583 changes = cl.read(n)
583 changes = cl.read(n)
584 hn = hex(n)
584 hn = hex(n)
585 t = changes[2]
585 t = changes[2]
586
586
587 l.insert(0, self.t(
587 l.insert(0, self.t(
588 'shortlogentry',
588 'shortlogentry',
589 parity = parity,
589 parity = parity,
590 author = changes[1],
590 author = changes[1],
591 manifest = hex(changes[0]),
591 manifest = hex(changes[0]),
592 desc = changes[4],
592 desc = changes[4],
593 date = t,
593 date = t,
594 rev = i,
594 rev = i,
595 node = hn))
595 node = hn))
596 parity = 1 - parity
596 parity = 1 - parity
597
597
598 yield l
598 yield l
599
599
600 cl = self.repo.changelog
600 cl = self.repo.changelog
601 mf = cl.read(cl.tip())[0]
601 mf = cl.read(cl.tip())[0]
602 count = cl.count()
602 count = cl.count()
603 start = max(0, count - self.maxchanges)
603 start = max(0, count - self.maxchanges)
604 end = min(count, start + self.maxchanges)
604 end = min(count, start + self.maxchanges)
605
605
606 yield self.t("summary",
606 yield self.t("summary",
607 desc = self.repo.ui.config("web", "description", "unknown"),
607 desc = self.repo.ui.config("web", "description", "unknown"),
608 owner = (self.repo.ui.config("ui", "username") or # preferred
608 owner = (self.repo.ui.config("ui", "username") or # preferred
609 self.repo.ui.config("web", "contact") or # deprecated
609 self.repo.ui.config("web", "contact") or # deprecated
610 self.repo.ui.config("web", "author", "unknown")), # also
610 self.repo.ui.config("web", "author", "unknown")), # also
611 lastchange = (0, 0), # FIXME
611 lastchange = (0, 0), # FIXME
612 manifest = hex(mf),
612 manifest = hex(mf),
613 tags = tagentries,
613 tags = tagentries,
614 shortlog = changelist)
614 shortlog = changelist)
615
615
616 def filediff(self, file, changeset):
616 def filediff(self, file, changeset):
617 cl = self.repo.changelog
617 cl = self.repo.changelog
618 n = self.repo.lookup(changeset)
618 n = self.repo.lookup(changeset)
619 changeset = hex(n)
619 changeset = hex(n)
620 p1 = cl.parents(n)[0]
620 p1 = cl.parents(n)[0]
621 cs = cl.read(n)
621 cs = cl.read(n)
622 mf = self.repo.manifest.read(cs[0])
622 mf = self.repo.manifest.read(cs[0])
623
623
624 def diff(**map):
624 def diff(**map):
625 yield self.diff(p1, n, [file])
625 yield self.diff(p1, n, [file])
626
626
627 yield self.t("filediff",
627 yield self.t("filediff",
628 file=file,
628 file=file,
629 filenode=hex(mf.get(file, nullid)),
629 filenode=hex(mf.get(file, nullid)),
630 node=changeset,
630 node=changeset,
631 rev=self.repo.changelog.rev(n),
631 rev=self.repo.changelog.rev(n),
632 parent=self.siblings(cl.parents(n), cl.rev),
632 parent=self.siblings(cl.parents(n), cl.rev),
633 child=self.siblings(cl.children(n), cl.rev),
633 child=self.siblings(cl.children(n), cl.rev),
634 diff=diff)
634 diff=diff)
635
635
636 archive_specs = {
636 archive_specs = {
637 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
637 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
638 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
638 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
639 'zip': ('application/zip', 'zip', '.zip', None),
639 'zip': ('application/zip', 'zip', '.zip', None),
640 }
640 }
641
641
642 def archive(self, req, cnode, type_):
642 def archive(self, req, cnode, type_):
643 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
643 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
644 name = "%s-%s" % (reponame, short(cnode))
644 name = "%s-%s" % (reponame, short(cnode))
645 mimetype, artype, extension, encoding = self.archive_specs[type_]
645 mimetype, artype, extension, encoding = self.archive_specs[type_]
646 headers = [('Content-type', mimetype),
646 headers = [('Content-type', mimetype),
647 ('Content-disposition', 'attachment; filename=%s%s' %
647 ('Content-disposition', 'attachment; filename=%s%s' %
648 (name, extension))]
648 (name, extension))]
649 if encoding:
649 if encoding:
650 headers.append(('Content-encoding', encoding))
650 headers.append(('Content-encoding', encoding))
651 req.header(headers)
651 req.header(headers)
652 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
652 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
653
653
654 # add tags to things
654 # add tags to things
655 # tags -> list of changesets corresponding to tags
655 # tags -> list of changesets corresponding to tags
656 # find tag, changeset, file
656 # find tag, changeset, file
657
657
658 def cleanpath(self, path):
658 def cleanpath(self, path):
659 p = util.normpath(path)
659 p = util.normpath(path)
660 if p[:2] == "..":
660 if p[:2] == "..":
661 raise Exception("suspicious path")
661 raise Exception("suspicious path")
662 return p
662 return p
663
663
664 def run(self):
664 def run(self):
665 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
665 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
666 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
666 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
667 import mercurial.hgweb.wsgicgi as wsgicgi
667 import mercurial.hgweb.wsgicgi as wsgicgi
668 from request import wsgiapplication
668 from request import wsgiapplication
669 def make_web_app():
669 def make_web_app():
670 return self
670 return self
671 wsgicgi.launch(wsgiapplication(make_web_app))
671 wsgicgi.launch(wsgiapplication(make_web_app))
672
672
673 def run_wsgi(self, req):
673 def run_wsgi(self, req):
674 def header(**map):
674 def header(**map):
675 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
675 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
676 msg = mimetools.Message(header_file, 0)
676 msg = mimetools.Message(header_file, 0)
677 req.header(msg.items())
677 req.header(msg.items())
678 yield header_file.read()
678 yield header_file.read()
679
679
680 def rawfileheader(**map):
680 def rawfileheader(**map):
681 req.header([('Content-type', map['mimetype']),
681 req.header([('Content-type', map['mimetype']),
682 ('Content-disposition', 'filename=%s' % map['file']),
682 ('Content-disposition', 'filename=%s' % map['file']),
683 ('Content-length', str(len(map['raw'])))])
683 ('Content-length', str(len(map['raw'])))])
684 yield ''
684 yield ''
685
685
686 def footer(**map):
686 def footer(**map):
687 yield self.t("footer",
687 yield self.t("footer",
688 motd=self.repo.ui.config("web", "motd", ""),
688 motd=self.repo.ui.config("web", "motd", ""),
689 **map)
689 **map)
690
690
691 def expand_form(form):
691 def expand_form(form):
692 shortcuts = {
692 shortcuts = {
693 'cl': [('cmd', ['changelog']), ('rev', None)],
693 'cl': [('cmd', ['changelog']), ('rev', None)],
694 'cs': [('cmd', ['changeset']), ('node', None)],
694 'cs': [('cmd', ['changeset']), ('node', None)],
695 'f': [('cmd', ['file']), ('filenode', None)],
695 'f': [('cmd', ['file']), ('filenode', None)],
696 'fl': [('cmd', ['filelog']), ('filenode', None)],
696 'fl': [('cmd', ['filelog']), ('filenode', None)],
697 'fd': [('cmd', ['filediff']), ('node', None)],
697 'fd': [('cmd', ['filediff']), ('node', None)],
698 'fa': [('cmd', ['annotate']), ('filenode', None)],
698 'fa': [('cmd', ['annotate']), ('filenode', None)],
699 'mf': [('cmd', ['manifest']), ('manifest', None)],
699 'mf': [('cmd', ['manifest']), ('manifest', None)],
700 'ca': [('cmd', ['archive']), ('node', None)],
700 'ca': [('cmd', ['archive']), ('node', None)],
701 'tags': [('cmd', ['tags'])],
701 'tags': [('cmd', ['tags'])],
702 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
702 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
703 'static': [('cmd', ['static']), ('file', None)]
703 'static': [('cmd', ['static']), ('file', None)]
704 }
704 }
705
705
706 for k in shortcuts.iterkeys():
706 for k in shortcuts.iterkeys():
707 if form.has_key(k):
707 if form.has_key(k):
708 for name, value in shortcuts[k]:
708 for name, value in shortcuts[k]:
709 if value is None:
709 if value is None:
710 value = form[k]
710 value = form[k]
711 form[name] = value
711 form[name] = value
712 del form[k]
712 del form[k]
713
713
714 self.refresh()
714 self.refresh()
715
715
716 expand_form(req.form)
716 expand_form(req.form)
717
717
718 m = os.path.join(self.templatepath, "map")
718 m = os.path.join(self.templatepath, "map")
719 style = self.repo.ui.config("web", "style", "")
719 style = self.repo.ui.config("web", "style", "")
720 if req.form.has_key('style'):
720 if req.form.has_key('style'):
721 style = req.form['style'][0]
721 style = req.form['style'][0]
722 if style:
722 if style:
723 b = os.path.basename("map-" + style)
723 b = os.path.basename("map-" + style)
724 p = os.path.join(self.templatepath, b)
724 p = os.path.join(self.templatepath, b)
725 if os.path.isfile(p):
725 if os.path.isfile(p):
726 m = p
726 m = p
727
727
728 port = req.env["SERVER_PORT"]
728 port = req.env["SERVER_PORT"]
729 port = port != "80" and (":" + port) or ""
729 port = port != "80" and (":" + port) or ""
730 uri = req.env["REQUEST_URI"]
730 uri = req.env["REQUEST_URI"]
731 if "?" in uri:
731 if "?" in uri:
732 uri = uri.split("?")[0]
732 uri = uri.split("?")[0]
733 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
733 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
734 if not self.reponame:
734 if not self.reponame:
735 self.reponame = (self.repo.ui.config("web", "name")
735 self.reponame = (self.repo.ui.config("web", "name")
736 or uri.strip('/') or self.repo.root)
736 or uri.strip('/') or self.repo.root)
737
737
738 self.t = templater.templater(m, templater.common_filters,
738 self.t = templater.templater(m, templater.common_filters,
739 defaults={"url": url,
739 defaults={"url": url,
740 "repo": self.reponame,
740 "repo": self.reponame,
741 "header": header,
741 "header": header,
742 "footer": footer,
742 "footer": footer,
743 "rawfileheader": rawfileheader,
743 "rawfileheader": rawfileheader,
744 })
744 })
745
745
746 if not req.form.has_key('cmd'):
746 if not req.form.has_key('cmd'):
747 req.form['cmd'] = [self.t.cache['default'],]
747 req.form['cmd'] = [self.t.cache['default'],]
748
748
749 cmd = req.form['cmd'][0]
749 cmd = req.form['cmd'][0]
750
750
751 method = getattr(self, 'do_' + cmd, None)
751 method = getattr(self, 'do_' + cmd, None)
752 if method:
752 if method:
753 method(req)
753 method(req)
754 else:
754 else:
755 req.write(self.t("error"))
755 req.write(self.t("error"))
756
756
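
run_wsgi() above routes each request by looking up a do_<cmd> method with getattr and rendering the error template when none exists. A small sketch of the same dispatch idiom with a made-up handler class (illustrative only, not hgweb itself):

class handlers(object):
    def do_tags(self):
        return "tags page"
    def dispatch(self, cmd):
        # getattr with a None default plays the role of the unknown-command
        # check above: missing handlers fall through to the error branch
        method = getattr(self, 'do_' + cmd, None)
        if method:
            return method()
        return "error: no handler for %r" % cmd

handlers().dispatch('tags') returns "tags page", while handlers().dispatch('bogus') falls through to the error string.
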
757 def stripes(self, parity):
757 def stripes(self, parity):
758 "make horizontal stripes for easier reading"
758 "make horizontal stripes for easier reading"
759 if self.stripecount:
759 if self.stripecount:
760 return (1 + parity / self.stripecount) & 1
760 return (1 + parity / self.stripecount) & 1
761 else:
761 else:
762 return 0
762 return 0
763
763
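
When stripecount is non-zero, the truncating division in stripes() above makes the returned shade flip once every stripecount rows. A small sketch of the same arithmetic as a free function, with the truncation made explicit:

def stripes(parity, stripecount=3):
    # same expression as the method above, using // so the truncating
    # division is explicit
    if not stripecount:
        return 0
    return (1 + parity // stripecount) & 1

[stripes(i) for i in range(9)] evaluates to [1, 1, 1, 0, 0, 0, 1, 1, 1], i.e. blocks of three rows alternate.
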
764 def do_changelog(self, req):
764 def do_changelog(self, req):
765 hi = self.repo.changelog.count() - 1
765 hi = self.repo.changelog.count() - 1
766 if req.form.has_key('rev'):
766 if req.form.has_key('rev'):
767 hi = req.form['rev'][0]
767 hi = req.form['rev'][0]
768 try:
768 try:
769 hi = self.repo.changelog.rev(self.repo.lookup(hi))
769 hi = self.repo.changelog.rev(self.repo.lookup(hi))
770 except hg.RepoError:
770 except hg.RepoError:
771 req.write(self.search(hi)) # XXX redirect to 404 page?
771 req.write(self.search(hi)) # XXX redirect to 404 page?
772 return
772 return
773
773
774 req.write(self.changelog(hi))
774 req.write(self.changelog(hi))
775
775
776 def do_changeset(self, req):
776 def do_changeset(self, req):
777 req.write(self.changeset(req.form['node'][0]))
777 req.write(self.changeset(req.form['node'][0]))
778
778
779 def do_manifest(self, req):
779 def do_manifest(self, req):
780 req.write(self.manifest(req.form['manifest'][0],
780 req.write(self.manifest(req.form['manifest'][0],
781 self.cleanpath(req.form['path'][0])))
781 self.cleanpath(req.form['path'][0])))
782
782
783 def do_tags(self, req):
783 def do_tags(self, req):
784 req.write(self.tags())
784 req.write(self.tags())
785
785
786 def do_summary(self, req):
786 def do_summary(self, req):
787 req.write(self.summary())
787 req.write(self.summary())
788
788
789 def do_filediff(self, req):
789 def do_filediff(self, req):
790 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
790 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
791 req.form['node'][0]))
791 req.form['node'][0]))
792
792
793 def do_file(self, req):
793 def do_file(self, req):
794 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
794 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
795 req.form['filenode'][0]))
795 req.form['filenode'][0]))
796
796
797 def do_annotate(self, req):
797 def do_annotate(self, req):
798 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
798 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
799 req.form['filenode'][0]))
799 req.form['filenode'][0]))
800
800
801 def do_filelog(self, req):
801 def do_filelog(self, req):
802 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
802 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
803 req.form['filenode'][0]))
803 req.form['filenode'][0]))
804
804
805 def do_heads(self, req):
805 def do_heads(self, req):
806 resp = " ".join(map(hex, self.repo.heads())) + "\n"
806 resp = " ".join(map(hex, self.repo.heads())) + "\n"
807 req.httphdr("application/mercurial-0.1", length=len(resp))
807 req.httphdr("application/mercurial-0.1", length=len(resp))
808 req.write(resp)
808 req.write(resp)
809
809
810 def do_branches(self, req):
810 def do_branches(self, req):
811 nodes = []
811 nodes = []
812 if req.form.has_key('nodes'):
812 if req.form.has_key('nodes'):
813 nodes = map(bin, req.form['nodes'][0].split(" "))
813 nodes = map(bin, req.form['nodes'][0].split(" "))
814 resp = cStringIO.StringIO()
814 resp = cStringIO.StringIO()
815 for b in self.repo.branches(nodes):
815 for b in self.repo.branches(nodes):
816 resp.write(" ".join(map(hex, b)) + "\n")
816 resp.write(" ".join(map(hex, b)) + "\n")
817 resp = resp.getvalue()
817 resp = resp.getvalue()
818 req.httphdr("application/mercurial-0.1", length=len(resp))
818 req.httphdr("application/mercurial-0.1", length=len(resp))
819 req.write(resp)
819 req.write(resp)
820
820
821 def do_between(self, req):
821 def do_between(self, req):
822 nodes = []
822 nodes = []
823 if req.form.has_key('pairs'):
823 if req.form.has_key('pairs'):
824 pairs = [map(bin, p.split("-"))
824 pairs = [map(bin, p.split("-"))
825 for p in req.form['pairs'][0].split(" ")]
825 for p in req.form['pairs'][0].split(" ")]
826 resp = cStringIO.StringIO()
826 resp = cStringIO.StringIO()
827 for b in self.repo.between(pairs):
827 for b in self.repo.between(pairs):
828 resp.write(" ".join(map(hex, b)) + "\n")
828 resp.write(" ".join(map(hex, b)) + "\n")
829 resp = resp.getvalue()
829 resp = resp.getvalue()
830 req.httphdr("application/mercurial-0.1", length=len(resp))
830 req.httphdr("application/mercurial-0.1", length=len(resp))
831 req.write(resp)
831 req.write(resp)
832
832
833 def do_changegroup(self, req):
833 def do_changegroup(self, req):
834 req.httphdr("application/mercurial-0.1")
834 req.httphdr("application/mercurial-0.1")
835 nodes = []
835 nodes = []
836 if not self.allowpull:
836 if not self.allowpull:
837 return
837 return
838
838
839 if req.form.has_key('roots'):
839 if req.form.has_key('roots'):
840 nodes = map(bin, req.form['roots'][0].split(" "))
840 nodes = map(bin, req.form['roots'][0].split(" "))
841
841
842 z = zlib.compressobj()
842 z = zlib.compressobj()
843 f = self.repo.changegroup(nodes, 'serve')
843 f = self.repo.changegroup(nodes, 'serve')
844 while 1:
844 while 1:
845 chunk = f.read(4096)
845 chunk = f.read(4096)
846 if not chunk:
846 if not chunk:
847 break
847 break
848 req.write(z.compress(chunk))
848 req.write(z.compress(chunk))
849
849
850 req.write(z.flush())
850 req.write(z.flush())
851
851
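For reference, do_changegroup above streams the changegroup through a zlib compressor in 4096-byte chunks and flushes at the end, so memory use stays bounded regardless of changegroup size. A minimal standalone sketch of the same pattern, assuming file-like src and dst objects (both hypothetical):

    import zlib

    def stream_compressed(src, dst, chunksize=4096):
        # incrementally compress src into dst, mirroring the loop above
        z = zlib.compressobj()
        while True:
            chunk = src.read(chunksize)
            if not chunk:
                break
            dst.write(z.compress(chunk))
        dst.write(z.flush())  # emit whatever the compressor still buffers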
852 def do_archive(self, req):
852 def do_archive(self, req):
853 changeset = self.repo.lookup(req.form['node'][0])
853 changeset = self.repo.lookup(req.form['node'][0])
854 type_ = req.form['type'][0]
854 type_ = req.form['type'][0]
855 allowed = self.repo.ui.configlist("web", "allow_archive")
855 allowed = self.repo.ui.configlist("web", "allow_archive")
856 if (type_ in self.archives and (type_ in allowed or
856 if (type_ in self.archives and (type_ in allowed or
857 self.repo.ui.configbool("web", "allow" + type_, False))):
857 self.repo.ui.configbool("web", "allow" + type_, False))):
858 self.archive(req, changeset, type_)
858 self.archive(req, changeset, type_)
859 return
859 return
860
860
861 req.write(self.t("error"))
861 req.write(self.t("error"))
862
862
863 def do_static(self, req):
863 def do_static(self, req):
864 fname = req.form['file'][0]
864 fname = req.form['file'][0]
865 static = self.repo.ui.config("web", "static",
865 static = self.repo.ui.config("web", "static",
866 os.path.join(self.templatepath,
866 os.path.join(self.templatepath,
867 "static"))
867 "static"))
868 req.write(staticfile(static, fname, req)
868 req.write(staticfile(static, fname, req)
869 or self.t("error", error="%r not found" % fname))
869 or self.t("error", error="%r not found" % fname))
870
870
871 def do_capabilities(self, req):
871 def do_capabilities(self, req):
872 caps = ['unbundle']
872 caps = ['unbundle']
873 if self.repo.ui.configbool('server', 'uncompressed'):
873 if self.repo.ui.configbool('server', 'uncompressed'):
874 caps.append('stream=%d' % self.repo.revlogversion)
874 caps.append('stream=%d' % self.repo.revlogversion)
875 resp = ' '.join(caps)
875 resp = ' '.join(caps)
876 req.httphdr("application/mercurial-0.1", length=len(resp))
876 req.httphdr("application/mercurial-0.1", length=len(resp))
877 req.write(resp)
877 req.write(resp)
878
878
879 def check_perm(self, req, op, default):
879 def check_perm(self, req, op, default):
880 '''check permission for operation based on user auth.
880 '''check permission for operation based on user auth.
881 return true if op allowed, else false.
881 return true if op allowed, else false.
882 default is policy to use if no config given.'''
882 default is policy to use if no config given.'''
883
883
884 user = req.env.get('REMOTE_USER')
884 user = req.env.get('REMOTE_USER')
885
885
886 deny = self.repo.ui.configlist('web', 'deny_' + op)
886 deny = self.repo.ui.configlist('web', 'deny_' + op)
887 if deny and (not user or deny == ['*'] or user in deny):
887 if deny and (not user or deny == ['*'] or user in deny):
888 return False
888 return False
889
889
890 allow = self.repo.ui.configlist('web', 'allow_' + op)
890 allow = self.repo.ui.configlist('web', 'allow_' + op)
891 return (allow and (allow == ['*'] or user in allow)) or default
891 return (allow and (allow == ['*'] or user in allow)) or default
892
892
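In other words, check_perm is deny-first: a non-empty deny_&lt;op&gt; list blocks anonymous users, a '*' wildcard, or any explicitly listed user, and only then is allow_&lt;op&gt; (or the caller-supplied default) consulted. A small sketch of the same evaluation with plain inputs, written as a hypothetical standalone helper:

    def allowed(user, deny, allow, default):
        # the deny list wins: no user, a '*' wildcard, or an explicit match blocks access
        if deny and (not user or deny == ['*'] or user in deny):
            return False
        # otherwise the allow list decides, falling back to the supplied default
        return bool(allow and (allow == ['*'] or user in allow)) or default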
893 def do_unbundle(self, req):
893 def do_unbundle(self, req):
894 def bail(response, headers={}):
894 def bail(response, headers={}):
895 length = int(req.env['CONTENT_LENGTH'])
895 length = int(req.env['CONTENT_LENGTH'])
896 for s in util.filechunkiter(req, limit=length):
896 for s in util.filechunkiter(req, limit=length):
897 # drain incoming bundle, else client will not see
897 # drain incoming bundle, else client will not see
898 # response when run outside cgi script
898 # response when run outside cgi script
899 pass
899 pass
900 req.httphdr("application/mercurial-0.1", headers=headers)
900 req.httphdr("application/mercurial-0.1", headers=headers)
901 req.write('0\n')
901 req.write('0\n')
902 req.write(response)
902 req.write(response)
903
903
904 # require ssl by default, auth info cannot be sniffed and
904 # require ssl by default, auth info cannot be sniffed and
905 # replayed
905 # replayed
906 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
906 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
907 if ssl_req and not req.env.get('HTTPS'):
908 bail(_('ssl required\n'))
909 return
907 if ssl_req:
908 if not req.env.get('HTTPS'):
909 bail(_('ssl required\n'))
910 return
911 proto = 'https'
912 else:
913 proto = 'http'
910
914
911 # do not allow push unless explicitly allowed
915 # do not allow push unless explicitly allowed
912 if not self.check_perm(req, 'push', False):
916 if not self.check_perm(req, 'push', False):
913 bail(_('push not authorized\n'),
917 bail(_('push not authorized\n'),
914 headers={'status': '401 Unauthorized'})
918 headers={'status': '401 Unauthorized'})
915 return
919 return
916
920
917 req.httphdr("application/mercurial-0.1")
921 req.httphdr("application/mercurial-0.1")
918
922
919 their_heads = req.form['heads'][0].split(' ')
923 their_heads = req.form['heads'][0].split(' ')
920
924
921 def check_heads():
925 def check_heads():
922 heads = map(hex, self.repo.heads())
926 heads = map(hex, self.repo.heads())
923 return their_heads == [hex('force')] or their_heads == heads
927 return their_heads == [hex('force')] or their_heads == heads
924
928
925 # fail early if possible
929 # fail early if possible
926 if not check_heads():
930 if not check_heads():
927 bail(_('unsynced changes\n'))
931 bail(_('unsynced changes\n'))
928 return
932 return
929
933
930 # do not lock repo until all changegroup data is
934 # do not lock repo until all changegroup data is
931 # streamed. save to temporary file.
935 # streamed. save to temporary file.
932
936
933 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
937 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
934 fp = os.fdopen(fd, 'wb+')
938 fp = os.fdopen(fd, 'wb+')
935 try:
939 try:
936 length = int(req.env['CONTENT_LENGTH'])
940 length = int(req.env['CONTENT_LENGTH'])
937 for s in util.filechunkiter(req, limit=length):
941 for s in util.filechunkiter(req, limit=length):
938 fp.write(s)
942 fp.write(s)
939
943
940 lock = self.repo.lock()
944 lock = self.repo.lock()
941 try:
945 try:
942 if not check_heads():
946 if not check_heads():
943 req.write('0\n')
947 req.write('0\n')
944 req.write(_('unsynced changes\n'))
948 req.write(_('unsynced changes\n'))
945 return
949 return
946
950
947 fp.seek(0)
951 fp.seek(0)
948
952
949 # send addchangegroup output to client
953 # send addchangegroup output to client
950
954
951 old_stdout = sys.stdout
955 old_stdout = sys.stdout
952 sys.stdout = cStringIO.StringIO()
956 sys.stdout = cStringIO.StringIO()
953
957
954 try:
958 try:
955 ret = self.repo.addchangegroup(fp, 'serve')
959 url = 'remote:%s:%s' % (proto,
960 req.env.get('REMOTE_HOST', ''))
961 ret = self.repo.addchangegroup(fp, 'serve', url)
956 finally:
962 finally:
957 val = sys.stdout.getvalue()
963 val = sys.stdout.getvalue()
958 sys.stdout = old_stdout
964 sys.stdout = old_stdout
959 req.write('%d\n' % ret)
965 req.write('%d\n' % ret)
960 req.write(val)
966 req.write(val)
961 finally:
967 finally:
962 lock.release()
968 lock.release()
963 finally:
969 finally:
964 fp.close()
970 fp.close()
965 os.unlink(tempname)
971 os.unlink(tempname)
966
972
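To make the new third argument to addchangegroup concrete: the handler now labels an incoming push with the transport it arrived over and the pushing client's address. A quick illustration with made-up values:

    proto = 'https'                               # 'http' when web.push_ssl is disabled
    remote_host = '203.0.113.7'                   # hypothetical REMOTE_HOST from the CGI environment
    url = 'remote:%s:%s' % (proto, remote_host)
    # url == 'remote:https:203.0.113.7'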
967 def do_stream_out(self, req):
973 def do_stream_out(self, req):
968 req.httphdr("application/mercurial-0.1")
974 req.httphdr("application/mercurial-0.1")
969 streamclone.stream_out(self.repo, req)
975 streamclone.stream_out(self.repo, req)
@@ -1,337 +1,341 @@
1 # httprepo.py - HTTP repository proxy classes for mercurial
1 # httprepo.py - HTTP repository proxy classes for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from remoterepo import *
9 from remoterepo import *
10 from i18n import gettext as _
10 from i18n import gettext as _
11 from demandload import *
11 from demandload import *
12 demandload(globals(), "hg os urllib urllib2 urlparse zlib util httplib")
12 demandload(globals(), "hg os urllib urllib2 urlparse zlib util httplib")
13 demandload(globals(), "errno keepalive tempfile socket")
13 demandload(globals(), "errno keepalive tempfile socket")
14
14
15 class passwordmgr(urllib2.HTTPPasswordMgrWithDefaultRealm):
15 class passwordmgr(urllib2.HTTPPasswordMgrWithDefaultRealm):
16 def __init__(self, ui):
16 def __init__(self, ui):
17 urllib2.HTTPPasswordMgrWithDefaultRealm.__init__(self)
17 urllib2.HTTPPasswordMgrWithDefaultRealm.__init__(self)
18 self.ui = ui
18 self.ui = ui
19
19
20 def find_user_password(self, realm, authuri):
20 def find_user_password(self, realm, authuri):
21 authinfo = urllib2.HTTPPasswordMgrWithDefaultRealm.find_user_password(
21 authinfo = urllib2.HTTPPasswordMgrWithDefaultRealm.find_user_password(
22 self, realm, authuri)
22 self, realm, authuri)
23 user, passwd = authinfo
23 user, passwd = authinfo
24 if user and passwd:
24 if user and passwd:
25 return (user, passwd)
25 return (user, passwd)
26
26
27 if not self.ui.interactive:
27 if not self.ui.interactive:
28 raise util.Abort(_('http authorization required'))
28 raise util.Abort(_('http authorization required'))
29
29
30 self.ui.write(_("http authorization required\n"))
30 self.ui.write(_("http authorization required\n"))
31 self.ui.status(_("realm: %s\n") % realm)
31 self.ui.status(_("realm: %s\n") % realm)
32 if user:
32 if user:
33 self.ui.status(_("user: %s\n") % user)
33 self.ui.status(_("user: %s\n") % user)
34 else:
34 else:
35 user = self.ui.prompt(_("user:"), default=None)
35 user = self.ui.prompt(_("user:"), default=None)
36
36
37 if not passwd:
37 if not passwd:
38 passwd = self.ui.getpass()
38 passwd = self.ui.getpass()
39
39
40 self.add_password(realm, authuri, user, passwd)
40 self.add_password(realm, authuri, user, passwd)
41 return (user, passwd)
41 return (user, passwd)
42
42
43 def netlocsplit(netloc):
43 def netlocsplit(netloc):
44 '''split [user[:passwd]@]host[:port] into 4-tuple.'''
44 '''split [user[:passwd]@]host[:port] into 4-tuple.'''
45
45
46 a = netloc.find('@')
46 a = netloc.find('@')
47 if a == -1:
47 if a == -1:
48 user, passwd = None, None
48 user, passwd = None, None
49 else:
49 else:
50 userpass, netloc = netloc[:a], netloc[a+1:]
50 userpass, netloc = netloc[:a], netloc[a+1:]
51 c = userpass.find(':')
51 c = userpass.find(':')
52 if c == -1:
52 if c == -1:
53 user, passwd = urllib.unquote(userpass), None
53 user, passwd = urllib.unquote(userpass), None
54 else:
54 else:
55 user = urllib.unquote(userpass[:c])
55 user = urllib.unquote(userpass[:c])
56 passwd = urllib.unquote(userpass[c+1:])
56 passwd = urllib.unquote(userpass[c+1:])
57 c = netloc.find(':')
57 c = netloc.find(':')
58 if c == -1:
58 if c == -1:
59 host, port = netloc, None
59 host, port = netloc, None
60 else:
60 else:
61 host, port = netloc[:c], netloc[c+1:]
61 host, port = netloc[:c], netloc[c+1:]
62 return host, port, user, passwd
62 return host, port, user, passwd
63
63
64 def netlocunsplit(host, port, user=None, passwd=None):
64 def netlocunsplit(host, port, user=None, passwd=None):
65 '''turn host, port, user, passwd into [user[:passwd]@]host[:port].'''
65 '''turn host, port, user, passwd into [user[:passwd]@]host[:port].'''
66 if port:
66 if port:
67 hostport = host + ':' + port
67 hostport = host + ':' + port
68 else:
68 else:
69 hostport = host
69 hostport = host
70 if user:
70 if user:
71 if passwd:
71 if passwd:
72 userpass = urllib.quote(user) + ':' + urllib.quote(passwd)
72 userpass = urllib.quote(user) + ':' + urllib.quote(passwd)
73 else:
73 else:
74 userpass = urllib.quote(user)
74 userpass = urllib.quote(user)
75 return userpass + '@' + hostport
75 return userpass + '@' + hostport
76 return hostport
76 return hostport
77
77
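A quick round trip through the two helpers above, assuming the module is importable as mercurial.httprepo (values are illustrative only):

    from mercurial.httprepo import netlocsplit, netlocunsplit

    host, port, user, passwd = netlocsplit('alice:secret@hg.example.com:8000')
    # -> ('hg.example.com', '8000', 'alice', 'secret')
    netlocunsplit(host, port, user, passwd)
    # -> 'alice:secret@hg.example.com:8000'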
78 class httpconnection(keepalive.HTTPConnection):
78 class httpconnection(keepalive.HTTPConnection):
79 # must be able to send big bundle as stream.
79 # must be able to send big bundle as stream.
80
80
81 def send(self, data):
81 def send(self, data):
82 if isinstance(data, str):
82 if isinstance(data, str):
83 keepalive.HTTPConnection.send(self, data)
83 keepalive.HTTPConnection.send(self, data)
84 else:
84 else:
85 # if auth required, some data sent twice, so rewind here
85 # if auth required, some data sent twice, so rewind here
86 data.seek(0)
86 data.seek(0)
87 for chunk in util.filechunkiter(data):
87 for chunk in util.filechunkiter(data):
88 keepalive.HTTPConnection.send(self, chunk)
88 keepalive.HTTPConnection.send(self, chunk)
89
89
90 class basehttphandler(keepalive.HTTPHandler):
90 class basehttphandler(keepalive.HTTPHandler):
91 def http_open(self, req):
91 def http_open(self, req):
92 return self.do_open(httpconnection, req)
92 return self.do_open(httpconnection, req)
93
93
94 has_https = hasattr(urllib2, 'HTTPSHandler')
94 has_https = hasattr(urllib2, 'HTTPSHandler')
95 if has_https:
95 if has_https:
96 class httpsconnection(httplib.HTTPSConnection):
96 class httpsconnection(httplib.HTTPSConnection):
97 response_class = keepalive.HTTPResponse
97 response_class = keepalive.HTTPResponse
98 # must be able to send big bundle as stream.
98 # must be able to send big bundle as stream.
99
99
100 def send(self, data):
100 def send(self, data):
101 if isinstance(data, str):
101 if isinstance(data, str):
102 httplib.HTTPSConnection.send(self, data)
102 httplib.HTTPSConnection.send(self, data)
103 else:
103 else:
104 # if auth required, some data sent twice, so rewind here
104 # if auth required, some data sent twice, so rewind here
105 data.seek(0)
105 data.seek(0)
106 for chunk in util.filechunkiter(data):
106 for chunk in util.filechunkiter(data):
107 httplib.HTTPSConnection.send(self, chunk)
107 httplib.HTTPSConnection.send(self, chunk)
108
108
109 class httphandler(basehttphandler, urllib2.HTTPSHandler):
109 class httphandler(basehttphandler, urllib2.HTTPSHandler):
110 def https_open(self, req):
110 def https_open(self, req):
111 return self.do_open(httpsconnection, req)
111 return self.do_open(httpsconnection, req)
112 else:
112 else:
113 class httphandler(basehttphandler):
113 class httphandler(basehttphandler):
114 pass
114 pass
115
115
116 class httprepository(remoterepository):
116 class httprepository(remoterepository):
117 def __init__(self, ui, path):
117 def __init__(self, ui, path):
118 self.path = path
118 self.caps = None
119 self.caps = None
119 scheme, netloc, urlpath, query, frag = urlparse.urlsplit(path)
120 scheme, netloc, urlpath, query, frag = urlparse.urlsplit(path)
120 if query or frag:
121 if query or frag:
121 raise util.Abort(_('unsupported URL component: "%s"') %
122 raise util.Abort(_('unsupported URL component: "%s"') %
122 (query or frag))
123 (query or frag))
123 if not urlpath: urlpath = '/'
124 if not urlpath: urlpath = '/'
124 host, port, user, passwd = netlocsplit(netloc)
125 host, port, user, passwd = netlocsplit(netloc)
125
126
126 # urllib cannot handle URLs with embedded user or passwd
127 # urllib cannot handle URLs with embedded user or passwd
127 self.url = urlparse.urlunsplit((scheme, netlocunsplit(host, port),
128 self._url = urlparse.urlunsplit((scheme, netlocunsplit(host, port),
128 urlpath, '', ''))
129 urlpath, '', ''))
129 self.ui = ui
130 self.ui = ui
130
131
131 proxyurl = ui.config("http_proxy", "host") or os.getenv('http_proxy')
132 proxyurl = ui.config("http_proxy", "host") or os.getenv('http_proxy')
132 proxyauthinfo = None
133 proxyauthinfo = None
133 handler = httphandler()
134 handler = httphandler()
134
135
135 if proxyurl:
136 if proxyurl:
136 # proxy can be proper url or host[:port]
137 # proxy can be proper url or host[:port]
137 if not (proxyurl.startswith('http:') or
138 if not (proxyurl.startswith('http:') or
138 proxyurl.startswith('https:')):
139 proxyurl.startswith('https:')):
139 proxyurl = 'http://' + proxyurl + '/'
140 proxyurl = 'http://' + proxyurl + '/'
140 snpqf = urlparse.urlsplit(proxyurl)
141 snpqf = urlparse.urlsplit(proxyurl)
141 proxyscheme, proxynetloc, proxypath, proxyquery, proxyfrag = snpqf
142 proxyscheme, proxynetloc, proxypath, proxyquery, proxyfrag = snpqf
142 hpup = netlocsplit(proxynetloc)
143 hpup = netlocsplit(proxynetloc)
143
144
144 proxyhost, proxyport, proxyuser, proxypasswd = hpup
145 proxyhost, proxyport, proxyuser, proxypasswd = hpup
145 if not proxyuser:
146 if not proxyuser:
146 proxyuser = ui.config("http_proxy", "user")
147 proxyuser = ui.config("http_proxy", "user")
147 proxypasswd = ui.config("http_proxy", "passwd")
148 proxypasswd = ui.config("http_proxy", "passwd")
148
149
149 # see if we should use a proxy for this url
150 # see if we should use a proxy for this url
150 no_list = [ "localhost", "127.0.0.1" ]
151 no_list = [ "localhost", "127.0.0.1" ]
151 no_list.extend([p.lower() for
152 no_list.extend([p.lower() for
152 p in ui.configlist("http_proxy", "no")])
153 p in ui.configlist("http_proxy", "no")])
153 no_list.extend([p.strip().lower() for
154 no_list.extend([p.strip().lower() for
154 p in os.getenv("no_proxy", '').split(',')
155 p in os.getenv("no_proxy", '').split(',')
155 if p.strip()])
156 if p.strip()])
156 # "http_proxy.always" config is for running tests on localhost
157 # "http_proxy.always" config is for running tests on localhost
157 if (not ui.configbool("http_proxy", "always") and
158 if (not ui.configbool("http_proxy", "always") and
158 host.lower() in no_list):
159 host.lower() in no_list):
159 ui.debug(_('disabling proxy for %s\n') % host)
160 ui.debug(_('disabling proxy for %s\n') % host)
160 else:
161 else:
161 proxyurl = urlparse.urlunsplit((
162 proxyurl = urlparse.urlunsplit((
162 proxyscheme, netlocunsplit(proxyhost, proxyport,
163 proxyscheme, netlocunsplit(proxyhost, proxyport,
163 proxyuser, proxypasswd or ''),
164 proxyuser, proxypasswd or ''),
164 proxypath, proxyquery, proxyfrag))
165 proxypath, proxyquery, proxyfrag))
165 handler = urllib2.ProxyHandler({scheme: proxyurl})
166 handler = urllib2.ProxyHandler({scheme: proxyurl})
166 ui.debug(_('proxying through %s\n') % proxyurl)
167 ui.debug(_('proxying through %s\n') % proxyurl)
167
168
168 # urllib2 takes proxy values from the environment and those
169 # urllib2 takes proxy values from the environment and those
169 # will take precedence if found, so drop them
170 # will take precedence if found, so drop them
170 for env in ["HTTP_PROXY", "http_proxy", "no_proxy"]:
171 for env in ["HTTP_PROXY", "http_proxy", "no_proxy"]:
171 try:
172 try:
172 if os.environ.has_key(env):
173 if os.environ.has_key(env):
173 del os.environ[env]
174 del os.environ[env]
174 except OSError:
175 except OSError:
175 pass
176 pass
176
177
177 passmgr = passwordmgr(ui)
178 passmgr = passwordmgr(ui)
178 if user:
179 if user:
179 ui.debug(_('http auth: user %s, password %s\n') %
180 ui.debug(_('http auth: user %s, password %s\n') %
180 (user, passwd and '*' * len(passwd) or 'not set'))
181 (user, passwd and '*' * len(passwd) or 'not set'))
181 passmgr.add_password(None, host, user, passwd or '')
182 passmgr.add_password(None, host, user, passwd or '')
182
183
183 opener = urllib2.build_opener(
184 opener = urllib2.build_opener(
184 handler,
185 handler,
185 urllib2.HTTPBasicAuthHandler(passmgr),
186 urllib2.HTTPBasicAuthHandler(passmgr),
186 urllib2.HTTPDigestAuthHandler(passmgr))
187 urllib2.HTTPDigestAuthHandler(passmgr))
187
188
188 # 1.0 here is the _protocol_ version
189 # 1.0 here is the _protocol_ version
189 opener.addheaders = [('User-agent', 'mercurial/proto-1.0')]
190 opener.addheaders = [('User-agent', 'mercurial/proto-1.0')]
190 urllib2.install_opener(opener)
191 urllib2.install_opener(opener)
191
192
193 def url(self):
194 return self.path
195
192 # look up capabilities only when needed
196 # look up capabilities only when needed
193
197
194 def get_caps(self):
198 def get_caps(self):
195 if self.caps is None:
199 if self.caps is None:
196 try:
200 try:
197 self.caps = self.do_read('capabilities').split()
201 self.caps = self.do_read('capabilities').split()
198 except hg.RepoError:
202 except hg.RepoError:
199 self.caps = ()
203 self.caps = ()
200 self.ui.debug(_('capabilities: %s\n') %
204 self.ui.debug(_('capabilities: %s\n') %
201 (' '.join(self.caps or ['none'])))
205 (' '.join(self.caps or ['none'])))
202 return self.caps
206 return self.caps
203
207
204 capabilities = property(get_caps)
208 capabilities = property(get_caps)
205
209
206 def lock(self):
210 def lock(self):
207 raise util.Abort(_('operation not supported over http'))
211 raise util.Abort(_('operation not supported over http'))
208
212
209 def do_cmd(self, cmd, **args):
213 def do_cmd(self, cmd, **args):
210 data = args.pop('data', None)
214 data = args.pop('data', None)
211 headers = args.pop('headers', {})
215 headers = args.pop('headers', {})
212 self.ui.debug(_("sending %s command\n") % cmd)
216 self.ui.debug(_("sending %s command\n") % cmd)
213 q = {"cmd": cmd}
217 q = {"cmd": cmd}
214 q.update(args)
218 q.update(args)
215 qs = urllib.urlencode(q)
219 qs = urllib.urlencode(q)
216 cu = "%s?%s" % (self.url, qs)
220 cu = "%s?%s" % (self._url, qs)
217 try:
221 try:
218 resp = urllib2.urlopen(urllib2.Request(cu, data, headers))
222 resp = urllib2.urlopen(urllib2.Request(cu, data, headers))
219 except urllib2.HTTPError, inst:
223 except urllib2.HTTPError, inst:
220 if inst.code == 401:
224 if inst.code == 401:
221 raise util.Abort(_('authorization failed'))
225 raise util.Abort(_('authorization failed'))
222 raise
226 raise
223 except httplib.HTTPException, inst:
227 except httplib.HTTPException, inst:
224 self.ui.debug(_('http error while sending %s command\n') % cmd)
228 self.ui.debug(_('http error while sending %s command\n') % cmd)
225 self.ui.print_exc()
229 self.ui.print_exc()
226 raise IOError(None, inst)
230 raise IOError(None, inst)
227 try:
231 try:
228 proto = resp.getheader('content-type')
232 proto = resp.getheader('content-type')
229 except AttributeError:
233 except AttributeError:
230 proto = resp.headers['content-type']
234 proto = resp.headers['content-type']
231
235
232 # accept old "text/plain" and "application/hg-changegroup" for now
236 # accept old "text/plain" and "application/hg-changegroup" for now
233 if not proto.startswith('application/mercurial') and \
237 if not proto.startswith('application/mercurial') and \
234 not proto.startswith('text/plain') and \
238 not proto.startswith('text/plain') and \
235 not proto.startswith('application/hg-changegroup'):
239 not proto.startswith('application/hg-changegroup'):
236 raise hg.RepoError(_("'%s' does not appear to be an hg repository") %
240 raise hg.RepoError(_("'%s' does not appear to be an hg repository") %
237 self.url)
241 self._url)
238
242
239 if proto.startswith('application/mercurial'):
243 if proto.startswith('application/mercurial'):
240 version = proto[22:]
244 version = proto[22:]
241 if float(version) > 0.1:
245 if float(version) > 0.1:
242 raise hg.RepoError(_("'%s' uses newer protocol %s") %
246 raise hg.RepoError(_("'%s' uses newer protocol %s") %
243 (self.url, version))
247 (self._url, version))
244
248
245 return resp
249 return resp
246
250
247 def do_read(self, cmd, **args):
251 def do_read(self, cmd, **args):
248 fp = self.do_cmd(cmd, **args)
252 fp = self.do_cmd(cmd, **args)
249 try:
253 try:
250 return fp.read()
254 return fp.read()
251 finally:
255 finally:
252 # if using keepalive, allow connection to be reused
256 # if using keepalive, allow connection to be reused
253 fp.close()
257 fp.close()
254
258
255 def heads(self):
259 def heads(self):
256 d = self.do_read("heads")
260 d = self.do_read("heads")
257 try:
261 try:
258 return map(bin, d[:-1].split(" "))
262 return map(bin, d[:-1].split(" "))
259 except:
263 except:
260 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
264 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
261 raise
265 raise
262
266
263 def branches(self, nodes):
267 def branches(self, nodes):
264 n = " ".join(map(hex, nodes))
268 n = " ".join(map(hex, nodes))
265 d = self.do_read("branches", nodes=n)
269 d = self.do_read("branches", nodes=n)
266 try:
270 try:
267 br = [ tuple(map(bin, b.split(" "))) for b in d.splitlines() ]
271 br = [ tuple(map(bin, b.split(" "))) for b in d.splitlines() ]
268 return br
272 return br
269 except:
273 except:
270 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
274 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
271 raise
275 raise
272
276
273 def between(self, pairs):
277 def between(self, pairs):
274 n = "\n".join(["-".join(map(hex, p)) for p in pairs])
278 n = "\n".join(["-".join(map(hex, p)) for p in pairs])
275 d = self.do_read("between", pairs=n)
279 d = self.do_read("between", pairs=n)
276 try:
280 try:
277 p = [ l and map(bin, l.split(" ")) or [] for l in d.splitlines() ]
281 p = [ l and map(bin, l.split(" ")) or [] for l in d.splitlines() ]
278 return p
282 return p
279 except:
283 except:
280 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
284 self.ui.warn(_("unexpected response:\n") + d[:400] + "\n...\n")
281 raise
285 raise
282
286
283 def changegroup(self, nodes, kind):
287 def changegroup(self, nodes, kind):
284 n = " ".join(map(hex, nodes))
288 n = " ".join(map(hex, nodes))
285 f = self.do_cmd("changegroup", roots=n)
289 f = self.do_cmd("changegroup", roots=n)
286 bytes = 0
290 bytes = 0
287
291
288 def zgenerator(f):
292 def zgenerator(f):
289 zd = zlib.decompressobj()
293 zd = zlib.decompressobj()
290 try:
294 try:
291 for chnk in f:
295 for chnk in f:
292 yield zd.decompress(chnk)
296 yield zd.decompress(chnk)
293 except httplib.HTTPException, inst:
297 except httplib.HTTPException, inst:
294 raise IOError(None, _('connection ended unexpectedly'))
298 raise IOError(None, _('connection ended unexpectedly'))
295 yield zd.flush()
299 yield zd.flush()
296
300
297 return util.chunkbuffer(zgenerator(util.filechunkiter(f)))
301 return util.chunkbuffer(zgenerator(util.filechunkiter(f)))
298
302
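zgenerator is the client-side counterpart of the compression loop in the hgweb handler: it inflates the response lazily and flushes the decompressor once the stream ends. A self-contained sketch of the same pattern (the function name and argument are made up):

    import zlib

    def decompress_chunks(chunks):
        # lazily inflate an iterable of zlib-compressed chunks
        zd = zlib.decompressobj()
        for chunk in chunks:
            yield zd.decompress(chunk)
        yield zd.flush()  # emit any bytes still buffered in the decompressor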
299 def unbundle(self, cg, heads, source):
303 def unbundle(self, cg, heads, source):
300 # have to stream bundle to a temp file because we do not have
304 # have to stream bundle to a temp file because we do not have
301 # http 1.1 chunked transfer.
305 # http 1.1 chunked transfer.
302
306
303 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
307 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
304 fp = os.fdopen(fd, 'wb+')
308 fp = os.fdopen(fd, 'wb+')
305 try:
309 try:
306 for chunk in util.filechunkiter(cg):
310 for chunk in util.filechunkiter(cg):
307 fp.write(chunk)
311 fp.write(chunk)
308 length = fp.tell()
312 length = fp.tell()
309 try:
313 try:
310 rfp = self.do_cmd(
314 rfp = self.do_cmd(
311 'unbundle', data=fp,
315 'unbundle', data=fp,
312 headers={'content-length': length,
316 headers={'content-length': length,
313 'content-type': 'application/octet-stream'},
317 'content-type': 'application/octet-stream'},
314 heads=' '.join(map(hex, heads)))
318 heads=' '.join(map(hex, heads)))
315 try:
319 try:
316 ret = int(rfp.readline())
320 ret = int(rfp.readline())
317 self.ui.write(rfp.read())
321 self.ui.write(rfp.read())
318 return ret
322 return ret
319 finally:
323 finally:
320 rfp.close()
324 rfp.close()
321 except socket.error, err:
325 except socket.error, err:
322 if err[0] in (errno.ECONNRESET, errno.EPIPE):
326 if err[0] in (errno.ECONNRESET, errno.EPIPE):
323 raise util.Abort(_('push failed: %s'), err[1])
327 raise util.Abort(_('push failed: %s'), err[1])
324 raise util.Abort(err[1])
328 raise util.Abort(err[1])
325 finally:
329 finally:
326 fp.close()
330 fp.close()
327 os.unlink(tempname)
331 os.unlink(tempname)
328
332
329 def stream_out(self):
333 def stream_out(self):
330 return self.do_cmd('stream_out')
334 return self.do_cmd('stream_out')
331
335
332 class httpsrepository(httprepository):
336 class httpsrepository(httprepository):
333 def __init__(self, ui, path):
337 def __init__(self, ui, path):
334 if not has_https:
338 if not has_https:
335 raise util.Abort(_('Python support for SSL and HTTPS '
339 raise util.Abort(_('Python support for SSL and HTTPS '
336 'is not installed'))
340 'is not installed'))
337 httprepository.__init__(self, ui, path)
341 httprepository.__init__(self, ui, path)
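The net effect of the httprepository changes is a split between the URL as the user gave it and the URL used on the wire: url() returns the original path, while self._url has the credentials stripped out for urllib. A sketch of how _url is derived, using the helpers defined earlier in this file (values are made up):

    import urlparse

    path = 'https://alice:secret@hg.example.com:8000/proj'   # hypothetical repository URL
    scheme, netloc, urlpath, query, frag = urlparse.urlsplit(path)
    host, port, user, passwd = netlocsplit(netloc)
    request_url = urlparse.urlunsplit((scheme, netlocunsplit(host, port), urlpath, '', ''))
    # request_url == 'https://hg.example.com:8000/proj'; url() would still return path unchanged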
@@ -1,2269 +1,2273 @@
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from i18n import gettext as _
9 from i18n import gettext as _
10 from demandload import *
10 from demandload import *
11 import repo
11 import repo
12 demandload(globals(), "appendfile changegroup")
12 demandload(globals(), "appendfile changegroup")
13 demandload(globals(), "changelog dirstate filelog manifest context")
13 demandload(globals(), "changelog dirstate filelog manifest context")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 demandload(globals(), "os revlog time util")
15 demandload(globals(), "os revlog time util")
16
16
17 class localrepository(repo.repository):
17 class localrepository(repo.repository):
18 capabilities = ()
18 capabilities = ()
19
19
20 def __del__(self):
20 def __del__(self):
21 self.transhandle = None
21 self.transhandle = None
22 def __init__(self, parentui, path=None, create=0):
22 def __init__(self, parentui, path=None, create=0):
23 repo.repository.__init__(self)
23 repo.repository.__init__(self)
24 if not path:
24 if not path:
25 p = os.getcwd()
25 p = os.getcwd()
26 while not os.path.isdir(os.path.join(p, ".hg")):
26 while not os.path.isdir(os.path.join(p, ".hg")):
27 oldp = p
27 oldp = p
28 p = os.path.dirname(p)
28 p = os.path.dirname(p)
29 if p == oldp:
29 if p == oldp:
30 raise repo.RepoError(_("no repo found"))
30 raise repo.RepoError(_("no repo found"))
31 path = p
31 path = p
32 self.path = os.path.join(path, ".hg")
32 self.path = os.path.join(path, ".hg")
33
33
34 if not create and not os.path.isdir(self.path):
34 if not create and not os.path.isdir(self.path):
35 raise repo.RepoError(_("repository %s not found") % path)
35 raise repo.RepoError(_("repository %s not found") % path)
36
36
37 self.root = os.path.abspath(path)
37 self.root = os.path.abspath(path)
38 self.origroot = path
38 self.origroot = path
39 self.ui = ui.ui(parentui=parentui)
39 self.ui = ui.ui(parentui=parentui)
40 self.opener = util.opener(self.path)
40 self.opener = util.opener(self.path)
41 self.wopener = util.opener(self.root)
41 self.wopener = util.opener(self.root)
42
42
43 try:
43 try:
44 self.ui.readconfig(self.join("hgrc"), self.root)
44 self.ui.readconfig(self.join("hgrc"), self.root)
45 except IOError:
45 except IOError:
46 pass
46 pass
47
47
48 v = self.ui.revlogopts
48 v = self.ui.revlogopts
49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
51 fl = v.get('flags', None)
51 fl = v.get('flags', None)
52 flags = 0
52 flags = 0
53 if fl != None:
53 if fl != None:
54 for x in fl.split():
54 for x in fl.split():
55 flags |= revlog.flagstr(x)
55 flags |= revlog.flagstr(x)
56 elif self.revlogv1:
56 elif self.revlogv1:
57 flags = revlog.REVLOG_DEFAULT_FLAGS
57 flags = revlog.REVLOG_DEFAULT_FLAGS
58
58
59 v = self.revlogversion | flags
59 v = self.revlogversion | flags
60 self.manifest = manifest.manifest(self.opener, v)
60 self.manifest = manifest.manifest(self.opener, v)
61 self.changelog = changelog.changelog(self.opener, v)
61 self.changelog = changelog.changelog(self.opener, v)
62
62
63 # the changelog might not have the inline index flag
63 # the changelog might not have the inline index flag
64 # on. If the format of the changelog is the same as found in
64 # on. If the format of the changelog is the same as found in
65 # .hgrc, apply any flags found in the .hgrc as well.
65 # .hgrc, apply any flags found in the .hgrc as well.
66 # Otherwise, just version from the changelog
66 # Otherwise, just version from the changelog
67 v = self.changelog.version
67 v = self.changelog.version
68 if v == self.revlogversion:
68 if v == self.revlogversion:
69 v |= flags
69 v |= flags
70 self.revlogversion = v
70 self.revlogversion = v
71
71
72 self.tagscache = None
72 self.tagscache = None
73 self.nodetagscache = None
73 self.nodetagscache = None
74 self.encodepats = None
74 self.encodepats = None
75 self.decodepats = None
75 self.decodepats = None
76 self.transhandle = None
76 self.transhandle = None
77
77
78 if create:
78 if create:
79 if not os.path.exists(path):
79 if not os.path.exists(path):
80 os.mkdir(path)
80 os.mkdir(path)
81 os.mkdir(self.path)
81 os.mkdir(self.path)
82 os.mkdir(self.join("data"))
82 os.mkdir(self.join("data"))
83
83
84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
85
85
86 def url(self):
87 return 'file:' + self.root
88
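localrepository gains the same url() accessor as the remote repository classes; for a local repository it is simply the absolute root prefixed with 'file:'. For example (the path is hypothetical):

    import os

    root = os.path.abspath('/srv/repos/proj')   # what self.root holds for that repository
    url = 'file:' + root                        # -> 'file:/srv/repos/proj'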
86 def hook(self, name, throw=False, **args):
89 def hook(self, name, throw=False, **args):
87 def callhook(hname, funcname):
90 def callhook(hname, funcname):
88 '''call python hook. hook is callable object, looked up as
91 '''call python hook. hook is callable object, looked up as
89 name in python module. if callable returns "true", hook
92 name in python module. if callable returns "true", hook
90 fails, else passes. if hook raises exception, treated as
93 fails, else passes. if hook raises exception, treated as
91 hook failure. exception propagates if throw is "true".
94 hook failure. exception propagates if throw is "true".
92
95
93 reason for "true" meaning "hook failed" is so that
96 reason for "true" meaning "hook failed" is so that
94 unmodified commands (e.g. mercurial.commands.update) can
97 unmodified commands (e.g. mercurial.commands.update) can
95 be run as hooks without wrappers to convert return values.'''
98 be run as hooks without wrappers to convert return values.'''
96
99
97 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
100 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
98 d = funcname.rfind('.')
101 d = funcname.rfind('.')
99 if d == -1:
102 if d == -1:
100 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
103 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
101 % (hname, funcname))
104 % (hname, funcname))
102 modname = funcname[:d]
105 modname = funcname[:d]
103 try:
106 try:
104 obj = __import__(modname)
107 obj = __import__(modname)
105 except ImportError:
108 except ImportError:
106 try:
109 try:
107 # extensions are loaded with hgext_ prefix
110 # extensions are loaded with hgext_ prefix
108 obj = __import__("hgext_%s" % modname)
111 obj = __import__("hgext_%s" % modname)
109 except ImportError:
112 except ImportError:
110 raise util.Abort(_('%s hook is invalid '
113 raise util.Abort(_('%s hook is invalid '
111 '(import of "%s" failed)') %
114 '(import of "%s" failed)') %
112 (hname, modname))
115 (hname, modname))
113 try:
116 try:
114 for p in funcname.split('.')[1:]:
117 for p in funcname.split('.')[1:]:
115 obj = getattr(obj, p)
118 obj = getattr(obj, p)
116 except AttributeError, err:
119 except AttributeError, err:
117 raise util.Abort(_('%s hook is invalid '
120 raise util.Abort(_('%s hook is invalid '
118 '("%s" is not defined)') %
121 '("%s" is not defined)') %
119 (hname, funcname))
122 (hname, funcname))
120 if not callable(obj):
123 if not callable(obj):
121 raise util.Abort(_('%s hook is invalid '
124 raise util.Abort(_('%s hook is invalid '
122 '("%s" is not callable)') %
125 '("%s" is not callable)') %
123 (hname, funcname))
126 (hname, funcname))
124 try:
127 try:
125 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
128 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
126 except (KeyboardInterrupt, util.SignalInterrupt):
129 except (KeyboardInterrupt, util.SignalInterrupt):
127 raise
130 raise
128 except Exception, exc:
131 except Exception, exc:
129 if isinstance(exc, util.Abort):
132 if isinstance(exc, util.Abort):
130 self.ui.warn(_('error: %s hook failed: %s\n') %
133 self.ui.warn(_('error: %s hook failed: %s\n') %
131 (hname, exc.args[0] % exc.args[1:]))
134 (hname, exc.args[0] % exc.args[1:]))
132 else:
135 else:
133 self.ui.warn(_('error: %s hook raised an exception: '
136 self.ui.warn(_('error: %s hook raised an exception: '
134 '%s\n') % (hname, exc))
137 '%s\n') % (hname, exc))
135 if throw:
138 if throw:
136 raise
139 raise
137 self.ui.print_exc()
140 self.ui.print_exc()
138 return True
141 return True
139 if r:
142 if r:
140 if throw:
143 if throw:
141 raise util.Abort(_('%s hook failed') % hname)
144 raise util.Abort(_('%s hook failed') % hname)
142 self.ui.warn(_('warning: %s hook failed\n') % hname)
145 self.ui.warn(_('warning: %s hook failed\n') % hname)
143 return r
146 return r
144
147
145 def runhook(name, cmd):
148 def runhook(name, cmd):
146 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
149 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
147 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
150 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
148 r = util.system(cmd, environ=env, cwd=self.root)
151 r = util.system(cmd, environ=env, cwd=self.root)
149 if r:
152 if r:
150 desc, r = util.explain_exit(r)
153 desc, r = util.explain_exit(r)
151 if throw:
154 if throw:
152 raise util.Abort(_('%s hook %s') % (name, desc))
155 raise util.Abort(_('%s hook %s') % (name, desc))
153 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
156 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
154 return r
157 return r
155
158
156 r = False
159 r = False
157 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
160 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
158 if hname.split(".", 1)[0] == name and cmd]
161 if hname.split(".", 1)[0] == name and cmd]
159 hooks.sort()
162 hooks.sort()
160 for hname, cmd in hooks:
163 for hname, cmd in hooks:
161 if cmd.startswith('python:'):
164 if cmd.startswith('python:'):
162 r = callhook(hname, cmd[7:].strip()) or r
165 r = callhook(hname, cmd[7:].strip()) or r
163 else:
166 else:
164 r = runhook(hname, cmd) or r
167 r = runhook(hname, cmd) or r
165 return r
168 return r
166
169
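As a usage note for the hook dispatch above: entries under [hooks] in hgrc whose command starts with 'python:' are imported and called with ui, repo, hooktype and the caller's keyword arguments, while plain shell commands receive the same arguments as upper-cased HG_* environment variables. A minimal python hook might look like this (module and hook names are made up; it would be wired up as, e.g., changegroup = python:mymod.report):

    # mymod.py -- hypothetical module reachable on the python path
    def report(ui, repo, hooktype, **kwargs):
        # log the hook type and whatever extra arguments the caller supplied
        ui.status("%s hook: %s\n" % (hooktype, ", ".join(sorted(kwargs))))
        return False   # a true return value would mark the hook as failed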
167 tag_disallowed = ':\r\n'
170 tag_disallowed = ':\r\n'
168
171
169 def tag(self, name, node, local=False, message=None, user=None, date=None):
172 def tag(self, name, node, local=False, message=None, user=None, date=None):
170 '''tag a revision with a symbolic name.
173 '''tag a revision with a symbolic name.
171
174
172 if local is True, the tag is stored in a per-repository file.
175 if local is True, the tag is stored in a per-repository file.
173 otherwise, it is stored in the .hgtags file, and a new
176 otherwise, it is stored in the .hgtags file, and a new
174 changeset is committed with the change.
177 changeset is committed with the change.
175
178
176 keyword arguments:
179 keyword arguments:
177
180
178 local: whether to store tag in non-version-controlled file
181 local: whether to store tag in non-version-controlled file
179 (default False)
182 (default False)
180
183
181 message: commit message to use if committing
184 message: commit message to use if committing
182
185
183 user: name of user to use if committing
186 user: name of user to use if committing
184
187
185 date: date tuple to use if committing'''
188 date: date tuple to use if committing'''
186
189
187 for c in self.tag_disallowed:
190 for c in self.tag_disallowed:
188 if c in name:
191 if c in name:
189 raise util.Abort(_('%r cannot be used in a tag name') % c)
192 raise util.Abort(_('%r cannot be used in a tag name') % c)
190
193
191 self.hook('pretag', throw=True, node=node, tag=name, local=local)
194 self.hook('pretag', throw=True, node=node, tag=name, local=local)
192
195
193 if local:
196 if local:
194 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
197 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
195 self.hook('tag', node=node, tag=name, local=local)
198 self.hook('tag', node=node, tag=name, local=local)
196 return
199 return
197
200
198 for x in self.changes():
201 for x in self.changes():
199 if '.hgtags' in x:
202 if '.hgtags' in x:
200 raise util.Abort(_('working copy of .hgtags is changed '
203 raise util.Abort(_('working copy of .hgtags is changed '
201 '(please commit .hgtags manually)'))
204 '(please commit .hgtags manually)'))
202
205
203 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
206 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
204 if self.dirstate.state('.hgtags') == '?':
207 if self.dirstate.state('.hgtags') == '?':
205 self.add(['.hgtags'])
208 self.add(['.hgtags'])
206
209
207 if not message:
210 if not message:
208 message = _('Added tag %s for changeset %s') % (name, node)
211 message = _('Added tag %s for changeset %s') % (name, node)
209
212
210 self.commit(['.hgtags'], message, user, date)
213 self.commit(['.hgtags'], message, user, date)
211 self.hook('tag', node=node, tag=name, local=local)
214 self.hook('tag', node=node, tag=name, local=local)
212
215
213 def tags(self):
216 def tags(self):
214 '''return a mapping of tag to node'''
217 '''return a mapping of tag to node'''
215 if not self.tagscache:
218 if not self.tagscache:
216 self.tagscache = {}
219 self.tagscache = {}
217
220
218 def parsetag(line, context):
221 def parsetag(line, context):
219 if not line:
222 if not line:
220 return
223 return
221 s = l.split(" ", 1)
224 s = l.split(" ", 1)
222 if len(s) != 2:
225 if len(s) != 2:
223 self.ui.warn(_("%s: cannot parse entry\n") % context)
226 self.ui.warn(_("%s: cannot parse entry\n") % context)
224 return
227 return
225 node, key = s
228 node, key = s
226 key = key.strip()
229 key = key.strip()
227 try:
230 try:
228 bin_n = bin(node)
231 bin_n = bin(node)
229 except TypeError:
232 except TypeError:
230 self.ui.warn(_("%s: node '%s' is not well formed\n") %
233 self.ui.warn(_("%s: node '%s' is not well formed\n") %
231 (context, node))
234 (context, node))
232 return
235 return
233 if bin_n not in self.changelog.nodemap:
236 if bin_n not in self.changelog.nodemap:
234 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
237 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
235 (context, key))
238 (context, key))
236 return
239 return
237 self.tagscache[key] = bin_n
240 self.tagscache[key] = bin_n
238
241
239 # read the tags file from each head, ending with the tip,
242 # read the tags file from each head, ending with the tip,
240 # and add each tag found to the map, with "newer" ones
243 # and add each tag found to the map, with "newer" ones
241 # taking precedence
244 # taking precedence
242 heads = self.heads()
245 heads = self.heads()
243 heads.reverse()
246 heads.reverse()
244 fl = self.file(".hgtags")
247 fl = self.file(".hgtags")
245 for node in heads:
248 for node in heads:
246 change = self.changelog.read(node)
249 change = self.changelog.read(node)
247 rev = self.changelog.rev(node)
250 rev = self.changelog.rev(node)
248 fn, ff = self.manifest.find(change[0], '.hgtags')
251 fn, ff = self.manifest.find(change[0], '.hgtags')
249 if fn is None: continue
252 if fn is None: continue
250 count = 0
253 count = 0
251 for l in fl.read(fn).splitlines():
254 for l in fl.read(fn).splitlines():
252 count += 1
255 count += 1
253 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
256 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
254 (rev, short(node), count))
257 (rev, short(node), count))
255 try:
258 try:
256 f = self.opener("localtags")
259 f = self.opener("localtags")
257 count = 0
260 count = 0
258 for l in f:
261 for l in f:
259 count += 1
262 count += 1
260 parsetag(l, _("localtags, line %d") % count)
263 parsetag(l, _("localtags, line %d") % count)
261 except IOError:
264 except IOError:
262 pass
265 pass
263
266
264 self.tagscache['tip'] = self.changelog.tip()
267 self.tagscache['tip'] = self.changelog.tip()
265
268
266 return self.tagscache
269 return self.tagscache
267
270
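Each entry parsed above, whether read from .hgtags on a head or from the uncommitted localtags file, is a single line of the form '&lt;hex node&gt; &lt;tag name&gt;'. A tiny example of the parse performed by parsetag (node and tag are made up):

    line = "0123456789abcdef0123456789abcdef01234567 v1.0\n"
    node, key = line.split(" ", 1)
    key = key.strip()        # drop the trailing newline, keep the tag name 'v1.0'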
268 def tagslist(self):
271 def tagslist(self):
269 '''return a list of tags ordered by revision'''
272 '''return a list of tags ordered by revision'''
270 l = []
273 l = []
271 for t, n in self.tags().items():
274 for t, n in self.tags().items():
272 try:
275 try:
273 r = self.changelog.rev(n)
276 r = self.changelog.rev(n)
274 except:
277 except:
275 r = -2 # sort to the beginning of the list if unknown
278 r = -2 # sort to the beginning of the list if unknown
276 l.append((r, t, n))
279 l.append((r, t, n))
277 l.sort()
280 l.sort()
278 return [(t, n) for r, t, n in l]
281 return [(t, n) for r, t, n in l]
279
282
280 def nodetags(self, node):
283 def nodetags(self, node):
281 '''return the tags associated with a node'''
284 '''return the tags associated with a node'''
282 if not self.nodetagscache:
285 if not self.nodetagscache:
283 self.nodetagscache = {}
286 self.nodetagscache = {}
284 for t, n in self.tags().items():
287 for t, n in self.tags().items():
285 self.nodetagscache.setdefault(n, []).append(t)
288 self.nodetagscache.setdefault(n, []).append(t)
286 return self.nodetagscache.get(node, [])
289 return self.nodetagscache.get(node, [])
287
290
288 def lookup(self, key):
291 def lookup(self, key):
289 try:
292 try:
290 return self.tags()[key]
293 return self.tags()[key]
291 except KeyError:
294 except KeyError:
292 try:
295 try:
293 return self.changelog.lookup(key)
296 return self.changelog.lookup(key)
294 except:
297 except:
295 raise repo.RepoError(_("unknown revision '%s'") % key)
298 raise repo.RepoError(_("unknown revision '%s'") % key)
296
299
297 def dev(self):
300 def dev(self):
298 return os.lstat(self.path).st_dev
301 return os.lstat(self.path).st_dev
299
302
300 def local(self):
303 def local(self):
301 return True
304 return True
302
305
303 def join(self, f):
306 def join(self, f):
304 return os.path.join(self.path, f)
307 return os.path.join(self.path, f)
305
308
306 def wjoin(self, f):
309 def wjoin(self, f):
307 return os.path.join(self.root, f)
310 return os.path.join(self.root, f)
308
311
309 def file(self, f):
312 def file(self, f):
310 if f[0] == '/':
313 if f[0] == '/':
311 f = f[1:]
314 f = f[1:]
312 return filelog.filelog(self.opener, f, self.revlogversion)
315 return filelog.filelog(self.opener, f, self.revlogversion)
313
316
314 def changectx(self, changeid):
317 def changectx(self, changeid):
315 return context.changectx(self, changeid)
318 return context.changectx(self, changeid)
316
319
317 def filectx(self, path, changeid=None, fileid=None):
320 def filectx(self, path, changeid=None, fileid=None):
318 """changeid can be a changeset revision, node, or tag.
321 """changeid can be a changeset revision, node, or tag.
319 fileid can be a file revision or node."""
322 fileid can be a file revision or node."""
320 return context.filectx(self, path, changeid, fileid)
323 return context.filectx(self, path, changeid, fileid)
321
324
322 def getcwd(self):
325 def getcwd(self):
323 return self.dirstate.getcwd()
326 return self.dirstate.getcwd()
324
327
325 def wfile(self, f, mode='r'):
328 def wfile(self, f, mode='r'):
326 return self.wopener(f, mode)
329 return self.wopener(f, mode)
327
330
328 def wread(self, filename):
331 def wread(self, filename):
329 if self.encodepats == None:
332 if self.encodepats == None:
330 l = []
333 l = []
331 for pat, cmd in self.ui.configitems("encode"):
334 for pat, cmd in self.ui.configitems("encode"):
332 mf = util.matcher(self.root, "", [pat], [], [])[1]
335 mf = util.matcher(self.root, "", [pat], [], [])[1]
333 l.append((mf, cmd))
336 l.append((mf, cmd))
334 self.encodepats = l
337 self.encodepats = l
335
338
336 data = self.wopener(filename, 'r').read()
339 data = self.wopener(filename, 'r').read()
337
340
338 for mf, cmd in self.encodepats:
341 for mf, cmd in self.encodepats:
339 if mf(filename):
342 if mf(filename):
340 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
343 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
341 data = util.filter(data, cmd)
344 data = util.filter(data, cmd)
342 break
345 break
343
346
344 return data
347 return data
345
348
346 def wwrite(self, filename, data, fd=None):
349 def wwrite(self, filename, data, fd=None):
347 if self.decodepats == None:
350 if self.decodepats == None:
348 l = []
351 l = []
349 for pat, cmd in self.ui.configitems("decode"):
352 for pat, cmd in self.ui.configitems("decode"):
350 mf = util.matcher(self.root, "", [pat], [], [])[1]
353 mf = util.matcher(self.root, "", [pat], [], [])[1]
351 l.append((mf, cmd))
354 l.append((mf, cmd))
352 self.decodepats = l
355 self.decodepats = l
353
356
354 for mf, cmd in self.decodepats:
357 for mf, cmd in self.decodepats:
355 if mf(filename):
358 if mf(filename):
356 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
359 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
357 data = util.filter(data, cmd)
360 data = util.filter(data, cmd)
358 break
361 break
359
362
360 if fd:
363 if fd:
361 return fd.write(data)
364 return fd.write(data)
362 return self.wopener(filename, 'w').write(data)
365 return self.wopener(filename, 'w').write(data)
363
366
364 def transaction(self):
367 def transaction(self):
365 tr = self.transhandle
368 tr = self.transhandle
366 if tr != None and tr.running():
369 if tr != None and tr.running():
367 return tr.nest()
370 return tr.nest()
368
371
369 # save dirstate for rollback
372 # save dirstate for rollback
370 try:
373 try:
371 ds = self.opener("dirstate").read()
374 ds = self.opener("dirstate").read()
372 except IOError:
375 except IOError:
373 ds = ""
376 ds = ""
374 self.opener("journal.dirstate", "w").write(ds)
377 self.opener("journal.dirstate", "w").write(ds)
375
378
376 tr = transaction.transaction(self.ui.warn, self.opener,
379 tr = transaction.transaction(self.ui.warn, self.opener,
377 self.join("journal"),
380 self.join("journal"),
378 aftertrans(self.path))
381 aftertrans(self.path))
379 self.transhandle = tr
382 self.transhandle = tr
380 return tr
383 return tr
381
384
382 def recover(self):
385 def recover(self):
383 l = self.lock()
386 l = self.lock()
384 if os.path.exists(self.join("journal")):
387 if os.path.exists(self.join("journal")):
385 self.ui.status(_("rolling back interrupted transaction\n"))
388 self.ui.status(_("rolling back interrupted transaction\n"))
386 transaction.rollback(self.opener, self.join("journal"))
389 transaction.rollback(self.opener, self.join("journal"))
387 self.reload()
390 self.reload()
388 return True
391 return True
389 else:
392 else:
390 self.ui.warn(_("no interrupted transaction available\n"))
393 self.ui.warn(_("no interrupted transaction available\n"))
391 return False
394 return False
392
395
393 def rollback(self, wlock=None):
396 def rollback(self, wlock=None):
394 if not wlock:
397 if not wlock:
395 wlock = self.wlock()
398 wlock = self.wlock()
396 l = self.lock()
399 l = self.lock()
397 if os.path.exists(self.join("undo")):
400 if os.path.exists(self.join("undo")):
398 self.ui.status(_("rolling back last transaction\n"))
401 self.ui.status(_("rolling back last transaction\n"))
399 transaction.rollback(self.opener, self.join("undo"))
402 transaction.rollback(self.opener, self.join("undo"))
400 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
403 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
401 self.reload()
404 self.reload()
402 self.wreload()
405 self.wreload()
403 else:
406 else:
404 self.ui.warn(_("no rollback information available\n"))
407 self.ui.warn(_("no rollback information available\n"))
405
408
406 def wreload(self):
409 def wreload(self):
407 self.dirstate.read()
410 self.dirstate.read()
408
411
409 def reload(self):
412 def reload(self):
410 self.changelog.load()
413 self.changelog.load()
411 self.manifest.load()
414 self.manifest.load()
412 self.tagscache = None
415 self.tagscache = None
413 self.nodetagscache = None
416 self.nodetagscache = None
414
417
415 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
418 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
416 desc=None):
419 desc=None):
417 try:
420 try:
418 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
421 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
419 except lock.LockHeld, inst:
422 except lock.LockHeld, inst:
420 if not wait:
423 if not wait:
421 raise
424 raise
422 self.ui.warn(_("waiting for lock on %s held by %s\n") %
425 self.ui.warn(_("waiting for lock on %s held by %s\n") %
423 (desc, inst.args[0]))
426 (desc, inst.args[0]))
424 # default to 600 seconds timeout
427 # default to 600 seconds timeout
425 l = lock.lock(self.join(lockname),
428 l = lock.lock(self.join(lockname),
426 int(self.ui.config("ui", "timeout") or 600),
429 int(self.ui.config("ui", "timeout") or 600),
427 releasefn, desc=desc)
430 releasefn, desc=desc)
428 if acquirefn:
431 if acquirefn:
429 acquirefn()
432 acquirefn()
430 return l
433 return l
431
434
432 def lock(self, wait=1):
435 def lock(self, wait=1):
433 return self.do_lock("lock", wait, acquirefn=self.reload,
436 return self.do_lock("lock", wait, acquirefn=self.reload,
434 desc=_('repository %s') % self.origroot)
437 desc=_('repository %s') % self.origroot)
435
438
436 def wlock(self, wait=1):
439 def wlock(self, wait=1):
437 return self.do_lock("wlock", wait, self.dirstate.write,
440 return self.do_lock("wlock", wait, self.dirstate.write,
438 self.wreload,
441 self.wreload,
439 desc=_('working directory of %s') % self.origroot)
442 desc=_('working directory of %s') % self.origroot)
440
443
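Callers of lock() and wlock() follow the usual acquire/try/finally pattern; do_lock blocks for up to the configured ui timeout (600 seconds by default) when another process holds the lock. A sketch, assuming repo is an open localrepository:

    l = repo.lock()     # waits up to the configured timeout if the lock is held elsewhere
    try:
        pass            # ... modify changelog, manifest, filelogs here ...
    finally:
        l.release()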
441 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
444 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
442 "determine whether a new filenode is needed"
445 "determine whether a new filenode is needed"
443 fp1 = manifest1.get(filename, nullid)
446 fp1 = manifest1.get(filename, nullid)
444 fp2 = manifest2.get(filename, nullid)
447 fp2 = manifest2.get(filename, nullid)
445
448
446 if fp2 != nullid:
449 if fp2 != nullid:
447 # is one parent an ancestor of the other?
450 # is one parent an ancestor of the other?
448 fpa = filelog.ancestor(fp1, fp2)
451 fpa = filelog.ancestor(fp1, fp2)
449 if fpa == fp1:
452 if fpa == fp1:
450 fp1, fp2 = fp2, nullid
453 fp1, fp2 = fp2, nullid
451 elif fpa == fp2:
454 elif fpa == fp2:
452 fp2 = nullid
455 fp2 = nullid
453
456
454 # is the file unmodified from the parent? report existing entry
457 # is the file unmodified from the parent? report existing entry
455 if fp2 == nullid and text == filelog.read(fp1):
458 if fp2 == nullid and text == filelog.read(fp1):
456 return (fp1, None, None)
459 return (fp1, None, None)
457
460
458 return (None, fp1, fp2)
461 return (None, fp1, fp2)
459
462
460 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
463 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
461 orig_parent = self.dirstate.parents()[0] or nullid
464 orig_parent = self.dirstate.parents()[0] or nullid
462 p1 = p1 or self.dirstate.parents()[0] or nullid
465 p1 = p1 or self.dirstate.parents()[0] or nullid
463 p2 = p2 or self.dirstate.parents()[1] or nullid
466 p2 = p2 or self.dirstate.parents()[1] or nullid
464 c1 = self.changelog.read(p1)
467 c1 = self.changelog.read(p1)
465 c2 = self.changelog.read(p2)
468 c2 = self.changelog.read(p2)
466 m1 = self.manifest.read(c1[0])
469 m1 = self.manifest.read(c1[0])
467 mf1 = self.manifest.readflags(c1[0])
470 mf1 = self.manifest.readflags(c1[0])
468 m2 = self.manifest.read(c2[0])
471 m2 = self.manifest.read(c2[0])
469 changed = []
472 changed = []
470
473
471 if orig_parent == p1:
474 if orig_parent == p1:
472 update_dirstate = 1
475 update_dirstate = 1
473 else:
476 else:
474 update_dirstate = 0
477 update_dirstate = 0
475
478
476 if not wlock:
479 if not wlock:
477 wlock = self.wlock()
480 wlock = self.wlock()
478 l = self.lock()
481 l = self.lock()
479 tr = self.transaction()
482 tr = self.transaction()
480 mm = m1.copy()
483 mm = m1.copy()
481 mfm = mf1.copy()
484 mfm = mf1.copy()
482 linkrev = self.changelog.count()
485 linkrev = self.changelog.count()
483 for f in files:
486 for f in files:
484 try:
487 try:
485 t = self.wread(f)
488 t = self.wread(f)
486 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
489 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
487 r = self.file(f)
490 r = self.file(f)
488 mfm[f] = tm
491 mfm[f] = tm
489
492
490 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
493 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
491 if entry:
494 if entry:
492 mm[f] = entry
495 mm[f] = entry
493 continue
496 continue
494
497
495 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
498 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
496 changed.append(f)
499 changed.append(f)
497 if update_dirstate:
500 if update_dirstate:
498 self.dirstate.update([f], "n")
501 self.dirstate.update([f], "n")
499 except IOError:
502 except IOError:
500 try:
503 try:
501 del mm[f]
504 del mm[f]
502 del mfm[f]
505 del mfm[f]
503 if update_dirstate:
506 if update_dirstate:
504 self.dirstate.forget([f])
507 self.dirstate.forget([f])
505 except:
508 except:
506 # deleted from p2?
509 # deleted from p2?
507 pass
510 pass
508
511
509 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
512 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
510 user = user or self.ui.username()
513 user = user or self.ui.username()
511 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
514 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
512 tr.close()
515 tr.close()
513 if update_dirstate:
516 if update_dirstate:
514 self.dirstate.setparents(n, nullid)
517 self.dirstate.setparents(n, nullid)
515
518
516 def commit(self, files=None, text="", user=None, date=None,
519 def commit(self, files=None, text="", user=None, date=None,
517 match=util.always, force=False, lock=None, wlock=None,
520 match=util.always, force=False, lock=None, wlock=None,
518 force_editor=False):
521 force_editor=False):
519 commit = []
522 commit = []
520 remove = []
523 remove = []
521 changed = []
524 changed = []
522
525
523 if files:
526 if files:
524 for f in files:
527 for f in files:
525 s = self.dirstate.state(f)
528 s = self.dirstate.state(f)
526 if s in 'nmai':
529 if s in 'nmai':
527 commit.append(f)
530 commit.append(f)
528 elif s == 'r':
531 elif s == 'r':
529 remove.append(f)
532 remove.append(f)
530 else:
533 else:
531 self.ui.warn(_("%s not tracked!\n") % f)
534 self.ui.warn(_("%s not tracked!\n") % f)
532 else:
535 else:
533 modified, added, removed, deleted, unknown = self.changes(match=match)
536 modified, added, removed, deleted, unknown = self.changes(match=match)
534 commit = modified + added
537 commit = modified + added
535 remove = removed
538 remove = removed
536
539
537 p1, p2 = self.dirstate.parents()
540 p1, p2 = self.dirstate.parents()
538 c1 = self.changelog.read(p1)
541 c1 = self.changelog.read(p1)
539 c2 = self.changelog.read(p2)
542 c2 = self.changelog.read(p2)
540 m1 = self.manifest.read(c1[0])
543 m1 = self.manifest.read(c1[0])
541 mf1 = self.manifest.readflags(c1[0])
544 mf1 = self.manifest.readflags(c1[0])
542 m2 = self.manifest.read(c2[0])
545 m2 = self.manifest.read(c2[0])
543
546
544 if not commit and not remove and not force and p2 == nullid:
547 if not commit and not remove and not force and p2 == nullid:
545 self.ui.status(_("nothing changed\n"))
548 self.ui.status(_("nothing changed\n"))
546 return None
549 return None
547
550
548 xp1 = hex(p1)
551 xp1 = hex(p1)
549 if p2 == nullid: xp2 = ''
552 if p2 == nullid: xp2 = ''
550 else: xp2 = hex(p2)
553 else: xp2 = hex(p2)
551
554
552 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
555 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
553
556
554 if not wlock:
557 if not wlock:
555 wlock = self.wlock()
558 wlock = self.wlock()
556 if not lock:
559 if not lock:
557 lock = self.lock()
560 lock = self.lock()
558 tr = self.transaction()
561 tr = self.transaction()
559
562
560 # check in files
563 # check in files
561 new = {}
564 new = {}
562 linkrev = self.changelog.count()
565 linkrev = self.changelog.count()
563 commit.sort()
566 commit.sort()
564 for f in commit:
567 for f in commit:
565 self.ui.note(f + "\n")
568 self.ui.note(f + "\n")
566 try:
569 try:
567 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
570 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
568 t = self.wread(f)
571 t = self.wread(f)
569 except IOError:
572 except IOError:
570 self.ui.warn(_("trouble committing %s!\n") % f)
573 self.ui.warn(_("trouble committing %s!\n") % f)
571 raise
574 raise
572
575
573 r = self.file(f)
576 r = self.file(f)
574
577
575 meta = {}
578 meta = {}
576 cp = self.dirstate.copied(f)
579 cp = self.dirstate.copied(f)
577 if cp:
580 if cp:
578 meta["copy"] = cp
581 meta["copy"] = cp
579 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
582 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
580 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
583 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
581 fp1, fp2 = nullid, nullid
584 fp1, fp2 = nullid, nullid
582 else:
585 else:
583 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
586 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
584 if entry:
587 if entry:
585 new[f] = entry
588 new[f] = entry
586 continue
589 continue
587
590
588 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
591 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
589 # remember what we've added so that we can later calculate
592 # remember what we've added so that we can later calculate
590 # the files to pull from a set of changesets
593 # the files to pull from a set of changesets
591 changed.append(f)
594 changed.append(f)
592
595
593 # update manifest
596 # update manifest
594 m1 = m1.copy()
597 m1 = m1.copy()
595 m1.update(new)
598 m1.update(new)
596 for f in remove:
599 for f in remove:
597 if f in m1:
600 if f in m1:
598 del m1[f]
601 del m1[f]
599 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
602 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
600 (new, remove))
603 (new, remove))
601
604
602 # add changeset
605 # add changeset
603 new = new.keys()
606 new = new.keys()
604 new.sort()
607 new.sort()
605
608
606 user = user or self.ui.username()
609 user = user or self.ui.username()
607 if not text or force_editor:
610 if not text or force_editor:
608 edittext = []
611 edittext = []
609 if text:
612 if text:
610 edittext.append(text)
613 edittext.append(text)
611 edittext.append("")
614 edittext.append("")
612 if p2 != nullid:
615 if p2 != nullid:
613 edittext.append("HG: branch merge")
616 edittext.append("HG: branch merge")
614 edittext.extend(["HG: changed %s" % f for f in changed])
617 edittext.extend(["HG: changed %s" % f for f in changed])
615 edittext.extend(["HG: removed %s" % f for f in remove])
618 edittext.extend(["HG: removed %s" % f for f in remove])
616 if not changed and not remove:
619 if not changed and not remove:
617 edittext.append("HG: no files changed")
620 edittext.append("HG: no files changed")
618 edittext.append("")
621 edittext.append("")
619 # run editor in the repository root
622 # run editor in the repository root
620 olddir = os.getcwd()
623 olddir = os.getcwd()
621 os.chdir(self.root)
624 os.chdir(self.root)
622 text = self.ui.edit("\n".join(edittext), user)
625 text = self.ui.edit("\n".join(edittext), user)
623 os.chdir(olddir)
626 os.chdir(olddir)
624
627
625 lines = [line.rstrip() for line in text.rstrip().splitlines()]
628 lines = [line.rstrip() for line in text.rstrip().splitlines()]
626 while lines and not lines[0]:
629 while lines and not lines[0]:
627 del lines[0]
630 del lines[0]
628 if not lines:
631 if not lines:
629 return None
632 return None
630 text = '\n'.join(lines)
633 text = '\n'.join(lines)
631 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
634 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
632 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
635 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
633 parent2=xp2)
636 parent2=xp2)
634 tr.close()
637 tr.close()
635
638
636 self.dirstate.setparents(n)
639 self.dirstate.setparents(n)
637 self.dirstate.update(new, "n")
640 self.dirstate.update(new, "n")
638 self.dirstate.forget(remove)
641 self.dirstate.forget(remove)
639
642
640 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
643 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
641 return n
644 return n
642
645
643 def walk(self, node=None, files=[], match=util.always, badmatch=None):
646 def walk(self, node=None, files=[], match=util.always, badmatch=None):
644 if node:
647 if node:
645 fdict = dict.fromkeys(files)
648 fdict = dict.fromkeys(files)
646 for fn in self.manifest.read(self.changelog.read(node)[0]):
649 for fn in self.manifest.read(self.changelog.read(node)[0]):
647 fdict.pop(fn, None)
650 fdict.pop(fn, None)
648 if match(fn):
651 if match(fn):
649 yield 'm', fn
652 yield 'm', fn
650 for fn in fdict:
653 for fn in fdict:
651 if badmatch and badmatch(fn):
654 if badmatch and badmatch(fn):
652 if match(fn):
655 if match(fn):
653 yield 'b', fn
656 yield 'b', fn
654 else:
657 else:
655 self.ui.warn(_('%s: No such file in rev %s\n') % (
658 self.ui.warn(_('%s: No such file in rev %s\n') % (
656 util.pathto(self.getcwd(), fn), short(node)))
659 util.pathto(self.getcwd(), fn), short(node)))
657 else:
660 else:
658 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
661 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
659 yield src, fn
662 yield src, fn
660
663
661 def status(self, node1=None, node2=None, files=[], match=util.always,
664 def status(self, node1=None, node2=None, files=[], match=util.always,
662 wlock=None, list_ignored=False, list_clean=False):
665 wlock=None, list_ignored=False, list_clean=False):
663 """return status of files between two nodes or node and working directory
666 """return status of files between two nodes or node and working directory
664
667
665 If node1 is None, use the first dirstate parent instead.
668 If node1 is None, use the first dirstate parent instead.
666 If node2 is None, compare node1 with working directory.
669 If node2 is None, compare node1 with working directory.
667 """
670 """
668
671
669 def fcmp(fn, mf):
672 def fcmp(fn, mf):
670 t1 = self.wread(fn)
673 t1 = self.wread(fn)
671 t2 = self.file(fn).read(mf.get(fn, nullid))
674 t2 = self.file(fn).read(mf.get(fn, nullid))
672 return cmp(t1, t2)
675 return cmp(t1, t2)
673
676
674 def mfmatches(node):
677 def mfmatches(node):
675 change = self.changelog.read(node)
678 change = self.changelog.read(node)
676 mf = dict(self.manifest.read(change[0]))
679 mf = dict(self.manifest.read(change[0]))
677 for fn in mf.keys():
680 for fn in mf.keys():
678 if not match(fn):
681 if not match(fn):
679 del mf[fn]
682 del mf[fn]
680 return mf
683 return mf
681
684
682 modified, added, removed, deleted, unknown = [], [], [], [], []
685 modified, added, removed, deleted, unknown = [], [], [], [], []
683 ignored, clean = [], []
686 ignored, clean = [], []
684
687
685 compareworking = False
688 compareworking = False
686 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
689 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
687 compareworking = True
690 compareworking = True
688
691
689 if not compareworking:
692 if not compareworking:
690 # read the manifest from node1 before the manifest from node2,
693 # read the manifest from node1 before the manifest from node2,
691 # so that we'll hit the manifest cache if we're going through
694 # so that we'll hit the manifest cache if we're going through
692 # all the revisions in parent->child order.
695 # all the revisions in parent->child order.
693 mf1 = mfmatches(node1)
696 mf1 = mfmatches(node1)
694
697
695 # are we comparing the working directory?
698 # are we comparing the working directory?
696 if not node2:
699 if not node2:
697 if not wlock:
700 if not wlock:
698 try:
701 try:
699 wlock = self.wlock(wait=0)
702 wlock = self.wlock(wait=0)
700 except lock.LockException:
703 except lock.LockException:
701 wlock = None
704 wlock = None
702 (lookup, modified, added, removed, deleted, unknown,
705 (lookup, modified, added, removed, deleted, unknown,
703 ignored, clean) = self.dirstate.status(files, match,
706 ignored, clean) = self.dirstate.status(files, match,
704 list_ignored, list_clean)
707 list_ignored, list_clean)
705
708
706 # are we comparing working dir against its parent?
709 # are we comparing working dir against its parent?
707 if compareworking:
710 if compareworking:
708 if lookup:
711 if lookup:
709 # do a full compare of any files that might have changed
712 # do a full compare of any files that might have changed
710 mf2 = mfmatches(self.dirstate.parents()[0])
713 mf2 = mfmatches(self.dirstate.parents()[0])
711 for f in lookup:
714 for f in lookup:
712 if fcmp(f, mf2):
715 if fcmp(f, mf2):
713 modified.append(f)
716 modified.append(f)
714 elif wlock is not None:
717 elif wlock is not None:
715 self.dirstate.update([f], "n")
718 self.dirstate.update([f], "n")
716 else:
719 else:
717 # we are comparing working dir against non-parent
720 # we are comparing working dir against non-parent
718 # generate a pseudo-manifest for the working dir
721 # generate a pseudo-manifest for the working dir
719 mf2 = mfmatches(self.dirstate.parents()[0])
722 mf2 = mfmatches(self.dirstate.parents()[0])
720 for f in lookup + modified + added:
723 for f in lookup + modified + added:
721 mf2[f] = ""
724 mf2[f] = ""
722 for f in removed:
725 for f in removed:
723 if f in mf2:
726 if f in mf2:
724 del mf2[f]
727 del mf2[f]
725 else:
728 else:
726 # we are comparing two revisions
729 # we are comparing two revisions
727 mf2 = mfmatches(node2)
730 mf2 = mfmatches(node2)
728
731
729 if not compareworking:
732 if not compareworking:
730 # flush lists from dirstate before comparing manifests
733 # flush lists from dirstate before comparing manifests
731 modified, added, clean = [], [], []
734 modified, added, clean = [], [], []
732
735
733 # make sure to sort the files so we talk to the disk in a
736 # make sure to sort the files so we talk to the disk in a
734 # reasonable order
737 # reasonable order
735 mf2keys = mf2.keys()
738 mf2keys = mf2.keys()
736 mf2keys.sort()
739 mf2keys.sort()
737 for fn in mf2keys:
740 for fn in mf2keys:
738 if mf1.has_key(fn):
741 if mf1.has_key(fn):
739 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
742 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
740 modified.append(fn)
743 modified.append(fn)
741 elif list_clean:
744 elif list_clean:
742 clean.append(fn)
745 clean.append(fn)
743 del mf1[fn]
746 del mf1[fn]
744 else:
747 else:
745 added.append(fn)
748 added.append(fn)
746
749
747 removed = mf1.keys()
750 removed = mf1.keys()
748
751
749 # sort and return results:
752 # sort and return results:
750 for l in modified, added, removed, deleted, unknown, ignored, clean:
753 for l in modified, added, removed, deleted, unknown, ignored, clean:
751 l.sort()
754 l.sort()
752 return (modified, added, removed, deleted, unknown, ignored, clean)
755 return (modified, added, removed, deleted, unknown, ignored, clean)
753
756
754 def changes(self, node1=None, node2=None, files=[], match=util.always,
757 def changes(self, node1=None, node2=None, files=[], match=util.always,
755 wlock=None, list_ignored=False, list_clean=False):
758 wlock=None, list_ignored=False, list_clean=False):
756 '''DEPRECATED - use status instead'''
759 '''DEPRECATED - use status instead'''
757 marduit = self.status(node1, node2, files, match, wlock,
760 marduit = self.status(node1, node2, files, match, wlock,
758 list_ignored, list_clean)
761 list_ignored, list_clean)
759 if list_ignored:
762 if list_ignored:
760 return marduit[:-1]
763 return marduit[:-1]
761 else:
764 else:
762 return marduit[:-2]
765 return marduit[:-2]
763
766
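Both entry points above return plain tuples of sorted file lists; changes() is just the deprecated wrapper that trims the trailing ignored/clean lists. A short usage sketch, assuming a localrepository instance named repo:

    # sketch: unpack the seven lists status() returns (working dir vs. parent)
    modified, added, removed, deleted, unknown, ignored, clean = \
        repo.status(list_ignored=True, list_clean=True)
    for f in modified:
        print "M", f    # Python 2 era, matching the surrounding code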
764 def add(self, list, wlock=None):
767 def add(self, list, wlock=None):
765 if not wlock:
768 if not wlock:
766 wlock = self.wlock()
769 wlock = self.wlock()
767 for f in list:
770 for f in list:
768 p = self.wjoin(f)
771 p = self.wjoin(f)
769 if not os.path.exists(p):
772 if not os.path.exists(p):
770 self.ui.warn(_("%s does not exist!\n") % f)
773 self.ui.warn(_("%s does not exist!\n") % f)
771 elif not os.path.isfile(p):
774 elif not os.path.isfile(p):
772 self.ui.warn(_("%s not added: only files supported currently\n")
775 self.ui.warn(_("%s not added: only files supported currently\n")
773 % f)
776 % f)
774 elif self.dirstate.state(f) in 'an':
777 elif self.dirstate.state(f) in 'an':
775 self.ui.warn(_("%s already tracked!\n") % f)
778 self.ui.warn(_("%s already tracked!\n") % f)
776 else:
779 else:
777 self.dirstate.update([f], "a")
780 self.dirstate.update([f], "a")
778
781
779 def forget(self, list, wlock=None):
782 def forget(self, list, wlock=None):
780 if not wlock:
783 if not wlock:
781 wlock = self.wlock()
784 wlock = self.wlock()
782 for f in list:
785 for f in list:
783 if self.dirstate.state(f) not in 'ai':
786 if self.dirstate.state(f) not in 'ai':
784 self.ui.warn(_("%s not added!\n") % f)
787 self.ui.warn(_("%s not added!\n") % f)
785 else:
788 else:
786 self.dirstate.forget([f])
789 self.dirstate.forget([f])
787
790
788 def remove(self, list, unlink=False, wlock=None):
791 def remove(self, list, unlink=False, wlock=None):
789 if unlink:
792 if unlink:
790 for f in list:
793 for f in list:
791 try:
794 try:
792 util.unlink(self.wjoin(f))
795 util.unlink(self.wjoin(f))
793 except OSError, inst:
796 except OSError, inst:
794 if inst.errno != errno.ENOENT:
797 if inst.errno != errno.ENOENT:
795 raise
798 raise
796 if not wlock:
799 if not wlock:
797 wlock = self.wlock()
800 wlock = self.wlock()
798 for f in list:
801 for f in list:
799 p = self.wjoin(f)
802 p = self.wjoin(f)
800 if os.path.exists(p):
803 if os.path.exists(p):
801 self.ui.warn(_("%s still exists!\n") % f)
804 self.ui.warn(_("%s still exists!\n") % f)
802 elif self.dirstate.state(f) == 'a':
805 elif self.dirstate.state(f) == 'a':
803 self.dirstate.forget([f])
806 self.dirstate.forget([f])
804 elif f not in self.dirstate:
807 elif f not in self.dirstate:
805 self.ui.warn(_("%s not tracked!\n") % f)
808 self.ui.warn(_("%s not tracked!\n") % f)
806 else:
809 else:
807 self.dirstate.update([f], "r")
810 self.dirstate.update([f], "r")
808
811
809 def undelete(self, list, wlock=None):
812 def undelete(self, list, wlock=None):
810 p = self.dirstate.parents()[0]
813 p = self.dirstate.parents()[0]
811 mn = self.changelog.read(p)[0]
814 mn = self.changelog.read(p)[0]
812 mf = self.manifest.readflags(mn)
815 mf = self.manifest.readflags(mn)
813 m = self.manifest.read(mn)
816 m = self.manifest.read(mn)
814 if not wlock:
817 if not wlock:
815 wlock = self.wlock()
818 wlock = self.wlock()
816 for f in list:
819 for f in list:
817 if self.dirstate.state(f) not in "r":
820 if self.dirstate.state(f) not in "r":
818 self.ui.warn("%s not removed!\n" % f)
821 self.ui.warn("%s not removed!\n" % f)
819 else:
822 else:
820 t = self.file(f).read(m[f])
823 t = self.file(f).read(m[f])
821 self.wwrite(f, t)
824 self.wwrite(f, t)
822 util.set_exec(self.wjoin(f), mf[f])
825 util.set_exec(self.wjoin(f), mf[f])
823 self.dirstate.update([f], "n")
826 self.dirstate.update([f], "n")
824
827
825 def copy(self, source, dest, wlock=None):
828 def copy(self, source, dest, wlock=None):
826 p = self.wjoin(dest)
829 p = self.wjoin(dest)
827 if not os.path.exists(p):
830 if not os.path.exists(p):
828 self.ui.warn(_("%s does not exist!\n") % dest)
831 self.ui.warn(_("%s does not exist!\n") % dest)
829 elif not os.path.isfile(p):
832 elif not os.path.isfile(p):
830 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
833 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
831 else:
834 else:
832 if not wlock:
835 if not wlock:
833 wlock = self.wlock()
836 wlock = self.wlock()
834 if self.dirstate.state(dest) == '?':
837 if self.dirstate.state(dest) == '?':
835 self.dirstate.update([dest], "a")
838 self.dirstate.update([dest], "a")
836 self.dirstate.copy(source, dest)
839 self.dirstate.copy(source, dest)
837
840
838 def heads(self, start=None):
841 def heads(self, start=None):
839 heads = self.changelog.heads(start)
842 heads = self.changelog.heads(start)
840 # sort the output in rev descending order
843 # sort the output in rev descending order
841 heads = [(-self.changelog.rev(h), h) for h in heads]
844 heads = [(-self.changelog.rev(h), h) for h in heads]
842 heads.sort()
845 heads.sort()
843 return [n for (r, n) in heads]
846 return [n for (r, n) in heads]
844
847
845 # branchlookup returns a dict giving a list of branches for
848 # branchlookup returns a dict giving a list of branches for
846 # each head. A branch is defined as the tag of a node or
849 # each head. A branch is defined as the tag of a node or
847 # the branch of the node's parents. If a node has multiple
850 # the branch of the node's parents. If a node has multiple
848 # branch tags, tags are eliminated if they are visible from other
851 # branch tags, tags are eliminated if they are visible from other
849 # branch tags.
852 # branch tags.
850 #
853 #
851 # So, for this graph:  a->b->c->d->e
854 # So, for this graph:  a->b->c->d->e
852 #                       \         /
855 #                       \         /
853 #                        aa -----/
856 #                        aa -----/
854 # a has tag 2.6.12
857 # a has tag 2.6.12
855 # d has tag 2.6.13
858 # d has tag 2.6.13
856 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
859 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
857 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
860 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
858 # from the list.
861 # from the list.
859 #
862 #
860 # It is possible that more than one head will have the same branch tag.
863 # It is possible that more than one head will have the same branch tag.
861 # callers need to check the result for multiple heads under the same
864 # callers need to check the result for multiple heads under the same
862 # branch tag if that is a problem for them (ie checkout of a specific
865 # branch tag if that is a problem for them (ie checkout of a specific
863 # branch).
866 # branch).
864 #
867 #
865 # passing in a specific branch will limit the depth of the search
868 # passing in a specific branch will limit the depth of the search
866 # through the parents. It won't limit the branches returned in the
869 # through the parents. It won't limit the branches returned in the
867 # result though.
870 # result though.
868 def branchlookup(self, heads=None, branch=None):
871 def branchlookup(self, heads=None, branch=None):
869 if not heads:
872 if not heads:
870 heads = self.heads()
873 heads = self.heads()
871 headt = [ h for h in heads ]
874 headt = [ h for h in heads ]
872 chlog = self.changelog
875 chlog = self.changelog
873 branches = {}
876 branches = {}
874 merges = []
877 merges = []
875 seenmerge = {}
878 seenmerge = {}
876
879
877 # traverse the tree once for each head, recording in the branches
880 # traverse the tree once for each head, recording in the branches
878 # dict which tags are visible from this head. The branches
881 # dict which tags are visible from this head. The branches
879 # dict also records which tags are visible from each tag
882 # dict also records which tags are visible from each tag
880 # while we traverse.
883 # while we traverse.
881 while headt or merges:
884 while headt or merges:
882 if merges:
885 if merges:
883 n, found = merges.pop()
886 n, found = merges.pop()
884 visit = [n]
887 visit = [n]
885 else:
888 else:
886 h = headt.pop()
889 h = headt.pop()
887 visit = [h]
890 visit = [h]
888 found = [h]
891 found = [h]
889 seen = {}
892 seen = {}
890 while visit:
893 while visit:
891 n = visit.pop()
894 n = visit.pop()
892 if n in seen:
895 if n in seen:
893 continue
896 continue
894 pp = chlog.parents(n)
897 pp = chlog.parents(n)
895 tags = self.nodetags(n)
898 tags = self.nodetags(n)
896 if tags:
899 if tags:
897 for x in tags:
900 for x in tags:
898 if x == 'tip':
901 if x == 'tip':
899 continue
902 continue
900 for f in found:
903 for f in found:
901 branches.setdefault(f, {})[n] = 1
904 branches.setdefault(f, {})[n] = 1
902 branches.setdefault(n, {})[n] = 1
905 branches.setdefault(n, {})[n] = 1
903 break
906 break
904 if n not in found:
907 if n not in found:
905 found.append(n)
908 found.append(n)
906 if branch in tags:
909 if branch in tags:
907 continue
910 continue
908 seen[n] = 1
911 seen[n] = 1
909 if pp[1] != nullid and n not in seenmerge:
912 if pp[1] != nullid and n not in seenmerge:
910 merges.append((pp[1], [x for x in found]))
913 merges.append((pp[1], [x for x in found]))
911 seenmerge[n] = 1
914 seenmerge[n] = 1
912 if pp[0] != nullid:
915 if pp[0] != nullid:
913 visit.append(pp[0])
916 visit.append(pp[0])
914 # traverse the branches dict, eliminating branch tags from each
917 # traverse the branches dict, eliminating branch tags from each
915 # head that are visible from another branch tag for that head.
918 # head that are visible from another branch tag for that head.
916 out = {}
919 out = {}
917 viscache = {}
920 viscache = {}
918 for h in heads:
921 for h in heads:
919 def visible(node):
922 def visible(node):
920 if node in viscache:
923 if node in viscache:
921 return viscache[node]
924 return viscache[node]
922 ret = {}
925 ret = {}
923 visit = [node]
926 visit = [node]
924 while visit:
927 while visit:
925 x = visit.pop()
928 x = visit.pop()
926 if x in viscache:
929 if x in viscache:
927 ret.update(viscache[x])
930 ret.update(viscache[x])
928 elif x not in ret:
931 elif x not in ret:
929 ret[x] = 1
932 ret[x] = 1
930 if x in branches:
933 if x in branches:
931 visit[len(visit):] = branches[x].keys()
934 visit[len(visit):] = branches[x].keys()
932 viscache[node] = ret
935 viscache[node] = ret
933 return ret
936 return ret
934 if h not in branches:
937 if h not in branches:
935 continue
938 continue
936 # O(n^2), but somewhat limited. This only searches the
939 # O(n^2), but somewhat limited. This only searches the
937 # tags visible from a specific head, not all the tags in the
940 # tags visible from a specific head, not all the tags in the
938 # whole repo.
941 # whole repo.
939 for b in branches[h]:
942 for b in branches[h]:
940 vis = False
943 vis = False
941 for bb in branches[h].keys():
944 for bb in branches[h].keys():
942 if b != bb:
945 if b != bb:
943 if b in visible(bb):
946 if b in visible(bb):
944 vis = True
947 vis = True
945 break
948 break
946 if not vis:
949 if not vis:
947 l = out.setdefault(h, [])
950 l = out.setdefault(h, [])
948 l[len(l):] = self.nodetags(b)
951 l[len(l):] = self.nodetags(b)
949 return out
952 return out
950
953
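As the comment block before branchlookup() explains, the result maps each head to the branch tags still visible from it after pruning. A small usage sketch (repo is assumed; the sample tag follows the a->b->c->d->e example above, where the head keeps only 2.6.13):

    # sketch: branchlookup() -> {head node: [tag, ...]}
    for head, tags in repo.branchlookup().items():
        print short(head), " ".join(tags)   # short() is already imported by this module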
951 def branches(self, nodes):
954 def branches(self, nodes):
952 if not nodes:
955 if not nodes:
953 nodes = [self.changelog.tip()]
956 nodes = [self.changelog.tip()]
954 b = []
957 b = []
955 for n in nodes:
958 for n in nodes:
956 t = n
959 t = n
957 while 1:
960 while 1:
958 p = self.changelog.parents(n)
961 p = self.changelog.parents(n)
959 if p[1] != nullid or p[0] == nullid:
962 if p[1] != nullid or p[0] == nullid:
960 b.append((t, n, p[0], p[1]))
963 b.append((t, n, p[0], p[1]))
961 break
964 break
962 n = p[0]
965 n = p[0]
963 return b
966 return b
964
967
965 def between(self, pairs):
968 def between(self, pairs):
966 r = []
969 r = []
967
970
968 for top, bottom in pairs:
971 for top, bottom in pairs:
969 n, l, i = top, [], 0
972 n, l, i = top, [], 0
970 f = 1
973 f = 1
971
974
972 while n != bottom:
975 while n != bottom:
973 p = self.changelog.parents(n)[0]
976 p = self.changelog.parents(n)[0]
974 if i == f:
977 if i == f:
975 l.append(n)
978 l.append(n)
976 f = f * 2
979 f = f * 2
977 n = p
980 n = p
978 i += 1
981 i += 1
979
982
980 r.append(l)
983 r.append(l)
981
984
982 return r
985 return r
983
986
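between() records only the nodes whose first-parent distance from top is a power of two, so the sample it returns stays logarithmic in the length of the range. A tiny sketch of the offsets kept by the i/f counters above, assuming a purely linear history 100 steps long:

    # sketch: which distances from "top" end up in l for one (top, bottom) pair
    offsets, i, f = [], 0, 1
    while i < 100:          # stand-in for "while n != bottom"
        if i == f:
            offsets.append(i)
            f = f * 2
        i += 1
    # offsets == [1, 2, 4, 8, 16, 32, 64]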
984 def findincoming(self, remote, base=None, heads=None, force=False):
987 def findincoming(self, remote, base=None, heads=None, force=False):
985 """Return list of roots of the subsets of missing nodes from remote
988 """Return list of roots of the subsets of missing nodes from remote
986
989
987 If base dict is specified, assume that these nodes and their parents
990 If base dict is specified, assume that these nodes and their parents
988 exist on the remote side and that no child of a node of base exists
991 exist on the remote side and that no child of a node of base exists
989 in both remote and self.
992 in both remote and self.
990 Furthermore base will be updated to include the nodes that exist
993 Furthermore base will be updated to include the nodes that exist
991 in both self and remote but none of whose children exist in both.
994 in both self and remote but none of whose children exist in both.
992 If a list of heads is specified, return only nodes which are heads
995 If a list of heads is specified, return only nodes which are heads
993 or ancestors of these heads.
996 or ancestors of these heads.
994
997
995 All the ancestors of base are in self and in remote.
998 All the ancestors of base are in self and in remote.
996 All the descendants of the list returned are missing in self.
999 All the descendants of the list returned are missing in self.
997 (and so we know that the rest of the nodes are missing in remote, see
1000 (and so we know that the rest of the nodes are missing in remote, see
998 outgoing)
1001 outgoing)
999 """
1002 """
1000 m = self.changelog.nodemap
1003 m = self.changelog.nodemap
1001 search = []
1004 search = []
1002 fetch = {}
1005 fetch = {}
1003 seen = {}
1006 seen = {}
1004 seenbranch = {}
1007 seenbranch = {}
1005 if base == None:
1008 if base == None:
1006 base = {}
1009 base = {}
1007
1010
1008 if not heads:
1011 if not heads:
1009 heads = remote.heads()
1012 heads = remote.heads()
1010
1013
1011 if self.changelog.tip() == nullid:
1014 if self.changelog.tip() == nullid:
1012 base[nullid] = 1
1015 base[nullid] = 1
1013 if heads != [nullid]:
1016 if heads != [nullid]:
1014 return [nullid]
1017 return [nullid]
1015 return []
1018 return []
1016
1019
1017 # assume we're closer to the tip than the root
1020 # assume we're closer to the tip than the root
1018 # and start by examining the heads
1021 # and start by examining the heads
1019 self.ui.status(_("searching for changes\n"))
1022 self.ui.status(_("searching for changes\n"))
1020
1023
1021 unknown = []
1024 unknown = []
1022 for h in heads:
1025 for h in heads:
1023 if h not in m:
1026 if h not in m:
1024 unknown.append(h)
1027 unknown.append(h)
1025 else:
1028 else:
1026 base[h] = 1
1029 base[h] = 1
1027
1030
1028 if not unknown:
1031 if not unknown:
1029 return []
1032 return []
1030
1033
1031 req = dict.fromkeys(unknown)
1034 req = dict.fromkeys(unknown)
1032 reqcnt = 0
1035 reqcnt = 0
1033
1036
1034 # search through remote branches
1037 # search through remote branches
1035 # a 'branch' here is a linear segment of history, with four parts:
1038 # a 'branch' here is a linear segment of history, with four parts:
1036 # head, root, first parent, second parent
1039 # head, root, first parent, second parent
1037 # (a branch always has two parents (or none) by definition)
1040 # (a branch always has two parents (or none) by definition)
1038 unknown = remote.branches(unknown)
1041 unknown = remote.branches(unknown)
1039 while unknown:
1042 while unknown:
1040 r = []
1043 r = []
1041 while unknown:
1044 while unknown:
1042 n = unknown.pop(0)
1045 n = unknown.pop(0)
1043 if n[0] in seen:
1046 if n[0] in seen:
1044 continue
1047 continue
1045
1048
1046 self.ui.debug(_("examining %s:%s\n")
1049 self.ui.debug(_("examining %s:%s\n")
1047 % (short(n[0]), short(n[1])))
1050 % (short(n[0]), short(n[1])))
1048 if n[0] == nullid: # found the end of the branch
1051 if n[0] == nullid: # found the end of the branch
1049 pass
1052 pass
1050 elif n in seenbranch:
1053 elif n in seenbranch:
1051 self.ui.debug(_("branch already found\n"))
1054 self.ui.debug(_("branch already found\n"))
1052 continue
1055 continue
1053 elif n[1] and n[1] in m: # do we know the base?
1056 elif n[1] and n[1] in m: # do we know the base?
1054 self.ui.debug(_("found incomplete branch %s:%s\n")
1057 self.ui.debug(_("found incomplete branch %s:%s\n")
1055 % (short(n[0]), short(n[1])))
1058 % (short(n[0]), short(n[1])))
1056 search.append(n) # schedule branch range for scanning
1059 search.append(n) # schedule branch range for scanning
1057 seenbranch[n] = 1
1060 seenbranch[n] = 1
1058 else:
1061 else:
1059 if n[1] not in seen and n[1] not in fetch:
1062 if n[1] not in seen and n[1] not in fetch:
1060 if n[2] in m and n[3] in m:
1063 if n[2] in m and n[3] in m:
1061 self.ui.debug(_("found new changeset %s\n") %
1064 self.ui.debug(_("found new changeset %s\n") %
1062 short(n[1]))
1065 short(n[1]))
1063 fetch[n[1]] = 1 # earliest unknown
1066 fetch[n[1]] = 1 # earliest unknown
1064 for p in n[2:4]:
1067 for p in n[2:4]:
1065 if p in m:
1068 if p in m:
1066 base[p] = 1 # latest known
1069 base[p] = 1 # latest known
1067
1070
1068 for p in n[2:4]:
1071 for p in n[2:4]:
1069 if p not in req and p not in m:
1072 if p not in req and p not in m:
1070 r.append(p)
1073 r.append(p)
1071 req[p] = 1
1074 req[p] = 1
1072 seen[n[0]] = 1
1075 seen[n[0]] = 1
1073
1076
1074 if r:
1077 if r:
1075 reqcnt += 1
1078 reqcnt += 1
1076 self.ui.debug(_("request %d: %s\n") %
1079 self.ui.debug(_("request %d: %s\n") %
1077 (reqcnt, " ".join(map(short, r))))
1080 (reqcnt, " ".join(map(short, r))))
1078 for p in range(0, len(r), 10):
1081 for p in range(0, len(r), 10):
1079 for b in remote.branches(r[p:p+10]):
1082 for b in remote.branches(r[p:p+10]):
1080 self.ui.debug(_("received %s:%s\n") %
1083 self.ui.debug(_("received %s:%s\n") %
1081 (short(b[0]), short(b[1])))
1084 (short(b[0]), short(b[1])))
1082 unknown.append(b)
1085 unknown.append(b)
1083
1086
1084 # do binary search on the branches we found
1087 # do binary search on the branches we found
1085 while search:
1088 while search:
1086 n = search.pop(0)
1089 n = search.pop(0)
1087 reqcnt += 1
1090 reqcnt += 1
1088 l = remote.between([(n[0], n[1])])[0]
1091 l = remote.between([(n[0], n[1])])[0]
1089 l.append(n[1])
1092 l.append(n[1])
1090 p = n[0]
1093 p = n[0]
1091 f = 1
1094 f = 1
1092 for i in l:
1095 for i in l:
1093 self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
1096 self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
1094 if i in m:
1097 if i in m:
1095 if f <= 2:
1098 if f <= 2:
1096 self.ui.debug(_("found new branch changeset %s\n") %
1099 self.ui.debug(_("found new branch changeset %s\n") %
1097 short(p))
1100 short(p))
1098 fetch[p] = 1
1101 fetch[p] = 1
1099 base[i] = 1
1102 base[i] = 1
1100 else:
1103 else:
1101 self.ui.debug(_("narrowed branch search to %s:%s\n")
1104 self.ui.debug(_("narrowed branch search to %s:%s\n")
1102 % (short(p), short(i)))
1105 % (short(p), short(i)))
1103 search.append((p, i))
1106 search.append((p, i))
1104 break
1107 break
1105 p, f = i, f * 2
1108 p, f = i, f * 2
1106
1109
1107 # sanity check our fetch list
1110 # sanity check our fetch list
1108 for f in fetch.keys():
1111 for f in fetch.keys():
1109 if f in m:
1112 if f in m:
1110 raise repo.RepoError(_("already have changeset ") + short(f[:4]))
1113 raise repo.RepoError(_("already have changeset ") + short(f[:4]))
1111
1114
1112 if base.keys() == [nullid]:
1115 if base.keys() == [nullid]:
1113 if force:
1116 if force:
1114 self.ui.warn(_("warning: repository is unrelated\n"))
1117 self.ui.warn(_("warning: repository is unrelated\n"))
1115 else:
1118 else:
1116 raise util.Abort(_("repository is unrelated"))
1119 raise util.Abort(_("repository is unrelated"))
1117
1120
1118 self.ui.note(_("found new changesets starting at ") +
1121 self.ui.note(_("found new changesets starting at ") +
1119 " ".join([short(f) for f in fetch]) + "\n")
1122 " ".join([short(f) for f in fetch]) + "\n")
1120
1123
1121 self.ui.debug(_("%d total queries\n") % reqcnt)
1124 self.ui.debug(_("%d total queries\n") % reqcnt)
1122
1125
1123 return fetch.keys()
1126 return fetch.keys()
1124
1127
1125 def findoutgoing(self, remote, base=None, heads=None, force=False):
1128 def findoutgoing(self, remote, base=None, heads=None, force=False):
1126 """Return list of nodes that are roots of subsets not in remote
1129 """Return list of nodes that are roots of subsets not in remote
1127
1130
1128 If base dict is specified, assume that these nodes and their parents
1131 If base dict is specified, assume that these nodes and their parents
1129 exist on the remote side.
1132 exist on the remote side.
1130 If a list of heads is specified, return only nodes which are heads
1133 If a list of heads is specified, return only nodes which are heads
1131 or ancestors of these heads, and return a second element which
1134 or ancestors of these heads, and return a second element which
1132 contains all remote heads which get new children.
1135 contains all remote heads which get new children.
1133 """
1136 """
1134 if base == None:
1137 if base == None:
1135 base = {}
1138 base = {}
1136 self.findincoming(remote, base, heads, force=force)
1139 self.findincoming(remote, base, heads, force=force)
1137
1140
1138 self.ui.debug(_("common changesets up to ")
1141 self.ui.debug(_("common changesets up to ")
1139 + " ".join(map(short, base.keys())) + "\n")
1142 + " ".join(map(short, base.keys())) + "\n")
1140
1143
1141 remain = dict.fromkeys(self.changelog.nodemap)
1144 remain = dict.fromkeys(self.changelog.nodemap)
1142
1145
1143 # prune everything remote has from the tree
1146 # prune everything remote has from the tree
1144 del remain[nullid]
1147 del remain[nullid]
1145 remove = base.keys()
1148 remove = base.keys()
1146 while remove:
1149 while remove:
1147 n = remove.pop(0)
1150 n = remove.pop(0)
1148 if n in remain:
1151 if n in remain:
1149 del remain[n]
1152 del remain[n]
1150 for p in self.changelog.parents(n):
1153 for p in self.changelog.parents(n):
1151 remove.append(p)
1154 remove.append(p)
1152
1155
1153 # find every node whose parents have been pruned
1156 # find every node whose parents have been pruned
1154 subset = []
1157 subset = []
1155 # find every remote head that will get new children
1158 # find every remote head that will get new children
1156 updated_heads = {}
1159 updated_heads = {}
1157 for n in remain:
1160 for n in remain:
1158 p1, p2 = self.changelog.parents(n)
1161 p1, p2 = self.changelog.parents(n)
1159 if p1 not in remain and p2 not in remain:
1162 if p1 not in remain and p2 not in remain:
1160 subset.append(n)
1163 subset.append(n)
1161 if heads:
1164 if heads:
1162 if p1 in heads:
1165 if p1 in heads:
1163 updated_heads[p1] = True
1166 updated_heads[p1] = True
1164 if p2 in heads:
1167 if p2 in heads:
1165 updated_heads[p2] = True
1168 updated_heads[p2] = True
1166
1169
1167 # this is the set of all roots we have to push
1170 # this is the set of all roots we have to push
1168 if heads:
1171 if heads:
1169 return subset, updated_heads.keys()
1172 return subset, updated_heads.keys()
1170 else:
1173 else:
1171 return subset
1174 return subset
1172
1175
1173 def pull(self, remote, heads=None, force=False):
1176 def pull(self, remote, heads=None, force=False):
1174 l = self.lock()
1177 l = self.lock()
1175
1178
1176 fetch = self.findincoming(remote, force=force)
1179 fetch = self.findincoming(remote, force=force)
1177 if fetch == [nullid]:
1180 if fetch == [nullid]:
1178 self.ui.status(_("requesting all changes\n"))
1181 self.ui.status(_("requesting all changes\n"))
1179
1182
1180 if not fetch:
1183 if not fetch:
1181 self.ui.status(_("no changes found\n"))
1184 self.ui.status(_("no changes found\n"))
1182 return 0
1185 return 0
1183
1186
1184 if heads is None:
1187 if heads is None:
1185 cg = remote.changegroup(fetch, 'pull')
1188 cg = remote.changegroup(fetch, 'pull')
1186 else:
1189 else:
1187 cg = remote.changegroupsubset(fetch, heads, 'pull')
1190 cg = remote.changegroupsubset(fetch, heads, 'pull')
1188 return self.addchangegroup(cg, 'pull')
1191 return self.addchangegroup(cg, 'pull', remote.url())
1189
1192
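The remote.url() argument added on the pull path above hands the source location down to addchangegroup(), where the hooks that run while the bundle is applied can pick it up. A hedged sketch of an in-process hook consuming such a url keyword (module, function name and the exact keyword set are assumptions, since addchangegroup() itself is outside this hunk; a shell hook would see the same value as $HG_URL):

    # myhooks.py, e.g. "incoming.log = python:myhooks.log_incoming" under [hooks]
    def log_incoming(ui, repo, hooktype, node=None, source=None, url=None, **kwargs):
        ui.status("incoming %s from %s (source: %s)\n" % (node, url, source))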
1190 def push(self, remote, force=False, revs=None):
1193 def push(self, remote, force=False, revs=None):
1191 # there are two ways to push to remote repo:
1194 # there are two ways to push to remote repo:
1192 #
1195 #
1193 # addchangegroup assumes local user can lock remote
1196 # addchangegroup assumes local user can lock remote
1194 # repo (local filesystem, old ssh servers).
1197 # repo (local filesystem, old ssh servers).
1195 #
1198 #
1196 # unbundle assumes local user cannot lock remote repo (new ssh
1199 # unbundle assumes local user cannot lock remote repo (new ssh
1197 # servers, http servers).
1200 # servers, http servers).
1198
1201
1199 if remote.capable('unbundle'):
1202 if remote.capable('unbundle'):
1200 return self.push_unbundle(remote, force, revs)
1203 return self.push_unbundle(remote, force, revs)
1201 return self.push_addchangegroup(remote, force, revs)
1204 return self.push_addchangegroup(remote, force, revs)
1202
1205
1203 def prepush(self, remote, force, revs):
1206 def prepush(self, remote, force, revs):
1204 base = {}
1207 base = {}
1205 remote_heads = remote.heads()
1208 remote_heads = remote.heads()
1206 inc = self.findincoming(remote, base, remote_heads, force=force)
1209 inc = self.findincoming(remote, base, remote_heads, force=force)
1207 if not force and inc:
1210 if not force and inc:
1208 self.ui.warn(_("abort: unsynced remote changes!\n"))
1211 self.ui.warn(_("abort: unsynced remote changes!\n"))
1209 self.ui.status(_("(did you forget to sync?"
1212 self.ui.status(_("(did you forget to sync?"
1210 " use push -f to force)\n"))
1213 " use push -f to force)\n"))
1211 return None, 1
1214 return None, 1
1212
1215
1213 update, updated_heads = self.findoutgoing(remote, base, remote_heads)
1216 update, updated_heads = self.findoutgoing(remote, base, remote_heads)
1214 if revs is not None:
1217 if revs is not None:
1215 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
1218 msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
1216 else:
1219 else:
1217 bases, heads = update, self.changelog.heads()
1220 bases, heads = update, self.changelog.heads()
1218
1221
1219 if not bases:
1222 if not bases:
1220 self.ui.status(_("no changes found\n"))
1223 self.ui.status(_("no changes found\n"))
1221 return None, 1
1224 return None, 1
1222 elif not force:
1225 elif not force:
1223 # FIXME we don't properly detect creation of new heads
1226 # FIXME we don't properly detect creation of new heads
1224 # in the push -r case, assume the user knows what he's doing
1227 # in the push -r case, assume the user knows what he's doing
1225 if not revs and len(remote_heads) < len(heads) \
1228 if not revs and len(remote_heads) < len(heads) \
1226 and remote_heads != [nullid]:
1229 and remote_heads != [nullid]:
1227 self.ui.warn(_("abort: push creates new remote branches!\n"))
1230 self.ui.warn(_("abort: push creates new remote branches!\n"))
1228 self.ui.status(_("(did you forget to merge?"
1231 self.ui.status(_("(did you forget to merge?"
1229 " use push -f to force)\n"))
1232 " use push -f to force)\n"))
1230 return None, 1
1233 return None, 1
1231
1234
1232 if revs is None:
1235 if revs is None:
1233 cg = self.changegroup(update, 'push')
1236 cg = self.changegroup(update, 'push')
1234 else:
1237 else:
1235 cg = self.changegroupsubset(update, revs, 'push')
1238 cg = self.changegroupsubset(update, revs, 'push')
1236 return cg, remote_heads
1239 return cg, remote_heads
1237
1240
1238 def push_addchangegroup(self, remote, force, revs):
1241 def push_addchangegroup(self, remote, force, revs):
1239 lock = remote.lock()
1242 lock = remote.lock()
1240
1243
1241 ret = self.prepush(remote, force, revs)
1244 ret = self.prepush(remote, force, revs)
1242 if ret[0] is not None:
1245 if ret[0] is not None:
1243 cg, remote_heads = ret
1246 cg, remote_heads = ret
1244 return remote.addchangegroup(cg, 'push')
1247 return remote.addchangegroup(cg, 'push', self.url())
1245 return ret[1]
1248 return ret[1]
1246
1249
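On the push-with-lock path the third argument is self.url() instead, so the repository receiving the changegroup learns where the changes were pushed from. A deliberately rough sketch of the forwarding addchangegroup() is expected to do with that argument (its real body is not part of this hunk, so the signature and hook names below are assumptions):

    # sketch only, not the actual addchangegroup() implementation
    def addchangegroup(self, source, srctype, url):
        self.hook('prechangegroup', throw=True, source=srctype, url=url)
        # ... read the changegroup stream and apply it in a transaction ...
        self.hook('changegroup', source=srctype, url=url)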
1247 def push_unbundle(self, remote, force, revs):
1250 def push_unbundle(self, remote, force, revs):
1248 # local repo finds heads on server, finds out what revs it
1251 # local repo finds heads on server, finds out what revs it
1249 # must push. once revs transferred, if server finds it has
1252 # must push. once revs transferred, if server finds it has
1250 # different heads (someone else won commit/push race), server
1253 # different heads (someone else won commit/push race), server
1251 # aborts.
1254 # aborts.
1252
1255
1253 ret = self.prepush(remote, force, revs)
1256 ret = self.prepush(remote, force, revs)
1254 if ret[0] is not None:
1257 if ret[0] is not None:
1255 cg, remote_heads = ret
1258 cg, remote_heads = ret
1256 if force: remote_heads = ['force']
1259 if force: remote_heads = ['force']
1257 return remote.unbundle(cg, remote_heads, 'push')
1260 return remote.unbundle(cg, remote_heads, 'push')
1258 return ret[1]
1261 return ret[1]
1259
1262
1260 def changegroupsubset(self, bases, heads, source):
1263 def changegroupsubset(self, bases, heads, source):
1261 """This function generates a changegroup consisting of all the nodes
1264 """This function generates a changegroup consisting of all the nodes
1262 that are descendants of any of the bases, and ancestors of any of
1265 that are descendants of any of the bases, and ancestors of any of
1263 the heads.
1266 the heads.
1264
1267
1265 It is fairly complex as determining which filenodes and which
1268 It is fairly complex as determining which filenodes and which
1266 manifest nodes need to be included for the changeset to be complete
1269 manifest nodes need to be included for the changeset to be complete
1267 is non-trivial.
1270 is non-trivial.
1268
1271
1269 Another wrinkle is doing the reverse, figuring out which changeset in
1272 Another wrinkle is doing the reverse, figuring out which changeset in
1270 the changegroup a particular filenode or manifestnode belongs to."""
1273 the changegroup a particular filenode or manifestnode belongs to."""
1271
1274
1272 self.hook('preoutgoing', throw=True, source=source)
1275 self.hook('preoutgoing', throw=True, source=source)
1273
1276
1274 # Set up some initial variables
1277 # Set up some initial variables
1275 # Make it easy to refer to self.changelog
1278 # Make it easy to refer to self.changelog
1276 cl = self.changelog
1279 cl = self.changelog
1277 # msng is short for missing - compute the list of changesets in this
1280 # msng is short for missing - compute the list of changesets in this
1278 # changegroup.
1281 # changegroup.
1279 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1282 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1280 # Some bases may turn out to be superfluous, and some heads may be
1283 # Some bases may turn out to be superfluous, and some heads may be
1281 # too. nodesbetween will return the minimal set of bases and heads
1284 # too. nodesbetween will return the minimal set of bases and heads
1282 # necessary to re-create the changegroup.
1285 # necessary to re-create the changegroup.
1283
1286
1284 # Known heads are the list of heads that it is assumed the recipient
1287 # Known heads are the list of heads that it is assumed the recipient
1285 # of this changegroup will know about.
1288 # of this changegroup will know about.
1286 knownheads = {}
1289 knownheads = {}
1287 # We assume that all parents of bases are known heads.
1290 # We assume that all parents of bases are known heads.
1288 for n in bases:
1291 for n in bases:
1289 for p in cl.parents(n):
1292 for p in cl.parents(n):
1290 if p != nullid:
1293 if p != nullid:
1291 knownheads[p] = 1
1294 knownheads[p] = 1
1292 knownheads = knownheads.keys()
1295 knownheads = knownheads.keys()
1293 if knownheads:
1296 if knownheads:
1294 # Now that we know what heads are known, we can compute which
1297 # Now that we know what heads are known, we can compute which
1295 # changesets are known. The recipient must know about all
1298 # changesets are known. The recipient must know about all
1296 # changesets required to reach the known heads from the null
1299 # changesets required to reach the known heads from the null
1297 # changeset.
1300 # changeset.
1298 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1301 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1299 junk = None
1302 junk = None
1300 # Transform the list into an ersatz set.
1303 # Transform the list into an ersatz set.
1301 has_cl_set = dict.fromkeys(has_cl_set)
1304 has_cl_set = dict.fromkeys(has_cl_set)
1302 else:
1305 else:
1303 # If there were no known heads, the recipient cannot be assumed to
1306 # If there were no known heads, the recipient cannot be assumed to
1304 # know about any changesets.
1307 # know about any changesets.
1305 has_cl_set = {}
1308 has_cl_set = {}
1306
1309
1307 # Make it easy to refer to self.manifest
1310 # Make it easy to refer to self.manifest
1308 mnfst = self.manifest
1311 mnfst = self.manifest
1309 # We don't know which manifests are missing yet
1312 # We don't know which manifests are missing yet
1310 msng_mnfst_set = {}
1313 msng_mnfst_set = {}
1311 # Nor do we know which filenodes are missing.
1314 # Nor do we know which filenodes are missing.
1312 msng_filenode_set = {}
1315 msng_filenode_set = {}
1313
1316
1314 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1317 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1315 junk = None
1318 junk = None
1316
1319
1317 # A changeset always belongs to itself, so the changenode lookup
1320 # A changeset always belongs to itself, so the changenode lookup
1318 # function for a changenode is identity.
1321 # function for a changenode is identity.
1319 def identity(x):
1322 def identity(x):
1320 return x
1323 return x
1321
1324
1322 # A function generating function. Sets up an environment for the
1325 # A function generating function. Sets up an environment for the
1323 # inner function.
1326 # inner function.
1324 def cmp_by_rev_func(revlog):
1327 def cmp_by_rev_func(revlog):
1325 # Compare two nodes by their revision number in the environment's
1328 # Compare two nodes by their revision number in the environment's
1326 # revision history. Since the revision number both represents the
1329 # revision history. Since the revision number both represents the
1327 # most efficient order to read the nodes in, and represents a
1330 # most efficient order to read the nodes in, and represents a
1328 # topological sorting of the nodes, this function is often useful.
1331 # topological sorting of the nodes, this function is often useful.
1329 def cmp_by_rev(a, b):
1332 def cmp_by_rev(a, b):
1330 return cmp(revlog.rev(a), revlog.rev(b))
1333 return cmp(revlog.rev(a), revlog.rev(b))
1331 return cmp_by_rev
1334 return cmp_by_rev
1332
1335
1333 # If we determine that a particular file or manifest node must be a
1336 # If we determine that a particular file or manifest node must be a
1334 # node that the recipient of the changegroup will already have, we can
1337 # node that the recipient of the changegroup will already have, we can
1335 # also assume the recipient will have all the parents. This function
1338 # also assume the recipient will have all the parents. This function
1336 # prunes them from the set of missing nodes.
1339 # prunes them from the set of missing nodes.
1337 def prune_parents(revlog, hasset, msngset):
1340 def prune_parents(revlog, hasset, msngset):
1338 haslst = hasset.keys()
1341 haslst = hasset.keys()
1339 haslst.sort(cmp_by_rev_func(revlog))
1342 haslst.sort(cmp_by_rev_func(revlog))
1340 for node in haslst:
1343 for node in haslst:
1341 parentlst = [p for p in revlog.parents(node) if p != nullid]
1344 parentlst = [p for p in revlog.parents(node) if p != nullid]
1342 while parentlst:
1345 while parentlst:
1343 n = parentlst.pop()
1346 n = parentlst.pop()
1344 if n not in hasset:
1347 if n not in hasset:
1345 hasset[n] = 1
1348 hasset[n] = 1
1346 p = [p for p in revlog.parents(n) if p != nullid]
1349 p = [p for p in revlog.parents(n) if p != nullid]
1347 parentlst.extend(p)
1350 parentlst.extend(p)
1348 for n in hasset:
1351 for n in hasset:
1349 msngset.pop(n, None)
1352 msngset.pop(n, None)
1350
1353
1351 # This is a function generating function used to set up an environment
1354 # This is a function generating function used to set up an environment
1352 # for the inner function to execute in.
1355 # for the inner function to execute in.
1353 def manifest_and_file_collector(changedfileset):
1356 def manifest_and_file_collector(changedfileset):
1354 # This is an information gathering function that gathers
1357 # This is an information gathering function that gathers
1355 # information from each changeset node that goes out as part of
1358 # information from each changeset node that goes out as part of
1356 # the changegroup. The information gathered is a list of which
1359 # the changegroup. The information gathered is a list of which
1357 # manifest nodes are potentially required (the recipient may
1360 # manifest nodes are potentially required (the recipient may
1358 # already have them) and total list of all files which were
1361 # already have them) and total list of all files which were
1359 # changed in any changeset in the changegroup.
1362 # changed in any changeset in the changegroup.
1360 #
1363 #
1361 # We also remember the first changenode we saw any manifest
1364 # We also remember the first changenode we saw any manifest
1362 # referenced by so we can later determine which changenode 'owns'
1365 # referenced by so we can later determine which changenode 'owns'
1363 # the manifest.
1366 # the manifest.
1364 def collect_manifests_and_files(clnode):
1367 def collect_manifests_and_files(clnode):
1365 c = cl.read(clnode)
1368 c = cl.read(clnode)
1366 for f in c[3]:
1369 for f in c[3]:
1367 # This is to make sure we only have one instance of each
1370 # This is to make sure we only have one instance of each
1368 # filename string for each filename.
1371 # filename string for each filename.
1369 changedfileset.setdefault(f, f)
1372 changedfileset.setdefault(f, f)
1370 msng_mnfst_set.setdefault(c[0], clnode)
1373 msng_mnfst_set.setdefault(c[0], clnode)
1371 return collect_manifests_and_files
1374 return collect_manifests_and_files
1372
1375
1373 # Figure out which manifest nodes (of the ones we think might be part
1376 # Figure out which manifest nodes (of the ones we think might be part
1374 # of the changegroup) the recipient must know about and remove them
1377 # of the changegroup) the recipient must know about and remove them
1375 # from the changegroup.
1378 # from the changegroup.
1376 def prune_manifests():
1379 def prune_manifests():
1377 has_mnfst_set = {}
1380 has_mnfst_set = {}
1378 for n in msng_mnfst_set:
1381 for n in msng_mnfst_set:
1379 # If a 'missing' manifest thinks it belongs to a changenode
1382 # If a 'missing' manifest thinks it belongs to a changenode
1380 # the recipient is assumed to have, obviously the recipient
1383 # the recipient is assumed to have, obviously the recipient
1381 # must have that manifest.
1384 # must have that manifest.
1382 linknode = cl.node(mnfst.linkrev(n))
1385 linknode = cl.node(mnfst.linkrev(n))
1383 if linknode in has_cl_set:
1386 if linknode in has_cl_set:
1384 has_mnfst_set[n] = 1
1387 has_mnfst_set[n] = 1
1385 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1388 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1386
1389
1387 # Use the information collected in collect_manifests_and_files to say
1390 # Use the information collected in collect_manifests_and_files to say
1388 # which changenode any manifestnode belongs to.
1391 # which changenode any manifestnode belongs to.
1389 def lookup_manifest_link(mnfstnode):
1392 def lookup_manifest_link(mnfstnode):
1390 return msng_mnfst_set[mnfstnode]
1393 return msng_mnfst_set[mnfstnode]
1391
1394
1392         # A function generating function that sets up the initial environment
1395         # A function generating function that sets up the initial environment
1393         # for the inner function.
1396         # for the inner function.
1394 def filenode_collector(changedfiles):
1397 def filenode_collector(changedfiles):
1395 next_rev = [0]
1398 next_rev = [0]
1396 # This gathers information from each manifestnode included in the
1399 # This gathers information from each manifestnode included in the
1397 # changegroup about which filenodes the manifest node references
1400 # changegroup about which filenodes the manifest node references
1398 # so we can include those in the changegroup too.
1401 # so we can include those in the changegroup too.
1399 #
1402 #
1400 # It also remembers which changenode each filenode belongs to. It
1403 # It also remembers which changenode each filenode belongs to. It
1401             # does this by assuming that a filenode belongs to the changenode
1404             # does this by assuming that a filenode belongs to the changenode
1402             # that the first manifest referencing it belongs to.
1405             # that the first manifest referencing it belongs to.
1403 def collect_msng_filenodes(mnfstnode):
1406 def collect_msng_filenodes(mnfstnode):
1404 r = mnfst.rev(mnfstnode)
1407 r = mnfst.rev(mnfstnode)
1405 if r == next_rev[0]:
1408 if r == next_rev[0]:
1406 # If the last rev we looked at was the one just previous,
1409 # If the last rev we looked at was the one just previous,
1407 # we only need to see a diff.
1410 # we only need to see a diff.
1408 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1411 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1409 # For each line in the delta
1412 # For each line in the delta
1410 for dline in delta.splitlines():
1413 for dline in delta.splitlines():
1411 # get the filename and filenode for that line
1414 # get the filename and filenode for that line
1412 f, fnode = dline.split('\0')
1415 f, fnode = dline.split('\0')
1413 fnode = bin(fnode[:40])
1416 fnode = bin(fnode[:40])
1414 f = changedfiles.get(f, None)
1417 f = changedfiles.get(f, None)
1415 # And if the file is in the list of files we care
1418 # And if the file is in the list of files we care
1416 # about.
1419 # about.
1417 if f is not None:
1420 if f is not None:
1418 # Get the changenode this manifest belongs to
1421 # Get the changenode this manifest belongs to
1419 clnode = msng_mnfst_set[mnfstnode]
1422 clnode = msng_mnfst_set[mnfstnode]
1420 # Create the set of filenodes for the file if
1423 # Create the set of filenodes for the file if
1421 # there isn't one already.
1424 # there isn't one already.
1422 ndset = msng_filenode_set.setdefault(f, {})
1425 ndset = msng_filenode_set.setdefault(f, {})
1423 # And set the filenode's changelog node to the
1426 # And set the filenode's changelog node to the
1424 # manifest's if it hasn't been set already.
1427 # manifest's if it hasn't been set already.
1425 ndset.setdefault(fnode, clnode)
1428 ndset.setdefault(fnode, clnode)
1426 else:
1429 else:
1427 # Otherwise we need a full manifest.
1430 # Otherwise we need a full manifest.
1428 m = mnfst.read(mnfstnode)
1431 m = mnfst.read(mnfstnode)
1429                     # For every file we care about.
1432                     # For every file we care about.
1430 for f in changedfiles:
1433 for f in changedfiles:
1431 fnode = m.get(f, None)
1434 fnode = m.get(f, None)
1432 # If it's in the manifest
1435 # If it's in the manifest
1433 if fnode is not None:
1436 if fnode is not None:
1434 # See comments above.
1437 # See comments above.
1435 clnode = msng_mnfst_set[mnfstnode]
1438 clnode = msng_mnfst_set[mnfstnode]
1436 ndset = msng_filenode_set.setdefault(f, {})
1439 ndset = msng_filenode_set.setdefault(f, {})
1437 ndset.setdefault(fnode, clnode)
1440 ndset.setdefault(fnode, clnode)
1438 # Remember the revision we hope to see next.
1441 # Remember the revision we hope to see next.
1439 next_rev[0] = r + 1
1442 next_rev[0] = r + 1
1440 return collect_msng_filenodes
1443 return collect_msng_filenodes
1441
1444
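
Each delta line split on '\0' above has the shape '<path>\0<40 hex characters>', where the hex part is the filenode id and bin() is Mercurial's hex-to-binary helper. A tiny standalone illustration using binascii directly:

    # Decomposing one manifest line; binascii.unhexlify stands in for bin().
    from binascii import unhexlify

    line = "some/file.c\0" + "0f" * 20
    f, fnode_hex = line.split('\0')
    fnode = unhexlify(fnode_hex[:40])   # 20-byte binary node id
    assert len(fnode) == 20
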
1442         # We have a list of filenodes we think we need for a file; let's remove
1445         # We have a list of filenodes we think we need for a file; let's remove
1443         # all those we know the recipient must have.
1446         # all those we know the recipient must have.
1444 def prune_filenodes(f, filerevlog):
1447 def prune_filenodes(f, filerevlog):
1445 msngset = msng_filenode_set[f]
1448 msngset = msng_filenode_set[f]
1446 hasset = {}
1449 hasset = {}
1447 # If a 'missing' filenode thinks it belongs to a changenode we
1450 # If a 'missing' filenode thinks it belongs to a changenode we
1448 # assume the recipient must have, then the recipient must have
1451 # assume the recipient must have, then the recipient must have
1449 # that filenode.
1452 # that filenode.
1450 for n in msngset:
1453 for n in msngset:
1451 clnode = cl.node(filerevlog.linkrev(n))
1454 clnode = cl.node(filerevlog.linkrev(n))
1452 if clnode in has_cl_set:
1455 if clnode in has_cl_set:
1453 hasset[n] = 1
1456 hasset[n] = 1
1454 prune_parents(filerevlog, hasset, msngset)
1457 prune_parents(filerevlog, hasset, msngset)
1455
1458
1456         # A function generating function that sets up the context for the
1459         # A function generating function that sets up the context for the
1457 # inner function.
1460 # inner function.
1458 def lookup_filenode_link_func(fname):
1461 def lookup_filenode_link_func(fname):
1459 msngset = msng_filenode_set[fname]
1462 msngset = msng_filenode_set[fname]
1460 # Lookup the changenode the filenode belongs to.
1463 # Lookup the changenode the filenode belongs to.
1461 def lookup_filenode_link(fnode):
1464 def lookup_filenode_link(fnode):
1462 return msngset[fnode]
1465 return msngset[fnode]
1463 return lookup_filenode_link
1466 return lookup_filenode_link
1464
1467
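
All of the *_collector and *_func helpers in this method follow the same "function generating function" pattern: the outer call binds per-file or per-run state into a closure once, and the cheap inner function is handed to revlog.group() as a callback. A minimal sketch of the pattern with an illustrative mapping:

    # Closure-factory pattern used by lookup_filenode_link_func and friends.
    def make_lookup(mapping):
        def lookup(node):
            return mapping[node]
        return lookup

    lookup = make_lookup({'deadbeef': 'cafebabe'})
    assert lookup('deadbeef') == 'cafebabe'
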
1465         # Now that we have all these utility functions to help out and
1468         # Now that we have all these utility functions to help out and
1466 # logically divide up the task, generate the group.
1469 # logically divide up the task, generate the group.
1467 def gengroup():
1470 def gengroup():
1468 # The set of changed files starts empty.
1471 # The set of changed files starts empty.
1469 changedfiles = {}
1472 changedfiles = {}
1470 # Create a changenode group generator that will call our functions
1473 # Create a changenode group generator that will call our functions
1471 # back to lookup the owning changenode and collect information.
1474 # back to lookup the owning changenode and collect information.
1472 group = cl.group(msng_cl_lst, identity,
1475 group = cl.group(msng_cl_lst, identity,
1473 manifest_and_file_collector(changedfiles))
1476 manifest_and_file_collector(changedfiles))
1474 for chnk in group:
1477 for chnk in group:
1475 yield chnk
1478 yield chnk
1476
1479
1477 # The list of manifests has been collected by the generator
1480 # The list of manifests has been collected by the generator
1478 # calling our functions back.
1481 # calling our functions back.
1479 prune_manifests()
1482 prune_manifests()
1480 msng_mnfst_lst = msng_mnfst_set.keys()
1483 msng_mnfst_lst = msng_mnfst_set.keys()
1481 # Sort the manifestnodes by revision number.
1484 # Sort the manifestnodes by revision number.
1482 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1485 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1483 # Create a generator for the manifestnodes that calls our lookup
1486 # Create a generator for the manifestnodes that calls our lookup
1484 # and data collection functions back.
1487 # and data collection functions back.
1485 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1488 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1486 filenode_collector(changedfiles))
1489 filenode_collector(changedfiles))
1487 for chnk in group:
1490 for chnk in group:
1488 yield chnk
1491 yield chnk
1489
1492
1490 # These are no longer needed, dereference and toss the memory for
1493 # These are no longer needed, dereference and toss the memory for
1491 # them.
1494 # them.
1492 msng_mnfst_lst = None
1495 msng_mnfst_lst = None
1493 msng_mnfst_set.clear()
1496 msng_mnfst_set.clear()
1494
1497
1495 changedfiles = changedfiles.keys()
1498 changedfiles = changedfiles.keys()
1496 changedfiles.sort()
1499 changedfiles.sort()
1497 # Go through all our files in order sorted by name.
1500 # Go through all our files in order sorted by name.
1498 for fname in changedfiles:
1501 for fname in changedfiles:
1499 filerevlog = self.file(fname)
1502 filerevlog = self.file(fname)
1500 # Toss out the filenodes that the recipient isn't really
1503 # Toss out the filenodes that the recipient isn't really
1501 # missing.
1504 # missing.
1502 if msng_filenode_set.has_key(fname):
1505 if msng_filenode_set.has_key(fname):
1503 prune_filenodes(fname, filerevlog)
1506 prune_filenodes(fname, filerevlog)
1504 msng_filenode_lst = msng_filenode_set[fname].keys()
1507 msng_filenode_lst = msng_filenode_set[fname].keys()
1505 else:
1508 else:
1506 msng_filenode_lst = []
1509 msng_filenode_lst = []
1507 # If any filenodes are left, generate the group for them,
1510 # If any filenodes are left, generate the group for them,
1508 # otherwise don't bother.
1511 # otherwise don't bother.
1509 if len(msng_filenode_lst) > 0:
1512 if len(msng_filenode_lst) > 0:
1510 yield changegroup.genchunk(fname)
1513 yield changegroup.genchunk(fname)
1511 # Sort the filenodes by their revision #
1514 # Sort the filenodes by their revision #
1512 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1515 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1513 # Create a group generator and only pass in a changenode
1516 # Create a group generator and only pass in a changenode
1514 # lookup function as we need to collect no information
1517 # lookup function as we need to collect no information
1515 # from filenodes.
1518 # from filenodes.
1516 group = filerevlog.group(msng_filenode_lst,
1519 group = filerevlog.group(msng_filenode_lst,
1517 lookup_filenode_link_func(fname))
1520 lookup_filenode_link_func(fname))
1518 for chnk in group:
1521 for chnk in group:
1519 yield chnk
1522 yield chnk
1520 if msng_filenode_set.has_key(fname):
1523 if msng_filenode_set.has_key(fname):
1521 # Don't need this anymore, toss it to free memory.
1524 # Don't need this anymore, toss it to free memory.
1522 del msng_filenode_set[fname]
1525 del msng_filenode_set[fname]
1523 # Signal that no more groups are left.
1526 # Signal that no more groups are left.
1524 yield changegroup.closechunk()
1527 yield changegroup.closechunk()
1525
1528
1526 if msng_cl_lst:
1529 if msng_cl_lst:
1527 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1530 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1528
1531
1529 return util.chunkbuffer(gengroup())
1532 return util.chunkbuffer(gengroup())
1530
1533
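
The stream gengroup() yields has a fixed shape: the changelog group, then the manifest group, then for each changed file a filename chunk followed by that file's group, and finally a close chunk. A hedged consumer-side sketch; the handle_* callbacks are placeholders, while chunkiter and getchunk are the same helpers addchangegroup uses below:

    # Walking the changegroup stream produced above (sketch).
    def consume(source, handle_changelog, handle_manifest, handle_file):
        for chunk in changegroup.chunkiter(source):      # changelog group
            handle_changelog(chunk)
        for chunk in changegroup.chunkiter(source):      # manifest group
            handle_manifest(chunk)
        while True:
            fname = changegroup.getchunk(source)
            if not fname:                                # empty chunk = done
                break
            for chunk in changegroup.chunkiter(source):  # this file's group
                handle_file(fname, chunk)
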
1531 def changegroup(self, basenodes, source):
1534 def changegroup(self, basenodes, source):
1532 """Generate a changegroup of all nodes that we have that a recipient
1535 """Generate a changegroup of all nodes that we have that a recipient
1533 doesn't.
1536 doesn't.
1534
1537
1535 This is much easier than the previous function as we can assume that
1538 This is much easier than the previous function as we can assume that
1536 the recipient has any changenode we aren't sending them."""
1539 the recipient has any changenode we aren't sending them."""
1537
1540
1538 self.hook('preoutgoing', throw=True, source=source)
1541 self.hook('preoutgoing', throw=True, source=source)
1539
1542
1540 cl = self.changelog
1543 cl = self.changelog
1541 nodes = cl.nodesbetween(basenodes, None)[0]
1544 nodes = cl.nodesbetween(basenodes, None)[0]
1542 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1545 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1543
1546
1544 def identity(x):
1547 def identity(x):
1545 return x
1548 return x
1546
1549
1547 def gennodelst(revlog):
1550 def gennodelst(revlog):
1548 for r in xrange(0, revlog.count()):
1551 for r in xrange(0, revlog.count()):
1549 n = revlog.node(r)
1552 n = revlog.node(r)
1550 if revlog.linkrev(n) in revset:
1553 if revlog.linkrev(n) in revset:
1551 yield n
1554 yield n
1552
1555
1553 def changed_file_collector(changedfileset):
1556 def changed_file_collector(changedfileset):
1554 def collect_changed_files(clnode):
1557 def collect_changed_files(clnode):
1555 c = cl.read(clnode)
1558 c = cl.read(clnode)
1556 for fname in c[3]:
1559 for fname in c[3]:
1557 changedfileset[fname] = 1
1560 changedfileset[fname] = 1
1558 return collect_changed_files
1561 return collect_changed_files
1559
1562
1560 def lookuprevlink_func(revlog):
1563 def lookuprevlink_func(revlog):
1561 def lookuprevlink(n):
1564 def lookuprevlink(n):
1562 return cl.node(revlog.linkrev(n))
1565 return cl.node(revlog.linkrev(n))
1563 return lookuprevlink
1566 return lookuprevlink
1564
1567
1565 def gengroup():
1568 def gengroup():
1566 # construct a list of all changed files
1569 # construct a list of all changed files
1567 changedfiles = {}
1570 changedfiles = {}
1568
1571
1569 for chnk in cl.group(nodes, identity,
1572 for chnk in cl.group(nodes, identity,
1570 changed_file_collector(changedfiles)):
1573 changed_file_collector(changedfiles)):
1571 yield chnk
1574 yield chnk
1572 changedfiles = changedfiles.keys()
1575 changedfiles = changedfiles.keys()
1573 changedfiles.sort()
1576 changedfiles.sort()
1574
1577
1575 mnfst = self.manifest
1578 mnfst = self.manifest
1576 nodeiter = gennodelst(mnfst)
1579 nodeiter = gennodelst(mnfst)
1577 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1580 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1578 yield chnk
1581 yield chnk
1579
1582
1580 for fname in changedfiles:
1583 for fname in changedfiles:
1581 filerevlog = self.file(fname)
1584 filerevlog = self.file(fname)
1582 nodeiter = gennodelst(filerevlog)
1585 nodeiter = gennodelst(filerevlog)
1583 nodeiter = list(nodeiter)
1586 nodeiter = list(nodeiter)
1584 if nodeiter:
1587 if nodeiter:
1585 yield changegroup.genchunk(fname)
1588 yield changegroup.genchunk(fname)
1586 lookup = lookuprevlink_func(filerevlog)
1589 lookup = lookuprevlink_func(filerevlog)
1587 for chnk in filerevlog.group(nodeiter, lookup):
1590 for chnk in filerevlog.group(nodeiter, lookup):
1588 yield chnk
1591 yield chnk
1589
1592
1590 yield changegroup.closechunk()
1593 yield changegroup.closechunk()
1591
1594
1592 if nodes:
1595 if nodes:
1593 self.hook('outgoing', node=hex(nodes[0]), source=source)
1596 self.hook('outgoing', node=hex(nodes[0]), source=source)
1594
1597
1595 return util.chunkbuffer(gengroup())
1598 return util.chunkbuffer(gengroup())
1596
1599
1597 def addchangegroup(self, source, srctype):
1600 def addchangegroup(self, source, srctype, url):
1598 """add changegroup to repo.
1601 """add changegroup to repo.
1599 returns number of heads modified or added + 1."""
1602 returns number of heads modified or added + 1."""
1600
1603
1601 def csmap(x):
1604 def csmap(x):
1602 self.ui.debug(_("add changeset %s\n") % short(x))
1605 self.ui.debug(_("add changeset %s\n") % short(x))
1603 return cl.count()
1606 return cl.count()
1604
1607
1605 def revmap(x):
1608 def revmap(x):
1606 return cl.rev(x)
1609 return cl.rev(x)
1607
1610
1608 if not source:
1611 if not source:
1609 return 0
1612 return 0
1610
1613
1611 self.hook('prechangegroup', throw=True, source=srctype)
1614 self.hook('prechangegroup', throw=True, source=srctype, url=url)
1612
1615
1613 changesets = files = revisions = 0
1616 changesets = files = revisions = 0
1614
1617
1615 tr = self.transaction()
1618 tr = self.transaction()
1616
1619
1617 # write changelog data to temp files so concurrent readers will not see
1620 # write changelog data to temp files so concurrent readers will not see
1618 # inconsistent view
1621 # inconsistent view
1619 cl = None
1622 cl = None
1620 try:
1623 try:
1621 cl = appendfile.appendchangelog(self.opener, self.changelog.version)
1624 cl = appendfile.appendchangelog(self.opener, self.changelog.version)
1622
1625
1623 oldheads = len(cl.heads())
1626 oldheads = len(cl.heads())
1624
1627
1625 # pull off the changeset group
1628 # pull off the changeset group
1626 self.ui.status(_("adding changesets\n"))
1629 self.ui.status(_("adding changesets\n"))
1627 cor = cl.count() - 1
1630 cor = cl.count() - 1
1628 chunkiter = changegroup.chunkiter(source)
1631 chunkiter = changegroup.chunkiter(source)
1629 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1632 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1630 raise util.Abort(_("received changelog group is empty"))
1633 raise util.Abort(_("received changelog group is empty"))
1631 cnr = cl.count() - 1
1634 cnr = cl.count() - 1
1632 changesets = cnr - cor
1635 changesets = cnr - cor
1633
1636
1634 # pull off the manifest group
1637 # pull off the manifest group
1635 self.ui.status(_("adding manifests\n"))
1638 self.ui.status(_("adding manifests\n"))
1636 chunkiter = changegroup.chunkiter(source)
1639 chunkiter = changegroup.chunkiter(source)
1637 # no need to check for empty manifest group here:
1640 # no need to check for empty manifest group here:
1638 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1641 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1639 # no new manifest will be created and the manifest group will
1642 # no new manifest will be created and the manifest group will
1640 # be empty during the pull
1643 # be empty during the pull
1641 self.manifest.addgroup(chunkiter, revmap, tr)
1644 self.manifest.addgroup(chunkiter, revmap, tr)
1642
1645
1643 # process the files
1646 # process the files
1644 self.ui.status(_("adding file changes\n"))
1647 self.ui.status(_("adding file changes\n"))
1645 while 1:
1648 while 1:
1646 f = changegroup.getchunk(source)
1649 f = changegroup.getchunk(source)
1647 if not f:
1650 if not f:
1648 break
1651 break
1649 self.ui.debug(_("adding %s revisions\n") % f)
1652 self.ui.debug(_("adding %s revisions\n") % f)
1650 fl = self.file(f)
1653 fl = self.file(f)
1651 o = fl.count()
1654 o = fl.count()
1652 chunkiter = changegroup.chunkiter(source)
1655 chunkiter = changegroup.chunkiter(source)
1653 if fl.addgroup(chunkiter, revmap, tr) is None:
1656 if fl.addgroup(chunkiter, revmap, tr) is None:
1654 raise util.Abort(_("received file revlog group is empty"))
1657 raise util.Abort(_("received file revlog group is empty"))
1655 revisions += fl.count() - o
1658 revisions += fl.count() - o
1656 files += 1
1659 files += 1
1657
1660
1658 cl.writedata()
1661 cl.writedata()
1659 finally:
1662 finally:
1660 if cl:
1663 if cl:
1661 cl.cleanup()
1664 cl.cleanup()
1662
1665
1663 # make changelog see real files again
1666 # make changelog see real files again
1664 self.changelog = changelog.changelog(self.opener, self.changelog.version)
1667 self.changelog = changelog.changelog(self.opener, self.changelog.version)
1665 self.changelog.checkinlinesize(tr)
1668 self.changelog.checkinlinesize(tr)
1666
1669
1667 newheads = len(self.changelog.heads())
1670 newheads = len(self.changelog.heads())
1668 heads = ""
1671 heads = ""
1669 if oldheads and newheads != oldheads:
1672 if oldheads and newheads != oldheads:
1670 heads = _(" (%+d heads)") % (newheads - oldheads)
1673 heads = _(" (%+d heads)") % (newheads - oldheads)
1671
1674
1672 self.ui.status(_("added %d changesets"
1675 self.ui.status(_("added %d changesets"
1673 " with %d changes to %d files%s\n")
1676 " with %d changes to %d files%s\n")
1674 % (changesets, revisions, files, heads))
1677 % (changesets, revisions, files, heads))
1675
1678
1676 if changesets > 0:
1679 if changesets > 0:
1677 self.hook('pretxnchangegroup', throw=True,
1680 self.hook('pretxnchangegroup', throw=True,
1678 node=hex(self.changelog.node(cor+1)), source=srctype)
1681 node=hex(self.changelog.node(cor+1)), source=srctype,
1682 url=url)
1679
1683
1680 tr.close()
1684 tr.close()
1681
1685
1682 if changesets > 0:
1686 if changesets > 0:
1683 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1687 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1684 source=srctype)
1688 source=srctype, url=url)
1685
1689
1686 for i in range(cor + 1, cnr + 1):
1690 for i in range(cor + 1, cnr + 1):
1687 self.hook("incoming", node=hex(self.changelog.node(i)),
1691 self.hook("incoming", node=hex(self.changelog.node(i)),
1688 source=srctype)
1692 source=srctype, url=url)
1689
1693
1690 return newheads - oldheads + 1
1694 return newheads - oldheads + 1
1691
1695
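
With this change the prechangegroup, pretxnchangegroup, changegroup and incoming hooks all receive the source URL as an extra keyword argument (shell hooks would typically see it as HG_URL). A hedged example of an in-process Python hook that uses it; the module and function names are illustrative, wired up with something like '[hooks] pretxnchangegroup.checkurl = python:mymod.checkurl' in an hgrc:

    # Hypothetical in-process hook; Python hooks are called roughly as
    # hook(ui, repo, hooktype, **kwargs), and 'url' is now among the kwargs.
    def checkurl(ui, repo, hooktype, node=None, source=None, url=None, **kwargs):
        ui.status("changegroup starting at %s, source=%s, url=%s\n"
                  % (node, source, url))
        return False   # for pre* hooks, a true return value aborts
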
1692 def update(self, node, allow=False, force=False, choose=None,
1696 def update(self, node, allow=False, force=False, choose=None,
1693 moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
1697 moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
1694 pl = self.dirstate.parents()
1698 pl = self.dirstate.parents()
1695 if not force and pl[1] != nullid:
1699 if not force and pl[1] != nullid:
1696 raise util.Abort(_("outstanding uncommitted merges"))
1700 raise util.Abort(_("outstanding uncommitted merges"))
1697
1701
1698 err = False
1702 err = False
1699
1703
1700 p1, p2 = pl[0], node
1704 p1, p2 = pl[0], node
1701 pa = self.changelog.ancestor(p1, p2)
1705 pa = self.changelog.ancestor(p1, p2)
1702 m1n = self.changelog.read(p1)[0]
1706 m1n = self.changelog.read(p1)[0]
1703 m2n = self.changelog.read(p2)[0]
1707 m2n = self.changelog.read(p2)[0]
1704 man = self.manifest.ancestor(m1n, m2n)
1708 man = self.manifest.ancestor(m1n, m2n)
1705 m1 = self.manifest.read(m1n)
1709 m1 = self.manifest.read(m1n)
1706 mf1 = self.manifest.readflags(m1n)
1710 mf1 = self.manifest.readflags(m1n)
1707 m2 = self.manifest.read(m2n).copy()
1711 m2 = self.manifest.read(m2n).copy()
1708 mf2 = self.manifest.readflags(m2n)
1712 mf2 = self.manifest.readflags(m2n)
1709 ma = self.manifest.read(man)
1713 ma = self.manifest.read(man)
1710 mfa = self.manifest.readflags(man)
1714 mfa = self.manifest.readflags(man)
1711
1715
1712 modified, added, removed, deleted, unknown = self.changes()
1716 modified, added, removed, deleted, unknown = self.changes()
1713
1717
1714 # is this a jump, or a merge? i.e. is there a linear path
1718 # is this a jump, or a merge? i.e. is there a linear path
1715 # from p1 to p2?
1719 # from p1 to p2?
1716 linear_path = (pa == p1 or pa == p2)
1720 linear_path = (pa == p1 or pa == p2)
1717
1721
1718 if allow and linear_path:
1722 if allow and linear_path:
1719 raise util.Abort(_("there is nothing to merge, just use "
1723 raise util.Abort(_("there is nothing to merge, just use "
1720 "'hg update' or look at 'hg heads'"))
1724 "'hg update' or look at 'hg heads'"))
1721 if allow and not forcemerge:
1725 if allow and not forcemerge:
1722 if modified or added or removed:
1726 if modified or added or removed:
1723 raise util.Abort(_("outstanding uncommitted changes"))
1727 raise util.Abort(_("outstanding uncommitted changes"))
1724
1728
1725 if not forcemerge and not force:
1729 if not forcemerge and not force:
1726 for f in unknown:
1730 for f in unknown:
1727 if f in m2:
1731 if f in m2:
1728 t1 = self.wread(f)
1732 t1 = self.wread(f)
1729 t2 = self.file(f).read(m2[f])
1733 t2 = self.file(f).read(m2[f])
1730 if cmp(t1, t2) != 0:
1734 if cmp(t1, t2) != 0:
1731 raise util.Abort(_("'%s' already exists in the working"
1735 raise util.Abort(_("'%s' already exists in the working"
1732 " dir and differs from remote") % f)
1736 " dir and differs from remote") % f)
1733
1737
1734 # resolve the manifest to determine which files
1738 # resolve the manifest to determine which files
1735 # we care about merging
1739 # we care about merging
1736 self.ui.note(_("resolving manifests\n"))
1740 self.ui.note(_("resolving manifests\n"))
1737 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1741 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1738 (force, allow, moddirstate, linear_path))
1742 (force, allow, moddirstate, linear_path))
1739 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1743 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1740 (short(man), short(m1n), short(m2n)))
1744 (short(man), short(m1n), short(m2n)))
1741
1745
1742 merge = {}
1746 merge = {}
1743 get = {}
1747 get = {}
1744 remove = []
1748 remove = []
1745
1749
1746 # construct a working dir manifest
1750 # construct a working dir manifest
1747 mw = m1.copy()
1751 mw = m1.copy()
1748 mfw = mf1.copy()
1752 mfw = mf1.copy()
1749 umap = dict.fromkeys(unknown)
1753 umap = dict.fromkeys(unknown)
1750
1754
1751 for f in added + modified + unknown:
1755 for f in added + modified + unknown:
1752 mw[f] = ""
1756 mw[f] = ""
1753 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1757 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1754
1758
1755 if moddirstate and not wlock:
1759 if moddirstate and not wlock:
1756 wlock = self.wlock()
1760 wlock = self.wlock()
1757
1761
1758 for f in deleted + removed:
1762 for f in deleted + removed:
1759 if f in mw:
1763 if f in mw:
1760 del mw[f]
1764 del mw[f]
1761
1765
1762 # If we're jumping between revisions (as opposed to merging),
1766 # If we're jumping between revisions (as opposed to merging),
1763 # and if neither the working directory nor the target rev has
1767 # and if neither the working directory nor the target rev has
1764 # the file, then we need to remove it from the dirstate, to
1768 # the file, then we need to remove it from the dirstate, to
1765 # prevent the dirstate from listing the file when it is no
1769 # prevent the dirstate from listing the file when it is no
1766 # longer in the manifest.
1770 # longer in the manifest.
1767 if moddirstate and linear_path and f not in m2:
1771 if moddirstate and linear_path and f not in m2:
1768 self.dirstate.forget((f,))
1772 self.dirstate.forget((f,))
1769
1773
1770 # Compare manifests
1774 # Compare manifests
1771 for f, n in mw.iteritems():
1775 for f, n in mw.iteritems():
1772 if choose and not choose(f):
1776 if choose and not choose(f):
1773 continue
1777 continue
1774 if f in m2:
1778 if f in m2:
1775 s = 0
1779 s = 0
1776
1780
1777 # is the wfile new since m1, and match m2?
1781 # is the wfile new since m1, and match m2?
1778 if f not in m1:
1782 if f not in m1:
1779 t1 = self.wread(f)
1783 t1 = self.wread(f)
1780 t2 = self.file(f).read(m2[f])
1784 t2 = self.file(f).read(m2[f])
1781 if cmp(t1, t2) == 0:
1785 if cmp(t1, t2) == 0:
1782 n = m2[f]
1786 n = m2[f]
1783 del t1, t2
1787 del t1, t2
1784
1788
1785 # are files different?
1789 # are files different?
1786 if n != m2[f]:
1790 if n != m2[f]:
1787 a = ma.get(f, nullid)
1791 a = ma.get(f, nullid)
1788 # are both different from the ancestor?
1792 # are both different from the ancestor?
1789 if n != a and m2[f] != a:
1793 if n != a and m2[f] != a:
1790 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1794 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1791 # merge executable bits
1795 # merge executable bits
1792 # "if we changed or they changed, change in merge"
1796 # "if we changed or they changed, change in merge"
1793 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1797 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1794 mode = ((a^b) | (a^c)) ^ a
1798 mode = ((a^b) | (a^c)) ^ a
1795 merge[f] = (m1.get(f, nullid), m2[f], mode)
1799 merge[f] = (m1.get(f, nullid), m2[f], mode)
1796 s = 1
1800 s = 1
1797 # are we clobbering?
1801 # are we clobbering?
1798 # is remote's version newer?
1802 # is remote's version newer?
1799 # or are we going back in time?
1803 # or are we going back in time?
1800 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1804 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1801 self.ui.debug(_(" remote %s is newer, get\n") % f)
1805 self.ui.debug(_(" remote %s is newer, get\n") % f)
1802 get[f] = m2[f]
1806 get[f] = m2[f]
1803 s = 1
1807 s = 1
1804 elif f in umap or f in added:
1808 elif f in umap or f in added:
1805 # this unknown file is the same as the checkout
1809 # this unknown file is the same as the checkout
1806 # we need to reset the dirstate if the file was added
1810 # we need to reset the dirstate if the file was added
1807 get[f] = m2[f]
1811 get[f] = m2[f]
1808
1812
1809 if not s and mfw[f] != mf2[f]:
1813 if not s and mfw[f] != mf2[f]:
1810 if force:
1814 if force:
1811 self.ui.debug(_(" updating permissions for %s\n") % f)
1815 self.ui.debug(_(" updating permissions for %s\n") % f)
1812 util.set_exec(self.wjoin(f), mf2[f])
1816 util.set_exec(self.wjoin(f), mf2[f])
1813 else:
1817 else:
1814 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1818 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1815 mode = ((a^b) | (a^c)) ^ a
1819 mode = ((a^b) | (a^c)) ^ a
1816 if mode != b:
1820 if mode != b:
1817 self.ui.debug(_(" updating permissions for %s\n")
1821 self.ui.debug(_(" updating permissions for %s\n")
1818 % f)
1822 % f)
1819 util.set_exec(self.wjoin(f), mode)
1823 util.set_exec(self.wjoin(f), mode)
1820 del m2[f]
1824 del m2[f]
1821 elif f in ma:
1825 elif f in ma:
1822 if n != ma[f]:
1826 if n != ma[f]:
1823 r = _("d")
1827 r = _("d")
1824 if not force and (linear_path or allow):
1828 if not force and (linear_path or allow):
1825 r = self.ui.prompt(
1829 r = self.ui.prompt(
1826 (_(" local changed %s which remote deleted\n") % f) +
1830 (_(" local changed %s which remote deleted\n") % f) +
1827 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1831 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1828 if r == _("d"):
1832 if r == _("d"):
1829 remove.append(f)
1833 remove.append(f)
1830 else:
1834 else:
1831 self.ui.debug(_("other deleted %s\n") % f)
1835 self.ui.debug(_("other deleted %s\n") % f)
1832 remove.append(f) # other deleted it
1836 remove.append(f) # other deleted it
1833 else:
1837 else:
1834 # file is created on branch or in working directory
1838 # file is created on branch or in working directory
1835 if force and f not in umap:
1839 if force and f not in umap:
1836 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1840 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1837 remove.append(f)
1841 remove.append(f)
1838 elif n == m1.get(f, nullid): # same as parent
1842 elif n == m1.get(f, nullid): # same as parent
1839 if p2 == pa: # going backwards?
1843 if p2 == pa: # going backwards?
1840 self.ui.debug(_("remote deleted %s\n") % f)
1844 self.ui.debug(_("remote deleted %s\n") % f)
1841 remove.append(f)
1845 remove.append(f)
1842 else:
1846 else:
1843 self.ui.debug(_("local modified %s, keeping\n") % f)
1847 self.ui.debug(_("local modified %s, keeping\n") % f)
1844 else:
1848 else:
1845 self.ui.debug(_("working dir created %s, keeping\n") % f)
1849 self.ui.debug(_("working dir created %s, keeping\n") % f)
1846
1850
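
The exec-bit rule mode = ((a^b) | (a^c)) ^ a used twice above ("if we changed or they changed, change in merge") can be checked exhaustively; a is the ancestor's flag, b the working copy's, c the remote's:

    # Exhaustive check: the result keeps the ancestor's flag unless either
    # side changed it, in which case the changed value wins (when both
    # sides change a single bit they necessarily agree).
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                mode = ((a ^ b) | (a ^ c)) ^ a
                if b != a:
                    expected = b
                else:
                    expected = c
                assert mode == expected
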
1847 for f, n in m2.iteritems():
1851 for f, n in m2.iteritems():
1848 if choose and not choose(f):
1852 if choose and not choose(f):
1849 continue
1853 continue
1850 if f[0] == "/":
1854 if f[0] == "/":
1851 continue
1855 continue
1852 if f in ma and n != ma[f]:
1856 if f in ma and n != ma[f]:
1853 r = _("k")
1857 r = _("k")
1854 if not force and (linear_path or allow):
1858 if not force and (linear_path or allow):
1855 r = self.ui.prompt(
1859 r = self.ui.prompt(
1856 (_("remote changed %s which local deleted\n") % f) +
1860 (_("remote changed %s which local deleted\n") % f) +
1857 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1861 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1858 if r == _("k"):
1862 if r == _("k"):
1859 get[f] = n
1863 get[f] = n
1860 elif f not in ma:
1864 elif f not in ma:
1861 self.ui.debug(_("remote created %s\n") % f)
1865 self.ui.debug(_("remote created %s\n") % f)
1862 get[f] = n
1866 get[f] = n
1863 else:
1867 else:
1864 if force or p2 == pa: # going backwards?
1868 if force or p2 == pa: # going backwards?
1865 self.ui.debug(_("local deleted %s, recreating\n") % f)
1869 self.ui.debug(_("local deleted %s, recreating\n") % f)
1866 get[f] = n
1870 get[f] = n
1867 else:
1871 else:
1868 self.ui.debug(_("local deleted %s\n") % f)
1872 self.ui.debug(_("local deleted %s\n") % f)
1869
1873
1870 del mw, m1, m2, ma
1874 del mw, m1, m2, ma
1871
1875
1872 if force:
1876 if force:
1873 for f in merge:
1877 for f in merge:
1874 get[f] = merge[f][1]
1878 get[f] = merge[f][1]
1875 merge = {}
1879 merge = {}
1876
1880
1877 if linear_path or force:
1881 if linear_path or force:
1878 # we don't need to do any magic, just jump to the new rev
1882 # we don't need to do any magic, just jump to the new rev
1879 branch_merge = False
1883 branch_merge = False
1880 p1, p2 = p2, nullid
1884 p1, p2 = p2, nullid
1881 else:
1885 else:
1882 if not allow:
1886 if not allow:
1883 self.ui.status(_("this update spans a branch"
1887 self.ui.status(_("this update spans a branch"
1884 " affecting the following files:\n"))
1888 " affecting the following files:\n"))
1885 fl = merge.keys() + get.keys()
1889 fl = merge.keys() + get.keys()
1886 fl.sort()
1890 fl.sort()
1887 for f in fl:
1891 for f in fl:
1888 cf = ""
1892 cf = ""
1889 if f in merge:
1893 if f in merge:
1890 cf = _(" (resolve)")
1894 cf = _(" (resolve)")
1891 self.ui.status(" %s%s\n" % (f, cf))
1895 self.ui.status(" %s%s\n" % (f, cf))
1892 self.ui.warn(_("aborting update spanning branches!\n"))
1896 self.ui.warn(_("aborting update spanning branches!\n"))
1893 self.ui.status(_("(use 'hg merge' to merge across branches"
1897 self.ui.status(_("(use 'hg merge' to merge across branches"
1894 " or 'hg update -C' to lose changes)\n"))
1898 " or 'hg update -C' to lose changes)\n"))
1895 return 1
1899 return 1
1896 branch_merge = True
1900 branch_merge = True
1897
1901
1898 xp1 = hex(p1)
1902 xp1 = hex(p1)
1899 xp2 = hex(p2)
1903 xp2 = hex(p2)
1900 if p2 == nullid: xxp2 = ''
1904 if p2 == nullid: xxp2 = ''
1901 else: xxp2 = xp2
1905 else: xxp2 = xp2
1902
1906
1903 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1907 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1904
1908
1905 # get the files we don't need to change
1909 # get the files we don't need to change
1906 files = get.keys()
1910 files = get.keys()
1907 files.sort()
1911 files.sort()
1908 for f in files:
1912 for f in files:
1909 if f[0] == "/":
1913 if f[0] == "/":
1910 continue
1914 continue
1911 self.ui.note(_("getting %s\n") % f)
1915 self.ui.note(_("getting %s\n") % f)
1912 t = self.file(f).read(get[f])
1916 t = self.file(f).read(get[f])
1913 self.wwrite(f, t)
1917 self.wwrite(f, t)
1914 util.set_exec(self.wjoin(f), mf2[f])
1918 util.set_exec(self.wjoin(f), mf2[f])
1915 if moddirstate:
1919 if moddirstate:
1916 if branch_merge:
1920 if branch_merge:
1917 self.dirstate.update([f], 'n', st_mtime=-1)
1921 self.dirstate.update([f], 'n', st_mtime=-1)
1918 else:
1922 else:
1919 self.dirstate.update([f], 'n')
1923 self.dirstate.update([f], 'n')
1920
1924
1921 # merge the tricky bits
1925 # merge the tricky bits
1922 failedmerge = []
1926 failedmerge = []
1923 files = merge.keys()
1927 files = merge.keys()
1924 files.sort()
1928 files.sort()
1925 for f in files:
1929 for f in files:
1926 self.ui.status(_("merging %s\n") % f)
1930 self.ui.status(_("merging %s\n") % f)
1927 my, other, flag = merge[f]
1931 my, other, flag = merge[f]
1928 ret = self.merge3(f, my, other, xp1, xp2)
1932 ret = self.merge3(f, my, other, xp1, xp2)
1929 if ret:
1933 if ret:
1930 err = True
1934 err = True
1931 failedmerge.append(f)
1935 failedmerge.append(f)
1932 util.set_exec(self.wjoin(f), flag)
1936 util.set_exec(self.wjoin(f), flag)
1933 if moddirstate:
1937 if moddirstate:
1934 if branch_merge:
1938 if branch_merge:
1935 # We've done a branch merge, mark this file as merged
1939 # We've done a branch merge, mark this file as merged
1936 # so that we properly record the merger later
1940 # so that we properly record the merger later
1937 self.dirstate.update([f], 'm')
1941 self.dirstate.update([f], 'm')
1938 else:
1942 else:
1939 # We've update-merged a locally modified file, so
1943 # We've update-merged a locally modified file, so
1940 # we set the dirstate to emulate a normal checkout
1944 # we set the dirstate to emulate a normal checkout
1941 # of that file some time in the past. Thus our
1945 # of that file some time in the past. Thus our
1942 # merge will appear as a normal local file
1946 # merge will appear as a normal local file
1943 # modification.
1947 # modification.
1944 f_len = len(self.file(f).read(other))
1948 f_len = len(self.file(f).read(other))
1945 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1949 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1946
1950
1947 remove.sort()
1951 remove.sort()
1948 for f in remove:
1952 for f in remove:
1949 self.ui.note(_("removing %s\n") % f)
1953 self.ui.note(_("removing %s\n") % f)
1950 util.audit_path(f)
1954 util.audit_path(f)
1951 try:
1955 try:
1952 util.unlink(self.wjoin(f))
1956 util.unlink(self.wjoin(f))
1953 except OSError, inst:
1957 except OSError, inst:
1954 if inst.errno != errno.ENOENT:
1958 if inst.errno != errno.ENOENT:
1955 self.ui.warn(_("update failed to remove %s: %s!\n") %
1959 self.ui.warn(_("update failed to remove %s: %s!\n") %
1956 (f, inst.strerror))
1960 (f, inst.strerror))
1957 if moddirstate:
1961 if moddirstate:
1958 if branch_merge:
1962 if branch_merge:
1959 self.dirstate.update(remove, 'r')
1963 self.dirstate.update(remove, 'r')
1960 else:
1964 else:
1961 self.dirstate.forget(remove)
1965 self.dirstate.forget(remove)
1962
1966
1963 if moddirstate:
1967 if moddirstate:
1964 self.dirstate.setparents(p1, p2)
1968 self.dirstate.setparents(p1, p2)
1965
1969
1966 if show_stats:
1970 if show_stats:
1967 stats = ((len(get), _("updated")),
1971 stats = ((len(get), _("updated")),
1968 (len(merge) - len(failedmerge), _("merged")),
1972 (len(merge) - len(failedmerge), _("merged")),
1969 (len(remove), _("removed")),
1973 (len(remove), _("removed")),
1970 (len(failedmerge), _("unresolved")))
1974 (len(failedmerge), _("unresolved")))
1971 note = ", ".join([_("%d files %s") % s for s in stats])
1975 note = ", ".join([_("%d files %s") % s for s in stats])
1972 self.ui.status("%s\n" % note)
1976 self.ui.status("%s\n" % note)
1973 if moddirstate:
1977 if moddirstate:
1974 if branch_merge:
1978 if branch_merge:
1975 if failedmerge:
1979 if failedmerge:
1976 self.ui.status(_("There are unresolved merges,"
1980 self.ui.status(_("There are unresolved merges,"
1977 " you can redo the full merge using:\n"
1981 " you can redo the full merge using:\n"
1978 " hg update -C %s\n"
1982 " hg update -C %s\n"
1979 " hg merge %s\n"
1983 " hg merge %s\n"
1980 % (self.changelog.rev(p1),
1984 % (self.changelog.rev(p1),
1981 self.changelog.rev(p2))))
1985 self.changelog.rev(p2))))
1982 else:
1986 else:
1983 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1987 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1984 elif failedmerge:
1988 elif failedmerge:
1985 self.ui.status(_("There are unresolved merges with"
1989 self.ui.status(_("There are unresolved merges with"
1986 " locally modified files.\n"))
1990 " locally modified files.\n"))
1987
1991
1988 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1992 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1989 return err
1993 return err
1990
1994
1991 def merge3(self, fn, my, other, p1, p2):
1995 def merge3(self, fn, my, other, p1, p2):
1992 """perform a 3-way merge in the working directory"""
1996 """perform a 3-way merge in the working directory"""
1993
1997
1994 def temp(prefix, node):
1998 def temp(prefix, node):
1995 pre = "%s~%s." % (os.path.basename(fn), prefix)
1999 pre = "%s~%s." % (os.path.basename(fn), prefix)
1996 (fd, name) = tempfile.mkstemp(prefix=pre)
2000 (fd, name) = tempfile.mkstemp(prefix=pre)
1997 f = os.fdopen(fd, "wb")
2001 f = os.fdopen(fd, "wb")
1998 self.wwrite(fn, fl.read(node), f)
2002 self.wwrite(fn, fl.read(node), f)
1999 f.close()
2003 f.close()
2000 return name
2004 return name
2001
2005
2002 fl = self.file(fn)
2006 fl = self.file(fn)
2003 base = fl.ancestor(my, other)
2007 base = fl.ancestor(my, other)
2004 a = self.wjoin(fn)
2008 a = self.wjoin(fn)
2005 b = temp("base", base)
2009 b = temp("base", base)
2006 c = temp("other", other)
2010 c = temp("other", other)
2007
2011
2008 self.ui.note(_("resolving %s\n") % fn)
2012 self.ui.note(_("resolving %s\n") % fn)
2009 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
2013 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
2010 (fn, short(my), short(other), short(base)))
2014 (fn, short(my), short(other), short(base)))
2011
2015
2012 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2016 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2013 or "hgmerge")
2017 or "hgmerge")
2014 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2018 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2015 environ={'HG_FILE': fn,
2019 environ={'HG_FILE': fn,
2016 'HG_MY_NODE': p1,
2020 'HG_MY_NODE': p1,
2017 'HG_OTHER_NODE': p2,
2021 'HG_OTHER_NODE': p2,
2018 'HG_FILE_MY_NODE': hex(my),
2022 'HG_FILE_MY_NODE': hex(my),
2019 'HG_FILE_OTHER_NODE': hex(other),
2023 'HG_FILE_OTHER_NODE': hex(other),
2020 'HG_FILE_BASE_NODE': hex(base)})
2024 'HG_FILE_BASE_NODE': hex(base)})
2021 if r:
2025 if r:
2022 self.ui.warn(_("merging %s failed!\n") % fn)
2026 self.ui.warn(_("merging %s failed!\n") % fn)
2023
2027
2024 os.unlink(b)
2028 os.unlink(b)
2025 os.unlink(c)
2029 os.unlink(c)
2026 return r
2030 return r
2027
2031
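
merge3 spells out the external merge-tool contract: write the base and the other revision to temporary files, run the tool (HGMERGE, ui.merge, or the default hgmerge) as '<cmd> <local> <base> <other>' with the HG_* variables in the environment, and treat a non-zero exit status as an unresolved merge. A condensed, hedged sketch outside Mercurial's own helpers; run_merge_tool is illustrative and base_data/other_data are byte strings:

    import os, subprocess, tempfile

    def run_merge_tool(local_path, base_data, other_data, cmd="hgmerge"):
        def tmp(data, suffix):
            fd, name = tempfile.mkstemp(suffix=suffix)
            os.write(fd, data)
            os.close(fd)
            return name
        b = tmp(base_data, ".base")
        c = tmp(other_data, ".other")
        try:
            # non-zero exit status means conflicts were left behind
            return subprocess.call([cmd, local_path, b, c])
        finally:
            os.unlink(b)
            os.unlink(c)
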
2028 def verify(self):
2032 def verify(self):
2029 filelinkrevs = {}
2033 filelinkrevs = {}
2030 filenodes = {}
2034 filenodes = {}
2031 changesets = revisions = files = 0
2035 changesets = revisions = files = 0
2032 errors = [0]
2036 errors = [0]
2033 warnings = [0]
2037 warnings = [0]
2034 neededmanifests = {}
2038 neededmanifests = {}
2035
2039
2036 def err(msg):
2040 def err(msg):
2037 self.ui.warn(msg + "\n")
2041 self.ui.warn(msg + "\n")
2038 errors[0] += 1
2042 errors[0] += 1
2039
2043
2040 def warn(msg):
2044 def warn(msg):
2041 self.ui.warn(msg + "\n")
2045 self.ui.warn(msg + "\n")
2042 warnings[0] += 1
2046 warnings[0] += 1
2043
2047
2044 def checksize(obj, name):
2048 def checksize(obj, name):
2045 d = obj.checksize()
2049 d = obj.checksize()
2046 if d[0]:
2050 if d[0]:
2047 err(_("%s data length off by %d bytes") % (name, d[0]))
2051 err(_("%s data length off by %d bytes") % (name, d[0]))
2048 if d[1]:
2052 if d[1]:
2049 err(_("%s index contains %d extra bytes") % (name, d[1]))
2053 err(_("%s index contains %d extra bytes") % (name, d[1]))
2050
2054
2051 def checkversion(obj, name):
2055 def checkversion(obj, name):
2052 if obj.version != revlog.REVLOGV0:
2056 if obj.version != revlog.REVLOGV0:
2053 if not revlogv1:
2057 if not revlogv1:
2054 warn(_("warning: `%s' uses revlog format 1") % name)
2058 warn(_("warning: `%s' uses revlog format 1") % name)
2055 elif revlogv1:
2059 elif revlogv1:
2056 warn(_("warning: `%s' uses revlog format 0") % name)
2060 warn(_("warning: `%s' uses revlog format 0") % name)
2057
2061
2058 revlogv1 = self.revlogversion != revlog.REVLOGV0
2062 revlogv1 = self.revlogversion != revlog.REVLOGV0
2059 if self.ui.verbose or revlogv1 != self.revlogv1:
2063 if self.ui.verbose or revlogv1 != self.revlogv1:
2060 self.ui.status(_("repository uses revlog format %d\n") %
2064 self.ui.status(_("repository uses revlog format %d\n") %
2061 (revlogv1 and 1 or 0))
2065 (revlogv1 and 1 or 0))
2062
2066
2063 seen = {}
2067 seen = {}
2064 self.ui.status(_("checking changesets\n"))
2068 self.ui.status(_("checking changesets\n"))
2065 checksize(self.changelog, "changelog")
2069 checksize(self.changelog, "changelog")
2066
2070
2067 for i in range(self.changelog.count()):
2071 for i in range(self.changelog.count()):
2068 changesets += 1
2072 changesets += 1
2069 n = self.changelog.node(i)
2073 n = self.changelog.node(i)
2070 l = self.changelog.linkrev(n)
2074 l = self.changelog.linkrev(n)
2071 if l != i:
2075 if l != i:
2072 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2076 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2073 if n in seen:
2077 if n in seen:
2074 err(_("duplicate changeset at revision %d") % i)
2078 err(_("duplicate changeset at revision %d") % i)
2075 seen[n] = 1
2079 seen[n] = 1
2076
2080
2077 for p in self.changelog.parents(n):
2081 for p in self.changelog.parents(n):
2078 if p not in self.changelog.nodemap:
2082 if p not in self.changelog.nodemap:
2079 err(_("changeset %s has unknown parent %s") %
2083 err(_("changeset %s has unknown parent %s") %
2080 (short(n), short(p)))
2084 (short(n), short(p)))
2081 try:
2085 try:
2082 changes = self.changelog.read(n)
2086 changes = self.changelog.read(n)
2083 except KeyboardInterrupt:
2087 except KeyboardInterrupt:
2084 self.ui.warn(_("interrupted"))
2088 self.ui.warn(_("interrupted"))
2085 raise
2089 raise
2086 except Exception, inst:
2090 except Exception, inst:
2087 err(_("unpacking changeset %s: %s") % (short(n), inst))
2091 err(_("unpacking changeset %s: %s") % (short(n), inst))
2088 continue
2092 continue
2089
2093
2090 neededmanifests[changes[0]] = n
2094 neededmanifests[changes[0]] = n
2091
2095
2092 for f in changes[3]:
2096 for f in changes[3]:
2093 filelinkrevs.setdefault(f, []).append(i)
2097 filelinkrevs.setdefault(f, []).append(i)
2094
2098
2095 seen = {}
2099 seen = {}
2096 self.ui.status(_("checking manifests\n"))
2100 self.ui.status(_("checking manifests\n"))
2097 checkversion(self.manifest, "manifest")
2101 checkversion(self.manifest, "manifest")
2098 checksize(self.manifest, "manifest")
2102 checksize(self.manifest, "manifest")
2099
2103
2100 for i in range(self.manifest.count()):
2104 for i in range(self.manifest.count()):
2101 n = self.manifest.node(i)
2105 n = self.manifest.node(i)
2102 l = self.manifest.linkrev(n)
2106 l = self.manifest.linkrev(n)
2103
2107
2104 if l < 0 or l >= self.changelog.count():
2108 if l < 0 or l >= self.changelog.count():
2105 err(_("bad manifest link (%d) at revision %d") % (l, i))
2109 err(_("bad manifest link (%d) at revision %d") % (l, i))
2106
2110
2107 if n in neededmanifests:
2111 if n in neededmanifests:
2108 del neededmanifests[n]
2112 del neededmanifests[n]
2109
2113
2110 if n in seen:
2114 if n in seen:
2111 err(_("duplicate manifest at revision %d") % i)
2115 err(_("duplicate manifest at revision %d") % i)
2112
2116
2113 seen[n] = 1
2117 seen[n] = 1
2114
2118
2115 for p in self.manifest.parents(n):
2119 for p in self.manifest.parents(n):
2116 if p not in self.manifest.nodemap:
2120 if p not in self.manifest.nodemap:
2117 err(_("manifest %s has unknown parent %s") %
2121 err(_("manifest %s has unknown parent %s") %
2118 (short(n), short(p)))
2122 (short(n), short(p)))
2119
2123
2120 try:
2124 try:
2121 delta = mdiff.patchtext(self.manifest.delta(n))
2125 delta = mdiff.patchtext(self.manifest.delta(n))
2122 except KeyboardInterrupt:
2126 except KeyboardInterrupt:
2123 self.ui.warn(_("interrupted"))
2127 self.ui.warn(_("interrupted"))
2124 raise
2128 raise
2125 except Exception, inst:
2129 except Exception, inst:
2126 err(_("unpacking manifest %s: %s") % (short(n), inst))
2130 err(_("unpacking manifest %s: %s") % (short(n), inst))
2127 continue
2131 continue
2128
2132
2129 try:
2133 try:
2130 ff = [ l.split('\0') for l in delta.splitlines() ]
2134 ff = [ l.split('\0') for l in delta.splitlines() ]
2131 for f, fn in ff:
2135 for f, fn in ff:
2132 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2136 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2133 except (ValueError, TypeError), inst:
2137 except (ValueError, TypeError), inst:
2134 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2138 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2135
2139
2136 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2140 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2137
2141
2138 for m, c in neededmanifests.items():
2142 for m, c in neededmanifests.items():
2139 err(_("Changeset %s refers to unknown manifest %s") %
2143 err(_("Changeset %s refers to unknown manifest %s") %
2140 (short(m), short(c)))
2144 (short(m), short(c)))
2141 del neededmanifests
2145 del neededmanifests
2142
2146
2143 for f in filenodes:
2147 for f in filenodes:
2144 if f not in filelinkrevs:
2148 if f not in filelinkrevs:
2145 err(_("file %s in manifest but not in changesets") % f)
2149 err(_("file %s in manifest but not in changesets") % f)
2146
2150
2147 for f in filelinkrevs:
2151 for f in filelinkrevs:
2148 if f not in filenodes:
2152 if f not in filenodes:
2149 err(_("file %s in changeset but not in manifest") % f)
2153 err(_("file %s in changeset but not in manifest") % f)
2150
2154
2151 self.ui.status(_("checking files\n"))
2155 self.ui.status(_("checking files\n"))
2152 ff = filenodes.keys()
2156 ff = filenodes.keys()
2153 ff.sort()
2157 ff.sort()
2154 for f in ff:
2158 for f in ff:
2155 if f == "/dev/null":
2159 if f == "/dev/null":
2156 continue
2160 continue
2157 files += 1
2161 files += 1
2158 if not f:
2162 if not f:
2159 err(_("file without name in manifest %s") % short(n))
2163 err(_("file without name in manifest %s") % short(n))
2160 continue
2164 continue
2161 fl = self.file(f)
2165 fl = self.file(f)
2162 checkversion(fl, f)
2166 checkversion(fl, f)
2163 checksize(fl, f)
2167 checksize(fl, f)
2164
2168
2165 nodes = {nullid: 1}
2169 nodes = {nullid: 1}
2166 seen = {}
2170 seen = {}
2167 for i in range(fl.count()):
2171 for i in range(fl.count()):
2168 revisions += 1
2172 revisions += 1
2169 n = fl.node(i)
2173 n = fl.node(i)
2170
2174
2171 if n in seen:
2175 if n in seen:
2172 err(_("%s: duplicate revision %d") % (f, i))
2176 err(_("%s: duplicate revision %d") % (f, i))
2173 if n not in filenodes[f]:
2177 if n not in filenodes[f]:
2174 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2178 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2175 else:
2179 else:
2176 del filenodes[f][n]
2180 del filenodes[f][n]
2177
2181
2178 flr = fl.linkrev(n)
2182 flr = fl.linkrev(n)
2179 if flr not in filelinkrevs.get(f, []):
2183 if flr not in filelinkrevs.get(f, []):
2180 err(_("%s:%s points to unexpected changeset %d")
2184 err(_("%s:%s points to unexpected changeset %d")
2181 % (f, short(n), flr))
2185 % (f, short(n), flr))
2182 else:
2186 else:
2183 filelinkrevs[f].remove(flr)
2187 filelinkrevs[f].remove(flr)
2184
2188
2185 # verify contents
2189 # verify contents
2186 try:
2190 try:
2187 t = fl.read(n)
2191 t = fl.read(n)
2188 except KeyboardInterrupt:
2192 except KeyboardInterrupt:
2189 self.ui.warn(_("interrupted"))
2193 self.ui.warn(_("interrupted"))
2190 raise
2194 raise
2191 except Exception, inst:
2195 except Exception, inst:
2192 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2196 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2193
2197
2194 # verify parents
2198 # verify parents
2195 (p1, p2) = fl.parents(n)
2199 (p1, p2) = fl.parents(n)
2196 if p1 not in nodes:
2200 if p1 not in nodes:
2197 err(_("file %s:%s unknown parent 1 %s") %
2201 err(_("file %s:%s unknown parent 1 %s") %
2198 (f, short(n), short(p1)))
2202 (f, short(n), short(p1)))
2199 if p2 not in nodes:
2203 if p2 not in nodes:
2200 err(_("file %s:%s unknown parent 2 %s") %
2204 err(_("file %s:%s unknown parent 2 %s") %
2201                     (f, short(n), short(p2)))
2205                     (f, short(n), short(p2)))
2202 nodes[n] = 1
2206 nodes[n] = 1
2203
2207
2204 # cross-check
2208 # cross-check
2205 for node in filenodes[f]:
2209 for node in filenodes[f]:
2206 err(_("node %s in manifests not in %s") % (hex(node), f))
2210 err(_("node %s in manifests not in %s") % (hex(node), f))
2207
2211
2208 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2212 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2209 (files, changesets, revisions))
2213 (files, changesets, revisions))
2210
2214
2211 if warnings[0]:
2215 if warnings[0]:
2212 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2216 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2213 if errors[0]:
2217 if errors[0]:
2214 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2218 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2215 return 1
2219 return 1
2216
2220
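
The heart of the per-file pass in verify() is a linkrev cross-check: every file revision's linkrev must be one of the changesets that list that file in their files entry. A distilled sketch of just that check, with err and the revlog-like fl as stand-ins:

    # Restatement of the filelog linkrev cross-check from verify() (sketch).
    def crosscheck_filelog(f, fl, filelinkrevs, err):
        expected = list(filelinkrevs.get(f, []))
        for i in range(fl.count()):
            flr = fl.linkrev(fl.node(i))
            if flr in expected:
                expected.remove(flr)
            else:
                err("%s: revision %d points to unexpected changeset %d"
                    % (f, i, flr))
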
2217 def stream_in(self, remote):
2221 def stream_in(self, remote):
2218 fp = remote.stream_out()
2222 fp = remote.stream_out()
2219 resp = int(fp.readline())
2223 resp = int(fp.readline())
2220 if resp != 0:
2224 if resp != 0:
2221 raise util.Abort(_('operation forbidden by server'))
2225 raise util.Abort(_('operation forbidden by server'))
2222 self.ui.status(_('streaming all changes\n'))
2226 self.ui.status(_('streaming all changes\n'))
2223 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2227 total_files, total_bytes = map(int, fp.readline().split(' ', 1))
2224 self.ui.status(_('%d files to transfer, %s of data\n') %
2228 self.ui.status(_('%d files to transfer, %s of data\n') %
2225 (total_files, util.bytecount(total_bytes)))
2229 (total_files, util.bytecount(total_bytes)))
2226 start = time.time()
2230 start = time.time()
2227 for i in xrange(total_files):
2231 for i in xrange(total_files):
2228 name, size = fp.readline().split('\0', 1)
2232 name, size = fp.readline().split('\0', 1)
2229 size = int(size)
2233 size = int(size)
2230 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2234 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
2231 ofp = self.opener(name, 'w')
2235 ofp = self.opener(name, 'w')
2232 for chunk in util.filechunkiter(fp, limit=size):
2236 for chunk in util.filechunkiter(fp, limit=size):
2233 ofp.write(chunk)
2237 ofp.write(chunk)
2234 ofp.close()
2238 ofp.close()
2235 elapsed = time.time() - start
2239 elapsed = time.time() - start
2236 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2240 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
2237 (util.bytecount(total_bytes), elapsed,
2241 (util.bytecount(total_bytes), elapsed,
2238 util.bytecount(total_bytes / elapsed)))
2242 util.bytecount(total_bytes / elapsed)))
2239 self.reload()
2243 self.reload()
2240 return len(self.heads()) + 1
2244 return len(self.heads()) + 1
2241
2245
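
stream_in consumes a very small wire format: a status line, a '<file count> <byte count>' line, and then for each file a '<name>\0<size>' header followed by exactly <size> bytes of revlog data. A hedged sketch of a reader for that layout, mirroring the Python 2-era string handling above; write_file is a placeholder:

    # Reading the uncompressed-clone stream (sketch).
    def read_stream(fp, write_file):
        if int(fp.readline()):
            raise IOError("operation forbidden by server")
        total_files, total_bytes = map(int, fp.readline().split(' ', 1))
        for _ in range(total_files):
            name, size = fp.readline().split('\0', 1)
            remaining = int(size)
            chunks = []
            while remaining:
                data = fp.read(min(65536, remaining))
                if not data:
                    raise IOError("premature end of stream")
                chunks.append(data)
                remaining -= len(data)
            write_file(name, "".join(chunks))
        return total_files, total_bytes
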
2242 def clone(self, remote, heads=[], stream=False):
2246 def clone(self, remote, heads=[], stream=False):
2243 '''clone remote repository.
2247 '''clone remote repository.
2244
2248
2245 keyword arguments:
2249 keyword arguments:
2246 heads: list of revs to clone (forces use of pull)
2250 heads: list of revs to clone (forces use of pull)
2247 stream: use streaming clone if possible'''
2251 stream: use streaming clone if possible'''
2248
2252
2249 # now, all clients that can request uncompressed clones can
2253 # now, all clients that can request uncompressed clones can
2250 # read repo formats supported by all servers that can serve
2254 # read repo formats supported by all servers that can serve
2251 # them.
2255 # them.
2252
2256
2253 # if revlog format changes, client will have to check version
2257 # if revlog format changes, client will have to check version
2254 # and format flags on "stream" capability, and use
2258 # and format flags on "stream" capability, and use
2255 # uncompressed only if compatible.
2259 # uncompressed only if compatible.
2256
2260
2257 if stream and not heads and remote.capable('stream'):
2261 if stream and not heads and remote.capable('stream'):
2258 return self.stream_in(remote)
2262 return self.stream_in(remote)
2259 return self.pull(remote, heads)
2263 return self.pull(remote, heads)
2260
2264
2261 # used to avoid circular references so destructors work
2265 # used to avoid circular references so destructors work
2262 def aftertrans(base):
2266 def aftertrans(base):
2263 p = base
2267 p = base
2264 def a():
2268 def a():
2265 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2269 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2266 util.rename(os.path.join(p, "journal.dirstate"),
2270 util.rename(os.path.join(p, "journal.dirstate"),
2267 os.path.join(p, "undo.dirstate"))
2271 os.path.join(p, "undo.dirstate"))
2268 return a
2272 return a
2269
2273
@@ -1,203 +1,206 b''
1 # sshrepo.py - ssh repository proxy class for mercurial
1 # sshrepo.py - ssh repository proxy class for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from remoterepo import *
9 from remoterepo import *
10 from i18n import gettext as _
10 from i18n import gettext as _
11 from demandload import *
11 from demandload import *
12 demandload(globals(), "hg os re stat util")
12 demandload(globals(), "hg os re stat util")
13
13
14 class sshrepository(remoterepository):
14 class sshrepository(remoterepository):
15 def __init__(self, ui, path, create=0):
15 def __init__(self, ui, path, create=0):
16 self.url = path
16 self._url = path
17 self.ui = ui
17 self.ui = ui
18
18
19 m = re.match(r'ssh://(([^@]+)@)?([^:/]+)(:(\d+))?(/(.*))?', path)
19 m = re.match(r'ssh://(([^@]+)@)?([^:/]+)(:(\d+))?(/(.*))?', path)
20 if not m:
20 if not m:
21 raise hg.RepoError(_("couldn't parse location %s") % path)
21 raise hg.RepoError(_("couldn't parse location %s") % path)
22
22
23 self.user = m.group(2)
23 self.user = m.group(2)
24 self.host = m.group(3)
24 self.host = m.group(3)
25 self.port = m.group(5)
25 self.port = m.group(5)
26 self.path = m.group(7) or "."
26 self.path = m.group(7) or "."
27
27
28 args = self.user and ("%s@%s" % (self.user, self.host)) or self.host
28 args = self.user and ("%s@%s" % (self.user, self.host)) or self.host
29 args = self.port and ("%s -p %s") % (args, self.port) or args
29 args = self.port and ("%s -p %s") % (args, self.port) or args
30
30
31 sshcmd = self.ui.config("ui", "ssh", "ssh")
31 sshcmd = self.ui.config("ui", "ssh", "ssh")
32 remotecmd = self.ui.config("ui", "remotecmd", "hg")
32 remotecmd = self.ui.config("ui", "remotecmd", "hg")
33
33
34 if create:
34 if create:
35 try:
35 try:
36 self.validate_repo(ui, sshcmd, args, remotecmd)
36 self.validate_repo(ui, sshcmd, args, remotecmd)
37 return # the repo is good, nothing more to do
37 return # the repo is good, nothing more to do
38 except hg.RepoError:
38 except hg.RepoError:
39 pass
39 pass
40
40
41 cmd = '%s %s "%s init %s"'
41 cmd = '%s %s "%s init %s"'
42 cmd = cmd % (sshcmd, args, remotecmd, self.path)
42 cmd = cmd % (sshcmd, args, remotecmd, self.path)
43
43
44 ui.note('running %s\n' % cmd)
44 ui.note('running %s\n' % cmd)
45 res = os.system(cmd)
45 res = os.system(cmd)
46 if res != 0:
46 if res != 0:
47 raise hg.RepoError(_("could not create remote repo"))
47 raise hg.RepoError(_("could not create remote repo"))
48
48
49 self.validate_repo(ui, sshcmd, args, remotecmd)
49 self.validate_repo(ui, sshcmd, args, remotecmd)
50
50
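The regular expression in __init__ accepts ssh://[user@]host[:port][/path], with everything after the host optional and the path defaulting to '.'. A quick check of what each group captures (the URL is a made-up example; the group numbers match the code above):

    import re

    SSH_URL = re.compile(r'ssh://(([^@]+)@)?([^:/]+)(:(\d+))?(/(.*))?')

    m = SSH_URL.match('ssh://hg@example.com:2222/repos/project')
    print(m.group(2))         # 'hg'             (user)
    print(m.group(3))         # 'example.com'    (host)
    print(m.group(5))         # '2222'           (port)
    print(m.group(7) or '.')  # 'repos/project'  (path, defaulting to '.')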
51 def url(self):
52 return self._url
53
51 def validate_repo(self, ui, sshcmd, args, remotecmd):
54 def validate_repo(self, ui, sshcmd, args, remotecmd):
52 cmd = '%s %s "%s -R %s serve --stdio"'
55 cmd = '%s %s "%s -R %s serve --stdio"'
53 cmd = cmd % (sshcmd, args, remotecmd, self.path)
56 cmd = cmd % (sshcmd, args, remotecmd, self.path)
54
57
55 ui.note('running %s\n' % cmd)
58 ui.note('running %s\n' % cmd)
56 self.pipeo, self.pipei, self.pipee = os.popen3(cmd, 'b')
59 self.pipeo, self.pipei, self.pipee = os.popen3(cmd, 'b')
57
60
58 # skip any noise generated by remote shell
61 # skip any noise generated by remote shell
59 self.do_cmd("hello")
62 self.do_cmd("hello")
60 r = self.do_cmd("between", pairs=("%s-%s" % ("0"*40, "0"*40)))
63 r = self.do_cmd("between", pairs=("%s-%s" % ("0"*40, "0"*40)))
61 lines = ["", "dummy"]
64 lines = ["", "dummy"]
62 max_noise = 500
65 max_noise = 500
63 while lines[-1] and max_noise:
66 while lines[-1] and max_noise:
64 l = r.readline()
67 l = r.readline()
65 self.readerr()
68 self.readerr()
66 if lines[-1] == "1\n" and l == "\n":
69 if lines[-1] == "1\n" and l == "\n":
67 break
70 break
68 if l:
71 if l:
69 ui.debug(_("remote: "), l)
72 ui.debug(_("remote: "), l)
70 lines.append(l)
73 lines.append(l)
71 max_noise -= 1
74 max_noise -= 1
72 else:
75 else:
73 raise hg.RepoError(_("no response from remote hg"))
76 raise hg.RepoError(_("no response from remote hg"))
74
77
75 self.capabilities = ()
78 self.capabilities = ()
76 lines.reverse()
79 lines.reverse()
77 for l in lines:
80 for l in lines:
78 if l.startswith("capabilities:"):
81 if l.startswith("capabilities:"):
79 self.capabilities = l[:-1].split(":")[1].split()
82 self.capabilities = l[:-1].split(":")[1].split()
80 break
83 break
81
84
82 def readerr(self):
85 def readerr(self):
83 while 1:
86 while 1:
84 size = util.fstat(self.pipee).st_size
87 size = util.fstat(self.pipee).st_size
85 if size == 0: break
88 if size == 0: break
86 l = self.pipee.readline()
89 l = self.pipee.readline()
87 if not l: break
90 if not l: break
88 self.ui.status(_("remote: "), l)
91 self.ui.status(_("remote: "), l)
89
92
90 def __del__(self):
93 def __del__(self):
91 try:
94 try:
92 self.pipeo.close()
95 self.pipeo.close()
93 self.pipei.close()
96 self.pipei.close()
94 # read the error descriptor until EOF
97 # read the error descriptor until EOF
95 for l in self.pipee:
98 for l in self.pipee:
96 self.ui.status(_("remote: "), l)
99 self.ui.status(_("remote: "), l)
97 self.pipee.close()
100 self.pipee.close()
98 except:
101 except:
99 pass
102 pass
100
103
101 def do_cmd(self, cmd, **args):
104 def do_cmd(self, cmd, **args):
102 self.ui.debug(_("sending %s command\n") % cmd)
105 self.ui.debug(_("sending %s command\n") % cmd)
103 self.pipeo.write("%s\n" % cmd)
106 self.pipeo.write("%s\n" % cmd)
104 for k, v in args.items():
107 for k, v in args.items():
105 self.pipeo.write("%s %d\n" % (k, len(v)))
108 self.pipeo.write("%s %d\n" % (k, len(v)))
106 self.pipeo.write(v)
109 self.pipeo.write(v)
107 self.pipeo.flush()
110 self.pipeo.flush()
108
111
109 return self.pipei
112 return self.pipei
110
113
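do_cmd is the client half of the ssh wire protocol: the command name on its own line, then for each argument a header line of the form 'name length' followed by exactly that many bytes of value, and finally a flush; call() then reads back a decimal length line and that many bytes of reply. A minimal sketch of the encoding side only (the command and argument values are illustrative):

    import io

    def encode_ssh_command(cmd, **args):
        # Mirror of do_cmd: "<cmd>\n", then "<key> <len>\n<value>" per argument.
        buf = io.BytesIO()
        buf.write(b"%s\n" % cmd.encode())
        for key, value in args.items():
            value = value.encode()
            buf.write(b"%s %d\n" % (key.encode(), len(value)))
            buf.write(value)
        return buf.getvalue()

    print(encode_ssh_command("between", pairs="0-0"))
    # b'between\npairs 3\n0-0'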
111 def call(self, cmd, **args):
114 def call(self, cmd, **args):
112 r = self.do_cmd(cmd, **args)
115 r = self.do_cmd(cmd, **args)
113 l = r.readline()
116 l = r.readline()
114 self.readerr()
117 self.readerr()
115 try:
118 try:
116 l = int(l)
119 l = int(l)
117 except:
120 except:
118 raise hg.RepoError(_("unexpected response '%s'") % l)
121 raise hg.RepoError(_("unexpected response '%s'") % l)
119 return r.read(l)
122 return r.read(l)
120
123
121 def lock(self):
124 def lock(self):
122 self.call("lock")
125 self.call("lock")
123 return remotelock(self)
126 return remotelock(self)
124
127
125 def unlock(self):
128 def unlock(self):
126 self.call("unlock")
129 self.call("unlock")
127
130
128 def heads(self):
131 def heads(self):
129 d = self.call("heads")
132 d = self.call("heads")
130 try:
133 try:
131 return map(bin, d[:-1].split(" "))
134 return map(bin, d[:-1].split(" "))
132 except:
135 except:
133 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
136 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
134
137
135 def branches(self, nodes):
138 def branches(self, nodes):
136 n = " ".join(map(hex, nodes))
139 n = " ".join(map(hex, nodes))
137 d = self.call("branches", nodes=n)
140 d = self.call("branches", nodes=n)
138 try:
141 try:
139 br = [ tuple(map(bin, b.split(" "))) for b in d.splitlines() ]
142 br = [ tuple(map(bin, b.split(" "))) for b in d.splitlines() ]
140 return br
143 return br
141 except:
144 except:
142 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
145 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
143
146
144 def between(self, pairs):
147 def between(self, pairs):
145 n = "\n".join(["-".join(map(hex, p)) for p in pairs])
148 n = "\n".join(["-".join(map(hex, p)) for p in pairs])
146 d = self.call("between", pairs=n)
149 d = self.call("between", pairs=n)
147 try:
150 try:
148 p = [ l and map(bin, l.split(" ")) or [] for l in d.splitlines() ]
151 p = [ l and map(bin, l.split(" ")) or [] for l in d.splitlines() ]
149 return p
152 return p
150 except:
153 except:
151 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
154 raise hg.RepoError(_("unexpected response '%s'") % (d[:400] + "..."))
152
155
153 def changegroup(self, nodes, kind):
156 def changegroup(self, nodes, kind):
154 n = " ".join(map(hex, nodes))
157 n = " ".join(map(hex, nodes))
155 return self.do_cmd("changegroup", roots=n)
158 return self.do_cmd("changegroup", roots=n)
156
159
157 def unbundle(self, cg, heads, source):
160 def unbundle(self, cg, heads, source):
158 d = self.call("unbundle", heads=' '.join(map(hex, heads)))
161 d = self.call("unbundle", heads=' '.join(map(hex, heads)))
159 if d:
162 if d:
160 raise hg.RepoError(_("push refused: %s") % d)
163 raise hg.RepoError(_("push refused: %s") % d)
161
164
162 while 1:
165 while 1:
163 d = cg.read(4096)
166 d = cg.read(4096)
164 if not d: break
167 if not d: break
165 self.pipeo.write(str(len(d)) + '\n')
168 self.pipeo.write(str(len(d)) + '\n')
166 self.pipeo.write(d)
169 self.pipeo.write(d)
167 self.readerr()
170 self.readerr()
168
171
169 self.pipeo.write('0\n')
172 self.pipeo.write('0\n')
170 self.pipeo.flush()
173 self.pipeo.flush()
171
174
172 self.readerr()
175 self.readerr()
173 d = self.pipei.readline()
176 d = self.pipei.readline()
174 if d != '\n':
177 if d != '\n':
175 return 1
178 return 1
176
179
177 l = int(self.pipei.readline())
180 l = int(self.pipei.readline())
178 r = self.pipei.read(l)
181 r = self.pipei.read(l)
179 if not r:
182 if not r:
180 return 1
183 return 1
181 return int(r)
184 return int(r)
182
185
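unbundle pushes the bundle as a sequence of length-prefixed chunks, each preceded by its decimal size on its own line, and marks the end of the stream with a bare '0' line; the server then answers with an empty line on success and a length-prefixed return code. A sketch of the chunking alone (chunk size and payload are arbitrary):

    import io

    def write_chunked(out, data, chunksize=4096):
        # Each chunk is announced by its decimal length on its own line;
        # a final "0" line tells the receiver the stream is complete.
        for start in range(0, len(data), chunksize):
            chunk = data[start:start + chunksize]
            out.write(b"%d\n" % len(chunk))
            out.write(chunk)
        out.write(b"0\n")

    buf = io.BytesIO()
    write_chunked(buf, b"x" * 10000)
    # buf now holds "4096\n" + 4096 bytes, "4096\n" + 4096 bytes, "1808\n" + 1808 bytes, then "0\n"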
183 def addchangegroup(self, cg, source):
186 def addchangegroup(self, cg, source, url):
184 d = self.call("addchangegroup")
187 d = self.call("addchangegroup")
185 if d:
188 if d:
186 raise hg.RepoError(_("push refused: %s") % d)
189 raise hg.RepoError(_("push refused: %s") % d)
187 while 1:
190 while 1:
188 d = cg.read(4096)
191 d = cg.read(4096)
189 if not d: break
192 if not d: break
190 self.pipeo.write(d)
193 self.pipeo.write(d)
191 self.readerr()
194 self.readerr()
192
195
193 self.pipeo.flush()
196 self.pipeo.flush()
194
197
195 self.readerr()
198 self.readerr()
196 l = int(self.pipei.readline())
199 l = int(self.pipei.readline())
197 r = self.pipei.read(l)
200 r = self.pipei.read(l)
198 if not r:
201 if not r:
199 return 1
202 return 1
200 return int(r)
203 return int(r)
201
204
202 def stream_out(self):
205 def stream_out(self):
203 return self.do_cmd('stream_out')
206 return self.do_cmd('stream_out')
@@ -1,173 +1,177 b''
1 # sshserver.py - ssh protocol server support for mercurial
1 # sshserver.py - ssh protocol server support for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from demandload import demandload
8 from demandload import demandload
9 from i18n import gettext as _
9 from i18n import gettext as _
10 from node import *
10 from node import *
11 demandload(globals(), "os streamclone sys tempfile util")
11 demandload(globals(), "os streamclone sys tempfile util")
12
12
13 class sshserver(object):
13 class sshserver(object):
14 def __init__(self, ui, repo):
14 def __init__(self, ui, repo):
15 self.ui = ui
15 self.ui = ui
16 self.repo = repo
16 self.repo = repo
17 self.lock = None
17 self.lock = None
18 self.fin = sys.stdin
18 self.fin = sys.stdin
19 self.fout = sys.stdout
19 self.fout = sys.stdout
20
20
21 sys.stdout = sys.stderr
21 sys.stdout = sys.stderr
22
22
23 # Prevent insertion/deletion of CRs
23 # Prevent insertion/deletion of CRs
24 util.set_binary(self.fin)
24 util.set_binary(self.fin)
25 util.set_binary(self.fout)
25 util.set_binary(self.fout)
26
26
27 def getarg(self):
27 def getarg(self):
28 argline = self.fin.readline()[:-1]
28 argline = self.fin.readline()[:-1]
29 arg, l = argline.split()
29 arg, l = argline.split()
30 val = self.fin.read(int(l))
30 val = self.fin.read(int(l))
31 return arg, val
31 return arg, val
32
32
33 def respond(self, v):
33 def respond(self, v):
34 self.fout.write("%d\n" % len(v))
34 self.fout.write("%d\n" % len(v))
35 self.fout.write(v)
35 self.fout.write(v)
36 self.fout.flush()
36 self.fout.flush()
37
37
38 def serve_forever(self):
38 def serve_forever(self):
39 while self.serve_one(): pass
39 while self.serve_one(): pass
40 sys.exit(0)
40 sys.exit(0)
41
41
42 def serve_one(self):
42 def serve_one(self):
43 cmd = self.fin.readline()[:-1]
43 cmd = self.fin.readline()[:-1]
44 if cmd:
44 if cmd:
45 impl = getattr(self, 'do_' + cmd, None)
45 impl = getattr(self, 'do_' + cmd, None)
46 if impl: impl()
46 if impl: impl()
47 else: self.respond("")
47 else: self.respond("")
48 return cmd != ''
48 return cmd != ''
49
49
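serve_one dispatches each incoming command by name: it looks up a do_<cmd> method and, when none exists, answers with an empty response, so an older server degrades gracefully when a newer client probes for a command it does not implement. The same dispatch idiom in isolation:

    class Dispatcher:
        def do_heads(self):
            return "heads!"

        def handle(self, cmd):
            impl = getattr(self, 'do_' + cmd, None)
            # Unknown commands get an empty reply rather than an error, so
            # clients can probe for optional features safely.
            return impl() if impl else ""

    d = Dispatcher()
    print(d.handle("heads"))   # heads!
    print(d.handle("nosuch"))  # prints an empty string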
50 def do_heads(self):
50 def do_heads(self):
51 h = self.repo.heads()
51 h = self.repo.heads()
52 self.respond(" ".join(map(hex, h)) + "\n")
52 self.respond(" ".join(map(hex, h)) + "\n")
53
53
54 def do_hello(self):
54 def do_hello(self):
55 '''the hello command returns a set of lines describing various
55 '''the hello command returns a set of lines describing various
56 interesting things about the server, in an RFC822-like format.
56 interesting things about the server, in an RFC822-like format.
57 Currently the only one defined is "capabilities", which
57 Currently the only one defined is "capabilities", which
58 consists of a line in the form:
58 consists of a line in the form:
59
59
60 capabilities: space separated list of tokens
60 capabilities: space separated list of tokens
61 '''
61 '''
62
62
63 caps = ['unbundle']
63 caps = ['unbundle']
64 if self.ui.configbool('server', 'uncompressed'):
64 if self.ui.configbool('server', 'uncompressed'):
65 caps.append('stream=%d' % self.repo.revlogversion)
65 caps.append('stream=%d' % self.repo.revlogversion)
66 self.respond("capabilities: %s\n" % (' '.join(caps),))
66 self.respond("capabilities: %s\n" % (' '.join(caps),))
67
67
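The docstring above defines the only hello field so far; on the other end, validate_repo in sshrepo.py scans the handshake output for the same line, since login scripts may print arbitrary noise around it. A small sketch of producing and consuming such a line (the token list is an example, not an exhaustive set):

    def format_capabilities(tokens):
        return "capabilities: %s\n" % " ".join(tokens)

    def parse_capabilities(line):
        # Mirrors validate_repo: drop the trailing newline, drop the field
        # name before the colon, split the remainder on whitespace.
        return line[:-1].split(":")[1].split()

    line = format_capabilities(["unbundle", "stream=1"])
    print(parse_capabilities(line))   # ['unbundle', 'stream=1']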
68 def do_lock(self):
68 def do_lock(self):
69 '''DEPRECATED - allowing remote client to lock repo is not safe'''
69 '''DEPRECATED - allowing remote client to lock repo is not safe'''
70
70
71 self.lock = self.repo.lock()
71 self.lock = self.repo.lock()
72 self.respond("")
72 self.respond("")
73
73
74 def do_unlock(self):
74 def do_unlock(self):
75 '''DEPRECATED'''
75 '''DEPRECATED'''
76
76
77 if self.lock:
77 if self.lock:
78 self.lock.release()
78 self.lock.release()
79 self.lock = None
79 self.lock = None
80 self.respond("")
80 self.respond("")
81
81
82 def do_branches(self):
82 def do_branches(self):
83 arg, nodes = self.getarg()
83 arg, nodes = self.getarg()
84 nodes = map(bin, nodes.split(" "))
84 nodes = map(bin, nodes.split(" "))
85 r = []
85 r = []
86 for b in self.repo.branches(nodes):
86 for b in self.repo.branches(nodes):
87 r.append(" ".join(map(hex, b)) + "\n")
87 r.append(" ".join(map(hex, b)) + "\n")
88 self.respond("".join(r))
88 self.respond("".join(r))
89
89
90 def do_between(self):
90 def do_between(self):
91 arg, pairs = self.getarg()
91 arg, pairs = self.getarg()
92 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
92 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
93 r = []
93 r = []
94 for b in self.repo.between(pairs):
94 for b in self.repo.between(pairs):
95 r.append(" ".join(map(hex, b)) + "\n")
95 r.append(" ".join(map(hex, b)) + "\n")
96 self.respond("".join(r))
96 self.respond("".join(r))
97
97
98 def do_changegroup(self):
98 def do_changegroup(self):
99 nodes = []
99 nodes = []
100 arg, roots = self.getarg()
100 arg, roots = self.getarg()
101 nodes = map(bin, roots.split(" "))
101 nodes = map(bin, roots.split(" "))
102
102
103 cg = self.repo.changegroup(nodes, 'serve')
103 cg = self.repo.changegroup(nodes, 'serve')
104 while True:
104 while True:
105 d = cg.read(4096)
105 d = cg.read(4096)
106 if not d:
106 if not d:
107 break
107 break
108 self.fout.write(d)
108 self.fout.write(d)
109
109
110 self.fout.flush()
110 self.fout.flush()
111
111
112 def do_addchangegroup(self):
112 def do_addchangegroup(self):
113 '''DEPRECATED'''
113 '''DEPRECATED'''
114
114
115 if not self.lock:
115 if not self.lock:
116 self.respond("not locked")
116 self.respond("not locked")
117 return
117 return
118
118
119 self.respond("")
119 self.respond("")
120 r = self.repo.addchangegroup(self.fin, 'serve')
120 r = self.repo.addchangegroup(self.fin, 'serve', self.client_url())
121 self.respond(str(r))
121 self.respond(str(r))
122
122
123 def client_url(self):
124 client = os.environ.get('SSH_CLIENT', '').split(' ', 1)[0]
125 return 'remote:ssh:' + client
126
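client_url is where the value behind the new HG_URL hook variable comes from on the ssh path: sshd exports SSH_CLIENT as 'client-ip client-port server-port', and only the address is kept, prefixed with 'remote:ssh:'. A hedged sketch (the environment value is a documentation example, not real data):

    import os

    def client_url_sketch(environ=os.environ):
        # SSH_CLIENT looks like "203.0.113.7 54021 22"; keep only the address.
        client = environ.get('SSH_CLIENT', '').split(' ', 1)[0]
        return 'remote:ssh:' + client

    print(client_url_sketch({'SSH_CLIENT': '203.0.113.7 54021 22'}))
    # remote:ssh:203.0.113.7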
123 def do_unbundle(self):
127 def do_unbundle(self):
124 their_heads = self.getarg()[1].split()
128 their_heads = self.getarg()[1].split()
125
129
126 def check_heads():
130 def check_heads():
127 heads = map(hex, self.repo.heads())
131 heads = map(hex, self.repo.heads())
128 return their_heads == [hex('force')] or their_heads == heads
132 return their_heads == [hex('force')] or their_heads == heads
129
133
130 # fail early if possible
134 # fail early if possible
131 if not check_heads():
135 if not check_heads():
132 self.respond(_('unsynced changes'))
136 self.respond(_('unsynced changes'))
133 return
137 return
134
138
135 self.respond('')
139 self.respond('')
136
140
137 # write bundle data to temporary file because it can be big
141 # write bundle data to temporary file because it can be big
138
142
139 try:
143 try:
140 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
144 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
141 fp = os.fdopen(fd, 'wb+')
145 fp = os.fdopen(fd, 'wb+')
142
146
143 count = int(self.fin.readline())
147 count = int(self.fin.readline())
144 while count:
148 while count:
145 fp.write(self.fin.read(count))
149 fp.write(self.fin.read(count))
146 count = int(self.fin.readline())
150 count = int(self.fin.readline())
147
151
148 was_locked = self.lock is not None
152 was_locked = self.lock is not None
149 if not was_locked:
153 if not was_locked:
150 self.lock = self.repo.lock()
154 self.lock = self.repo.lock()
151 try:
155 try:
152 if not check_heads():
156 if not check_heads():
153 # someone else committed/pushed/unbundled while we
157 # someone else committed/pushed/unbundled while we
154 # were transferring data
158 # were transferring data
155 self.respond(_('unsynced changes'))
159 self.respond(_('unsynced changes'))
156 return
160 return
157 self.respond('')
161 self.respond('')
158
162
159 # push can proceed
163 # push can proceed
160
164
161 fp.seek(0)
165 fp.seek(0)
162 r = self.repo.addchangegroup(fp, 'serve')
166 r = self.repo.addchangegroup(fp, 'serve', self.client_url())
163 self.respond(str(r))
167 self.respond(str(r))
164 finally:
168 finally:
165 if not was_locked:
169 if not was_locked:
166 self.lock.release()
170 self.lock.release()
167 self.lock = None
171 self.lock = None
168 finally:
172 finally:
169 fp.close()
173 fp.close()
170 os.unlink(tempname)
174 os.unlink(tempname)
171
175
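do_unbundle verifies the client's view of the heads twice: once before the possibly large bundle is transferred, so an out-of-date push fails cheaply, and again after taking the repository lock, because another commit or push may have landed while the data was in flight. The shape of that check-then-recheck pattern, reduced to its essentials (the lock and callbacks are stand-ins, not Mercurial objects):

    def guarded_apply(expected_heads, current_heads, lock, apply_bundle):
        def in_sync():
            return expected_heads == ['force'] or expected_heads == current_heads()

        if not in_sync():            # cheap early rejection, before any transfer
            return 'unsynced changes'
        with lock():                 # lock() must return a context manager
            if not in_sync():        # repo may have moved while data was arriving
                return 'unsynced changes'
            return apply_bundle()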
172 def do_stream_out(self):
176 def do_stream_out(self):
173 streamclone.stream_out(self.repo, self.fout)
177 streamclone.stream_out(self.repo, self.fout)
@@ -1,48 +1,52 b''
1 # statichttprepo.py - simple http repository class for mercurial
1 # statichttprepo.py - simple http repository class for mercurial
2 #
2 #
3 # This provides read-only repo access to repositories exported via static http
3 # This provides read-only repo access to repositories exported via static http
4 #
4 #
5 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 # Copyright 2005 Matt Mackall <mpm@selenic.com>
6 #
6 #
7 # This software may be used and distributed according to the terms
7 # This software may be used and distributed according to the terms
8 # of the GNU General Public License, incorporated herein by reference.
8 # of the GNU General Public License, incorporated herein by reference.
9
9
10 from demandload import demandload
10 from demandload import demandload
11 demandload(globals(), "changelog filelog httprangereader")
11 demandload(globals(), "changelog filelog httprangereader")
12 demandload(globals(), "localrepo manifest os urllib urllib2")
12 demandload(globals(), "localrepo manifest os urllib urllib2")
13
13
14 class rangereader(httprangereader.httprangereader):
14 class rangereader(httprangereader.httprangereader):
15 def read(self, size=None):
15 def read(self, size=None):
16 try:
16 try:
17 return httprangereader.httprangereader.read(self, size)
17 return httprangereader.httprangereader.read(self, size)
18 except urllib2.HTTPError, inst:
18 except urllib2.HTTPError, inst:
19 raise IOError(None, inst)
19 raise IOError(None, inst)
20 except urllib2.URLError, inst:
20 except urllib2.URLError, inst:
21 raise IOError(None, inst.reason[1])
21 raise IOError(None, inst.reason[1])
22
22
23 def opener(base):
23 def opener(base):
24 """return a function that opens files over http"""
24 """return a function that opens files over http"""
25 p = base
25 p = base
26 def o(path, mode="r"):
26 def o(path, mode="r"):
27 f = os.path.join(p, urllib.quote(path))
27 f = os.path.join(p, urllib.quote(path))
28 return rangereader(f)
28 return rangereader(f)
29 return o
29 return o
30
30
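opener follows the closure pattern the local repository uses for file access: it binds a base location and hands back a function with an open()-like signature, here backed by HTTP range requests so revlogs can be read without a server-side process. A toy equivalent over the local filesystem (paths are illustrative only):

    import os

    def opener_sketch(base):
        """Return a function that opens paths relative to base."""
        def o(path, mode="r"):
            return open(os.path.join(base, path), mode)
        return o

    # open_fn = opener_sketch("/some/repo/.hg")
    # f = open_fn("00changelog.i", "rb")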
31 class statichttprepository(localrepo.localrepository):
31 class statichttprepository(localrepo.localrepository):
32 def __init__(self, ui, path):
32 def __init__(self, ui, path):
33 self._url = path
33 self.path = (path + "/.hg")
34 self.path = (path + "/.hg")
34 self.ui = ui
35 self.ui = ui
35 self.revlogversion = 0
36 self.revlogversion = 0
36 self.opener = opener(self.path)
37 self.opener = opener(self.path)
37 self.manifest = manifest.manifest(self.opener)
38 self.manifest = manifest.manifest(self.opener)
38 self.changelog = changelog.changelog(self.opener)
39 self.changelog = changelog.changelog(self.opener)
39 self.tagscache = None
40 self.tagscache = None
40 self.nodetagscache = None
41 self.nodetagscache = None
41 self.encodepats = None
42 self.encodepats = None
42 self.decodepats = None
43 self.decodepats = None
43
44
45 def url(self):
46 return 'static-' + self._url
47
44 def dev(self):
48 def dev(self):
45 return -1
49 return -1
46
50
47 def local(self):
51 def local(self):
48 return False
52 return False
@@ -1,54 +1,56 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 hg init test
3 hg init test
4 cd test
4 cd test
5 echo 0 > afile
5 echo 0 > afile
6 hg add afile
6 hg add afile
7 hg commit -m "0.0" -d "1000000 0"
7 hg commit -m "0.0" -d "1000000 0"
8 echo 1 >> afile
8 echo 1 >> afile
9 hg commit -m "0.1" -d "1000000 0"
9 hg commit -m "0.1" -d "1000000 0"
10 echo 2 >> afile
10 echo 2 >> afile
11 hg commit -m "0.2" -d "1000000 0"
11 hg commit -m "0.2" -d "1000000 0"
12 echo 3 >> afile
12 echo 3 >> afile
13 hg commit -m "0.3" -d "1000000 0"
13 hg commit -m "0.3" -d "1000000 0"
14 hg update -C 0
14 hg update -C 0
15 echo 1 >> afile
15 echo 1 >> afile
16 hg commit -m "1.1" -d "1000000 0"
16 hg commit -m "1.1" -d "1000000 0"
17 echo 2 >> afile
17 echo 2 >> afile
18 hg commit -m "1.2" -d "1000000 0"
18 hg commit -m "1.2" -d "1000000 0"
19 echo "a line" > fred
19 echo "a line" > fred
20 echo 3 >> afile
20 echo 3 >> afile
21 hg add fred
21 hg add fred
22 hg commit -m "1.3" -d "1000000 0"
22 hg commit -m "1.3" -d "1000000 0"
23 hg mv afile adifferentfile
23 hg mv afile adifferentfile
24 hg commit -m "1.3m" -d "1000000 0"
24 hg commit -m "1.3m" -d "1000000 0"
25 hg update -C 3
25 hg update -C 3
26 hg mv afile anotherfile
26 hg mv afile anotherfile
27 hg commit -m "0.3m" -d "1000000 0"
27 hg commit -m "0.3m" -d "1000000 0"
28 hg verify
28 hg verify
29 cd ..
29 cd ..
30 hg init empty
30 hg init empty
31 hg -R test bundle full.hg empty
31 hg -R test bundle full.hg empty
32 hg -R test unbundle full.hg
32 hg -R test unbundle full.hg
33 hg -R empty unbundle full.hg
33 hg -R empty unbundle full.hg
34 hg -R empty heads
34 hg -R empty heads
35 hg -R empty verify
35 hg -R empty verify
36
36
37 rm -rf empty
37 rm -rf empty
38 hg init empty
38 hg init empty
39 cd empty
39 cd empty
40 hg -R bundle://../full.hg log
40 hg -R bundle://../full.hg log
41 echo '[hooks]' >> .hg/hgrc
42 echo 'changegroup = echo changegroup: u=$HG_URL' >> .hg/hgrc
41 #doesn't work (yet ?)
43 #doesn't work (yet ?)
42 #hg -R bundle://../full.hg verify
44 #hg -R bundle://../full.hg verify
43 hg pull bundle://../full.hg
45 hg pull bundle://../full.hg
44 cd ..
46 cd ..
45
47
46 rm -rf empty
48 rm -rf empty
47 hg init empty
49 hg init empty
48 hg clone -r 3 test partial
50 hg clone -r 3 test partial
49 hg clone partial partial2
51 hg clone partial partial2
50 cd partial
52 cd partial
51 hg -R bundle://../full.hg log
53 hg -R bundle://../full.hg log
52 hg incoming bundle://../full.hg
54 hg incoming bundle://../full.hg
53 hg -R bundle://../full.hg outgoing ../partial2
55 hg -R bundle://../full.hg outgoing ../partial2
54 cd ..
56 cd ..
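The two lines added to this test exercise the point of the changeset: changegroup-family hooks now see where the changes came from, exposed to shell hooks as $HG_URL (here it will be 'bundle:../full.hg', as the expected output below shows). An in-process hook could act on the same value; a hedged sketch, assuming the URL reaches Python hooks as a 'url' keyword argument in the same way the other parameters printed by hooktests.py in test-hooks do:

    def reject_non_ssh(ui, repo, hooktype, url=None, **kwargs):
        '''prechangegroup hook: refuse changegroups that do not arrive over ssh.'''
        if url and not url.startswith('remote:ssh:'):
            ui.warn('rejecting changegroup from %s\n' % url)
            return True    # a truthy return makes a pre-hook fail and aborts the operation
        return False

Configured, for illustration only, as 'prechangegroup.ssh-only = python:somemodule.reject_non_ssh' under [hooks].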
@@ -1,203 +1,204 b''
1 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
2 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
2 1 files updated, 0 files merged, 2 files removed, 0 files unresolved
3 checking changesets
3 checking changesets
4 checking manifests
4 checking manifests
5 crosschecking files in changesets and manifests
5 crosschecking files in changesets and manifests
6 checking files
6 checking files
7 4 files, 9 changesets, 7 total revisions
7 4 files, 9 changesets, 7 total revisions
8 searching for changes
8 searching for changes
9 adding changesets
9 adding changesets
10 adding manifests
10 adding manifests
11 adding file changes
11 adding file changes
12 added 0 changesets with 0 changes to 4 files
12 added 0 changesets with 0 changes to 4 files
13 (run 'hg update' to get a working copy)
13 (run 'hg update' to get a working copy)
14 adding changesets
14 adding changesets
15 adding manifests
15 adding manifests
16 adding file changes
16 adding file changes
17 added 9 changesets with 7 changes to 4 files (+1 heads)
17 added 9 changesets with 7 changes to 4 files (+1 heads)
18 (run 'hg heads' to see heads, 'hg merge' to merge)
18 (run 'hg heads' to see heads, 'hg merge' to merge)
19 changeset: 8:836ac62537ab
19 changeset: 8:836ac62537ab
20 tag: tip
20 tag: tip
21 parent: 3:ac69c658229d
21 parent: 3:ac69c658229d
22 user: test
22 user: test
23 date: Mon Jan 12 13:46:40 1970 +0000
23 date: Mon Jan 12 13:46:40 1970 +0000
24 summary: 0.3m
24 summary: 0.3m
25
25
26 changeset: 7:80fe151401c2
26 changeset: 7:80fe151401c2
27 user: test
27 user: test
28 date: Mon Jan 12 13:46:40 1970 +0000
28 date: Mon Jan 12 13:46:40 1970 +0000
29 summary: 1.3m
29 summary: 1.3m
30
30
31 checking changesets
31 checking changesets
32 checking manifests
32 checking manifests
33 crosschecking files in changesets and manifests
33 crosschecking files in changesets and manifests
34 checking files
34 checking files
35 4 files, 9 changesets, 7 total revisions
35 4 files, 9 changesets, 7 total revisions
36 changeset: 8:836ac62537ab
36 changeset: 8:836ac62537ab
37 tag: tip
37 tag: tip
38 parent: 3:ac69c658229d
38 parent: 3:ac69c658229d
39 user: test
39 user: test
40 date: Mon Jan 12 13:46:40 1970 +0000
40 date: Mon Jan 12 13:46:40 1970 +0000
41 summary: 0.3m
41 summary: 0.3m
42
42
43 changeset: 7:80fe151401c2
43 changeset: 7:80fe151401c2
44 user: test
44 user: test
45 date: Mon Jan 12 13:46:40 1970 +0000
45 date: Mon Jan 12 13:46:40 1970 +0000
46 summary: 1.3m
46 summary: 1.3m
47
47
48 changeset: 6:1e3f6b843bd6
48 changeset: 6:1e3f6b843bd6
49 user: test
49 user: test
50 date: Mon Jan 12 13:46:40 1970 +0000
50 date: Mon Jan 12 13:46:40 1970 +0000
51 summary: 1.3
51 summary: 1.3
52
52
53 changeset: 5:024e4e7df376
53 changeset: 5:024e4e7df376
54 user: test
54 user: test
55 date: Mon Jan 12 13:46:40 1970 +0000
55 date: Mon Jan 12 13:46:40 1970 +0000
56 summary: 1.2
56 summary: 1.2
57
57
58 changeset: 4:5f4f3ceb285e
58 changeset: 4:5f4f3ceb285e
59 parent: 0:5649c9d34dd8
59 parent: 0:5649c9d34dd8
60 user: test
60 user: test
61 date: Mon Jan 12 13:46:40 1970 +0000
61 date: Mon Jan 12 13:46:40 1970 +0000
62 summary: 1.1
62 summary: 1.1
63
63
64 changeset: 3:ac69c658229d
64 changeset: 3:ac69c658229d
65 user: test
65 user: test
66 date: Mon Jan 12 13:46:40 1970 +0000
66 date: Mon Jan 12 13:46:40 1970 +0000
67 summary: 0.3
67 summary: 0.3
68
68
69 changeset: 2:d62976ca1e50
69 changeset: 2:d62976ca1e50
70 user: test
70 user: test
71 date: Mon Jan 12 13:46:40 1970 +0000
71 date: Mon Jan 12 13:46:40 1970 +0000
72 summary: 0.2
72 summary: 0.2
73
73
74 changeset: 1:10b2180f755b
74 changeset: 1:10b2180f755b
75 user: test
75 user: test
76 date: Mon Jan 12 13:46:40 1970 +0000
76 date: Mon Jan 12 13:46:40 1970 +0000
77 summary: 0.1
77 summary: 0.1
78
78
79 changeset: 0:5649c9d34dd8
79 changeset: 0:5649c9d34dd8
80 user: test
80 user: test
81 date: Mon Jan 12 13:46:40 1970 +0000
81 date: Mon Jan 12 13:46:40 1970 +0000
82 summary: 0.0
82 summary: 0.0
83
83
84 changegroup: u=bundle:../full.hg
84 pulling from bundle://../full.hg
85 pulling from bundle://../full.hg
85 requesting all changes
86 requesting all changes
86 adding changesets
87 adding changesets
87 adding manifests
88 adding manifests
88 adding file changes
89 adding file changes
89 added 9 changesets with 7 changes to 4 files (+1 heads)
90 added 9 changesets with 7 changes to 4 files (+1 heads)
90 (run 'hg heads' to see heads, 'hg merge' to merge)
91 (run 'hg heads' to see heads, 'hg merge' to merge)
91 requesting all changes
92 requesting all changes
92 adding changesets
93 adding changesets
93 adding manifests
94 adding manifests
94 adding file changes
95 adding file changes
95 added 4 changesets with 4 changes to 1 files
96 added 4 changesets with 4 changes to 1 files
96 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
97 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
97 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
98 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
98 changeset: 8:836ac62537ab
99 changeset: 8:836ac62537ab
99 tag: tip
100 tag: tip
100 parent: 3:ac69c658229d
101 parent: 3:ac69c658229d
101 user: test
102 user: test
102 date: Mon Jan 12 13:46:40 1970 +0000
103 date: Mon Jan 12 13:46:40 1970 +0000
103 summary: 0.3m
104 summary: 0.3m
104
105
105 changeset: 7:80fe151401c2
106 changeset: 7:80fe151401c2
106 user: test
107 user: test
107 date: Mon Jan 12 13:46:40 1970 +0000
108 date: Mon Jan 12 13:46:40 1970 +0000
108 summary: 1.3m
109 summary: 1.3m
109
110
110 changeset: 6:1e3f6b843bd6
111 changeset: 6:1e3f6b843bd6
111 user: test
112 user: test
112 date: Mon Jan 12 13:46:40 1970 +0000
113 date: Mon Jan 12 13:46:40 1970 +0000
113 summary: 1.3
114 summary: 1.3
114
115
115 changeset: 5:024e4e7df376
116 changeset: 5:024e4e7df376
116 user: test
117 user: test
117 date: Mon Jan 12 13:46:40 1970 +0000
118 date: Mon Jan 12 13:46:40 1970 +0000
118 summary: 1.2
119 summary: 1.2
119
120
120 changeset: 4:5f4f3ceb285e
121 changeset: 4:5f4f3ceb285e
121 parent: 0:5649c9d34dd8
122 parent: 0:5649c9d34dd8
122 user: test
123 user: test
123 date: Mon Jan 12 13:46:40 1970 +0000
124 date: Mon Jan 12 13:46:40 1970 +0000
124 summary: 1.1
125 summary: 1.1
125
126
126 changeset: 3:ac69c658229d
127 changeset: 3:ac69c658229d
127 user: test
128 user: test
128 date: Mon Jan 12 13:46:40 1970 +0000
129 date: Mon Jan 12 13:46:40 1970 +0000
129 summary: 0.3
130 summary: 0.3
130
131
131 changeset: 2:d62976ca1e50
132 changeset: 2:d62976ca1e50
132 user: test
133 user: test
133 date: Mon Jan 12 13:46:40 1970 +0000
134 date: Mon Jan 12 13:46:40 1970 +0000
134 summary: 0.2
135 summary: 0.2
135
136
136 changeset: 1:10b2180f755b
137 changeset: 1:10b2180f755b
137 user: test
138 user: test
138 date: Mon Jan 12 13:46:40 1970 +0000
139 date: Mon Jan 12 13:46:40 1970 +0000
139 summary: 0.1
140 summary: 0.1
140
141
141 changeset: 0:5649c9d34dd8
142 changeset: 0:5649c9d34dd8
142 user: test
143 user: test
143 date: Mon Jan 12 13:46:40 1970 +0000
144 date: Mon Jan 12 13:46:40 1970 +0000
144 summary: 0.0
145 summary: 0.0
145
146
146 searching for changes
147 searching for changes
147 changeset: 4:5f4f3ceb285e
148 changeset: 4:5f4f3ceb285e
148 parent: 0:5649c9d34dd8
149 parent: 0:5649c9d34dd8
149 user: test
150 user: test
150 date: Mon Jan 12 13:46:40 1970 +0000
151 date: Mon Jan 12 13:46:40 1970 +0000
151 summary: 1.1
152 summary: 1.1
152
153
153 changeset: 5:024e4e7df376
154 changeset: 5:024e4e7df376
154 user: test
155 user: test
155 date: Mon Jan 12 13:46:40 1970 +0000
156 date: Mon Jan 12 13:46:40 1970 +0000
156 summary: 1.2
157 summary: 1.2
157
158
158 changeset: 6:1e3f6b843bd6
159 changeset: 6:1e3f6b843bd6
159 user: test
160 user: test
160 date: Mon Jan 12 13:46:40 1970 +0000
161 date: Mon Jan 12 13:46:40 1970 +0000
161 summary: 1.3
162 summary: 1.3
162
163
163 changeset: 7:80fe151401c2
164 changeset: 7:80fe151401c2
164 user: test
165 user: test
165 date: Mon Jan 12 13:46:40 1970 +0000
166 date: Mon Jan 12 13:46:40 1970 +0000
166 summary: 1.3m
167 summary: 1.3m
167
168
168 changeset: 8:836ac62537ab
169 changeset: 8:836ac62537ab
169 tag: tip
170 tag: tip
170 parent: 3:ac69c658229d
171 parent: 3:ac69c658229d
171 user: test
172 user: test
172 date: Mon Jan 12 13:46:40 1970 +0000
173 date: Mon Jan 12 13:46:40 1970 +0000
173 summary: 0.3m
174 summary: 0.3m
174
175
175 searching for changes
176 searching for changes
176 changeset: 4:5f4f3ceb285e
177 changeset: 4:5f4f3ceb285e
177 parent: 0:5649c9d34dd8
178 parent: 0:5649c9d34dd8
178 user: test
179 user: test
179 date: Mon Jan 12 13:46:40 1970 +0000
180 date: Mon Jan 12 13:46:40 1970 +0000
180 summary: 1.1
181 summary: 1.1
181
182
182 changeset: 5:024e4e7df376
183 changeset: 5:024e4e7df376
183 user: test
184 user: test
184 date: Mon Jan 12 13:46:40 1970 +0000
185 date: Mon Jan 12 13:46:40 1970 +0000
185 summary: 1.2
186 summary: 1.2
186
187
187 changeset: 6:1e3f6b843bd6
188 changeset: 6:1e3f6b843bd6
188 user: test
189 user: test
189 date: Mon Jan 12 13:46:40 1970 +0000
190 date: Mon Jan 12 13:46:40 1970 +0000
190 summary: 1.3
191 summary: 1.3
191
192
192 changeset: 7:80fe151401c2
193 changeset: 7:80fe151401c2
193 user: test
194 user: test
194 date: Mon Jan 12 13:46:40 1970 +0000
195 date: Mon Jan 12 13:46:40 1970 +0000
195 summary: 1.3m
196 summary: 1.3m
196
197
197 changeset: 8:836ac62537ab
198 changeset: 8:836ac62537ab
198 tag: tip
199 tag: tip
199 parent: 3:ac69c658229d
200 parent: 3:ac69c658229d
200 user: test
201 user: test
201 date: Mon Jan 12 13:46:40 1970 +0000
202 date: Mon Jan 12 13:46:40 1970 +0000
202 summary: 0.3m
203 summary: 0.3m
203
204
@@ -1,186 +1,186 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 # commit hooks can see env vars
3 # commit hooks can see env vars
4 hg init a
4 hg init a
5 cd a
5 cd a
6 echo "[hooks]" > .hg/hgrc
6 echo "[hooks]" > .hg/hgrc
7 echo 'commit = echo commit hook: n=$HG_NODE p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
7 echo 'commit = echo commit hook: n=$HG_NODE p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
8 echo 'commit.b = echo commit hook b' >> .hg/hgrc
8 echo 'commit.b = echo commit hook b' >> .hg/hgrc
9 echo 'precommit = echo precommit hook: p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
9 echo 'precommit = echo precommit hook: p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
10 echo 'pretxncommit = echo pretxncommit hook: n=$HG_NODE p1=$HG_PARENT1 p2=$HG_PARENT2; hg -q tip' >> .hg/hgrc
10 echo 'pretxncommit = echo pretxncommit hook: n=$HG_NODE p1=$HG_PARENT1 p2=$HG_PARENT2; hg -q tip' >> .hg/hgrc
11 echo a > a
11 echo a > a
12 hg add a
12 hg add a
13 hg commit -m a -d "1000000 0"
13 hg commit -m a -d "1000000 0"
14
14
15 hg clone . ../b
15 hg clone . ../b
16 cd ../b
16 cd ../b
17
17
18 # changegroup hooks can see env vars
18 # changegroup hooks can see env vars
19 echo '[hooks]' > .hg/hgrc
19 echo '[hooks]' > .hg/hgrc
20 echo 'prechangegroup = echo prechangegroup hook' >> .hg/hgrc
20 echo 'prechangegroup = echo prechangegroup hook: u=`echo $HG_URL | sed s,file:.*,file:,`' >> .hg/hgrc
21 echo 'changegroup = echo changegroup hook: n=$HG_NODE' >> .hg/hgrc
21 echo 'changegroup = echo changegroup hook: n=$HG_NODE u=`echo $HG_URL | sed s,file:.*,file:,`' >> .hg/hgrc
22 echo 'incoming = echo incoming hook: n=$HG_NODE' >> .hg/hgrc
22 echo 'incoming = echo incoming hook: n=$HG_NODE u=`echo $HG_URL | sed s,file:.*,file:,`' >> .hg/hgrc
23
23
24 # pretxncommit and commit hooks can see both parents of merge
24 # pretxncommit and commit hooks can see both parents of merge
25 cd ../a
25 cd ../a
26 echo b >> a
26 echo b >> a
27 hg commit -m a1 -d "1 0"
27 hg commit -m a1 -d "1 0"
28 hg update -C 0
28 hg update -C 0
29 echo b > b
29 echo b > b
30 hg add b
30 hg add b
31 hg commit -m b -d '1 0'
31 hg commit -m b -d '1 0'
32 hg merge 1
32 hg merge 1
33 hg commit -m merge -d '2 0'
33 hg commit -m merge -d '2 0'
34
34
35 cd ../b
35 cd ../b
36 hg pull ../a
36 hg pull ../a
37
37
38 # tag hooks can see env vars
38 # tag hooks can see env vars
39 cd ../a
39 cd ../a
40 echo 'pretag = echo pretag hook: t=$HG_TAG n=$HG_NODE l=$HG_LOCAL' >> .hg/hgrc
40 echo 'pretag = echo pretag hook: t=$HG_TAG n=$HG_NODE l=$HG_LOCAL' >> .hg/hgrc
41 echo 'tag = echo tag hook: t=$HG_TAG n=$HG_NODE l=$HG_LOCAL' >> .hg/hgrc
41 echo 'tag = echo tag hook: t=$HG_TAG n=$HG_NODE l=$HG_LOCAL' >> .hg/hgrc
42 hg tag -d '3 0' a
42 hg tag -d '3 0' a
43 hg tag -l la
43 hg tag -l la
44
44
45 # pretag hook can forbid tagging
45 # pretag hook can forbid tagging
46 echo 'pretag.forbid = echo pretag.forbid hook; exit 1' >> .hg/hgrc
46 echo 'pretag.forbid = echo pretag.forbid hook; exit 1' >> .hg/hgrc
47 hg tag -d '4 0' fa
47 hg tag -d '4 0' fa
48 hg tag -l fla
48 hg tag -l fla
49
49
50 # pretxncommit hook can see changeset, can roll back txn, changeset
50 # pretxncommit hook can see changeset, can roll back txn, changeset
51 # no more there after
51 # no more there after
52 echo 'pretxncommit.forbid = echo pretxncommit.forbid hook: tip=`hg -q tip`; exit 1' >> .hg/hgrc
52 echo 'pretxncommit.forbid = echo pretxncommit.forbid hook: tip=`hg -q tip`; exit 1' >> .hg/hgrc
53 echo z > z
53 echo z > z
54 hg add z
54 hg add z
55 hg -q tip
55 hg -q tip
56 hg commit -m 'fail' -d '4 0'
56 hg commit -m 'fail' -d '4 0'
57 hg -q tip
57 hg -q tip
58
58
59 # precommit hook can prevent commit
59 # precommit hook can prevent commit
60 echo 'precommit.forbid = echo precommit.forbid hook; exit 1' >> .hg/hgrc
60 echo 'precommit.forbid = echo precommit.forbid hook; exit 1' >> .hg/hgrc
61 hg commit -m 'fail' -d '4 0'
61 hg commit -m 'fail' -d '4 0'
62 hg -q tip
62 hg -q tip
63
63
64 # preupdate hook can prevent update
64 # preupdate hook can prevent update
65 echo 'preupdate = echo preupdate hook: p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
65 echo 'preupdate = echo preupdate hook: p1=$HG_PARENT1 p2=$HG_PARENT2' >> .hg/hgrc
66 hg update 1
66 hg update 1
67
67
68 # update hook
68 # update hook
69 echo 'update = echo update hook: p1=$HG_PARENT1 p2=$HG_PARENT2 err=$HG_ERROR' >> .hg/hgrc
69 echo 'update = echo update hook: p1=$HG_PARENT1 p2=$HG_PARENT2 err=$HG_ERROR' >> .hg/hgrc
70 hg update
70 hg update
71
71
72 # prechangegroup hook can prevent incoming changes
72 # prechangegroup hook can prevent incoming changes
73 cd ../b
73 cd ../b
74 hg -q tip
74 hg -q tip
75 echo '[hooks]' > .hg/hgrc
75 echo '[hooks]' > .hg/hgrc
76 echo 'prechangegroup.forbid = echo prechangegroup.forbid hook; exit 1' >> .hg/hgrc
76 echo 'prechangegroup.forbid = echo prechangegroup.forbid hook; exit 1' >> .hg/hgrc
77 hg pull ../a
77 hg pull ../a
78
78
79 # pretxnchangegroup hook can see incoming changes, can roll back txn,
79 # pretxnchangegroup hook can see incoming changes, can roll back txn,
80 # incoming changes no longer there after
80 # incoming changes no longer there after
81 echo '[hooks]' > .hg/hgrc
81 echo '[hooks]' > .hg/hgrc
82 echo 'pretxnchangegroup.forbid = echo pretxnchangegroup.forbid hook: tip=`hg -q tip`; exit 1' >> .hg/hgrc
82 echo 'pretxnchangegroup.forbid = echo pretxnchangegroup.forbid hook: tip=`hg -q tip`; exit 1' >> .hg/hgrc
83 hg pull ../a
83 hg pull ../a
84 hg -q tip
84 hg -q tip
85
85
86 # outgoing hooks can see env vars
86 # outgoing hooks can see env vars
87 rm .hg/hgrc
87 rm .hg/hgrc
88 echo '[hooks]' > ../a/.hg/hgrc
88 echo '[hooks]' > ../a/.hg/hgrc
89 echo 'preoutgoing = echo preoutgoing hook: s=$HG_SOURCE' >> ../a/.hg/hgrc
89 echo 'preoutgoing = echo preoutgoing hook: s=$HG_SOURCE' >> ../a/.hg/hgrc
90 echo 'outgoing = echo outgoing hook: n=$HG_NODE s=$HG_SOURCE' >> ../a/.hg/hgrc
90 echo 'outgoing = echo outgoing hook: n=$HG_NODE s=$HG_SOURCE' >> ../a/.hg/hgrc
91 hg pull ../a
91 hg pull ../a
92 hg rollback
92 hg rollback
93
93
94 # preoutgoing hook can prevent outgoing changes
94 # preoutgoing hook can prevent outgoing changes
95 echo 'preoutgoing.forbid = echo preoutgoing.forbid hook; exit 1' >> ../a/.hg/hgrc
95 echo 'preoutgoing.forbid = echo preoutgoing.forbid hook; exit 1' >> ../a/.hg/hgrc
96 hg pull ../a
96 hg pull ../a
97
97
98 cat > hooktests.py <<EOF
98 cat > hooktests.py <<EOF
99 from mercurial import util
99 from mercurial import util
100
100
101 uncallable = 0
101 uncallable = 0
102
102
103 def printargs(args):
103 def printargs(args):
104 args.pop('ui', None)
104 args.pop('ui', None)
105 args.pop('repo', None)
105 args.pop('repo', None)
106 a = list(args.items())
106 a = list(args.items())
107 a.sort()
107 a.sort()
108 print 'hook args:'
108 print 'hook args:'
109 for k, v in a:
109 for k, v in a:
110 print ' ', k, v
110 print ' ', k, v
111
111
112 def passhook(**args):
112 def passhook(**args):
113 printargs(args)
113 printargs(args)
114
114
115 def failhook(**args):
115 def failhook(**args):
116 printargs(args)
116 printargs(args)
117 return True
117 return True
118
118
119 class LocalException(Exception):
119 class LocalException(Exception):
120 pass
120 pass
121
121
122 def raisehook(**args):
122 def raisehook(**args):
123 raise LocalException('exception from hook')
123 raise LocalException('exception from hook')
124
124
125 def aborthook(**args):
125 def aborthook(**args):
126 raise util.Abort('raise abort from hook')
126 raise util.Abort('raise abort from hook')
127
127
128 def brokenhook(**args):
128 def brokenhook(**args):
129 return 1 + {}
129 return 1 + {}
130
130
131 class container:
131 class container:
132 unreachable = 1
132 unreachable = 1
133 EOF
133 EOF
134
134
135 echo '# test python hooks'
135 echo '# test python hooks'
136 PYTHONPATH="`pwd`:$PYTHONPATH"
136 PYTHONPATH="`pwd`:$PYTHONPATH"
137 export PYTHONPATH
137 export PYTHONPATH
138
138
139 echo '[hooks]' > ../a/.hg/hgrc
139 echo '[hooks]' > ../a/.hg/hgrc
140 echo 'preoutgoing.broken = python:hooktests.brokenhook' >> ../a/.hg/hgrc
140 echo 'preoutgoing.broken = python:hooktests.brokenhook' >> ../a/.hg/hgrc
141 hg pull ../a 2>&1 | grep 'raised an exception'
141 hg pull ../a 2>&1 | grep 'raised an exception'
142
142
143 echo '[hooks]' > ../a/.hg/hgrc
143 echo '[hooks]' > ../a/.hg/hgrc
144 echo 'preoutgoing.raise = python:hooktests.raisehook' >> ../a/.hg/hgrc
144 echo 'preoutgoing.raise = python:hooktests.raisehook' >> ../a/.hg/hgrc
145 hg pull ../a 2>&1 | grep 'raised an exception'
145 hg pull ../a 2>&1 | grep 'raised an exception'
146
146
147 echo '[hooks]' > ../a/.hg/hgrc
147 echo '[hooks]' > ../a/.hg/hgrc
148 echo 'preoutgoing.abort = python:hooktests.aborthook' >> ../a/.hg/hgrc
148 echo 'preoutgoing.abort = python:hooktests.aborthook' >> ../a/.hg/hgrc
149 hg pull ../a
149 hg pull ../a
150
150
151 echo '[hooks]' > ../a/.hg/hgrc
151 echo '[hooks]' > ../a/.hg/hgrc
152 echo 'preoutgoing.fail = python:hooktests.failhook' >> ../a/.hg/hgrc
152 echo 'preoutgoing.fail = python:hooktests.failhook' >> ../a/.hg/hgrc
153 hg pull ../a
153 hg pull ../a
154
154
155 echo '[hooks]' > ../a/.hg/hgrc
155 echo '[hooks]' > ../a/.hg/hgrc
156 echo 'preoutgoing.uncallable = python:hooktests.uncallable' >> ../a/.hg/hgrc
156 echo 'preoutgoing.uncallable = python:hooktests.uncallable' >> ../a/.hg/hgrc
157 hg pull ../a
157 hg pull ../a
158
158
159 echo '[hooks]' > ../a/.hg/hgrc
159 echo '[hooks]' > ../a/.hg/hgrc
160 echo 'preoutgoing.nohook = python:hooktests.nohook' >> ../a/.hg/hgrc
160 echo 'preoutgoing.nohook = python:hooktests.nohook' >> ../a/.hg/hgrc
161 hg pull ../a
161 hg pull ../a
162
162
163 echo '[hooks]' > ../a/.hg/hgrc
163 echo '[hooks]' > ../a/.hg/hgrc
164 echo 'preoutgoing.nomodule = python:nomodule' >> ../a/.hg/hgrc
164 echo 'preoutgoing.nomodule = python:nomodule' >> ../a/.hg/hgrc
165 hg pull ../a
165 hg pull ../a
166
166
167 echo '[hooks]' > ../a/.hg/hgrc
167 echo '[hooks]' > ../a/.hg/hgrc
168 echo 'preoutgoing.badmodule = python:nomodule.nowhere' >> ../a/.hg/hgrc
168 echo 'preoutgoing.badmodule = python:nomodule.nowhere' >> ../a/.hg/hgrc
169 hg pull ../a
169 hg pull ../a
170
170
171 echo '[hooks]' > ../a/.hg/hgrc
171 echo '[hooks]' > ../a/.hg/hgrc
172 echo 'preoutgoing.unreachable = python:hooktests.container.unreachable' >> ../a/.hg/hgrc
172 echo 'preoutgoing.unreachable = python:hooktests.container.unreachable' >> ../a/.hg/hgrc
173 hg pull ../a
173 hg pull ../a
174
174
175 echo '[hooks]' > ../a/.hg/hgrc
175 echo '[hooks]' > ../a/.hg/hgrc
176 echo 'preoutgoing.pass = python:hooktests.passhook' >> ../a/.hg/hgrc
176 echo 'preoutgoing.pass = python:hooktests.passhook' >> ../a/.hg/hgrc
177 hg pull ../a
177 hg pull ../a
178
178
179 echo '# make sure --traceback works'
179 echo '# make sure --traceback works'
180 echo '[hooks]' > .hg/hgrc
180 echo '[hooks]' > .hg/hgrc
181 echo 'commit.abort = python:hooktests.aborthook' >> .hg/hgrc
181 echo 'commit.abort = python:hooktests.aborthook' >> .hg/hgrc
182
182
183 echo a >> a
183 echo a >> a
184 hg --traceback commit -A -m a 2>&1 | grep '^Traceback'
184 hg --traceback commit -A -m a 2>&1 | grep '^Traceback'
185
185
186 exit 0
186 exit 0
@@ -1,140 +1,140 b''
1 precommit hook: p1=0000000000000000000000000000000000000000 p2=
1 precommit hook: p1=0000000000000000000000000000000000000000 p2=
2 pretxncommit hook: n=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p1=0000000000000000000000000000000000000000 p2=
2 pretxncommit hook: n=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p1=0000000000000000000000000000000000000000 p2=
3 0:29b62aeb769f
3 0:29b62aeb769f
4 commit hook: n=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p1=0000000000000000000000000000000000000000 p2=
4 commit hook: n=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p1=0000000000000000000000000000000000000000 p2=
5 commit hook b
5 commit hook b
6 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
6 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
7 precommit hook: p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
7 precommit hook: p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
8 pretxncommit hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
8 pretxncommit hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
9 1:b702efe96888
9 1:b702efe96888
10 commit hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
10 commit hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
11 commit hook b
11 commit hook b
12 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
12 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
13 precommit hook: p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
13 precommit hook: p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
14 pretxncommit hook: n=1324a5531bac09b329c3845d35ae6a7526874edb p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
14 pretxncommit hook: n=1324a5531bac09b329c3845d35ae6a7526874edb p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
15 2:1324a5531bac
15 2:1324a5531bac
16 commit hook: n=1324a5531bac09b329c3845d35ae6a7526874edb p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
16 commit hook: n=1324a5531bac09b329c3845d35ae6a7526874edb p1=29b62aeb769fdf78d8d9c5f28b017f76d7ef824b p2=
17 commit hook b
17 commit hook b
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
19 (branch merge, don't forget to commit)
19 (branch merge, don't forget to commit)
20 precommit hook: p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
20 precommit hook: p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
21 pretxncommit hook: n=4c52fb2e402287dd5dc052090682536c8406c321 p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
21 pretxncommit hook: n=4c52fb2e402287dd5dc052090682536c8406c321 p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
22 3:4c52fb2e4022
22 3:4c52fb2e4022
23 commit hook: n=4c52fb2e402287dd5dc052090682536c8406c321 p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
23 commit hook: n=4c52fb2e402287dd5dc052090682536c8406c321 p1=1324a5531bac09b329c3845d35ae6a7526874edb p2=b702efe9688826e3a91283852b328b84dbf37bc2
24 commit hook b
24 commit hook b
25 prechangegroup hook
25 prechangegroup hook: u=file:
26 changegroup hook: n=b702efe9688826e3a91283852b328b84dbf37bc2
26 changegroup hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 u=file:
27 incoming hook: n=b702efe9688826e3a91283852b328b84dbf37bc2
27 incoming hook: n=b702efe9688826e3a91283852b328b84dbf37bc2 u=file:
28 incoming hook: n=1324a5531bac09b329c3845d35ae6a7526874edb
28 incoming hook: n=1324a5531bac09b329c3845d35ae6a7526874edb u=file:
29 incoming hook: n=4c52fb2e402287dd5dc052090682536c8406c321
29 incoming hook: n=4c52fb2e402287dd5dc052090682536c8406c321 u=file:
30 pulling from ../a
30 pulling from ../a
31 searching for changes
31 searching for changes
32 adding changesets
32 adding changesets
33 adding manifests
33 adding manifests
34 adding file changes
34 adding file changes
35 added 3 changesets with 2 changes to 2 files
35 added 3 changesets with 2 changes to 2 files
36 (run 'hg update' to get a working copy)
36 (run 'hg update' to get a working copy)
37 pretag hook: t=a n=4c52fb2e402287dd5dc052090682536c8406c321 l=0
37 pretag hook: t=a n=4c52fb2e402287dd5dc052090682536c8406c321 l=0
38 precommit hook: p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
38 precommit hook: p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
39 pretxncommit hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
39 pretxncommit hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
40 4:4f92e785b90a
40 4:4f92e785b90a
41 commit hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
41 commit hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p1=4c52fb2e402287dd5dc052090682536c8406c321 p2=
42 commit hook b
42 commit hook b
43 tag hook: t=a n=4c52fb2e402287dd5dc052090682536c8406c321 l=0
43 tag hook: t=a n=4c52fb2e402287dd5dc052090682536c8406c321 l=0
44 pretag hook: t=la n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
44 pretag hook: t=la n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
45 tag hook: t=la n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
45 tag hook: t=la n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
46 pretag hook: t=fa n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=0
46 pretag hook: t=fa n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=0
47 pretag.forbid hook
47 pretag.forbid hook
48 abort: pretag.forbid hook exited with status 1
48 abort: pretag.forbid hook exited with status 1
49 pretag hook: t=fla n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
49 pretag hook: t=fla n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 l=1
50 pretag.forbid hook
50 pretag.forbid hook
51 abort: pretag.forbid hook exited with status 1
51 abort: pretag.forbid hook exited with status 1
52 4:4f92e785b90a
52 4:4f92e785b90a
53 precommit hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
53 precommit hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
54 pretxncommit hook: n=7792358308a2026661cea44f9d47c072813004cb p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
54 pretxncommit hook: n=7792358308a2026661cea44f9d47c072813004cb p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
55 5:7792358308a2
55 5:7792358308a2
56 pretxncommit.forbid hook: tip=5:7792358308a2
56 pretxncommit.forbid hook: tip=5:7792358308a2
57 abort: pretxncommit.forbid hook exited with status 1
57 abort: pretxncommit.forbid hook exited with status 1
58 transaction abort!
58 transaction abort!
59 rollback completed
59 rollback completed
60 4:4f92e785b90a
60 4:4f92e785b90a
61 precommit hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
61 precommit hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
62 precommit.forbid hook
62 precommit.forbid hook
63 abort: precommit.forbid hook exited with status 1
63 abort: precommit.forbid hook exited with status 1
64 4:4f92e785b90a
64 4:4f92e785b90a
65 preupdate hook: p1=b702efe9688826e3a91283852b328b84dbf37bc2 p2=
65 preupdate hook: p1=b702efe9688826e3a91283852b328b84dbf37bc2 p2=
66 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
66 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
67 preupdate hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
67 preupdate hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2=
68 update hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2= err=0
68 update hook: p1=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 p2= err=0
69 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
69 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
70 3:4c52fb2e4022
70 3:4c52fb2e4022
71 prechangegroup.forbid hook
71 prechangegroup.forbid hook
72 pulling from ../a
72 pulling from ../a
73 searching for changes
73 searching for changes
74 abort: prechangegroup.forbid hook exited with status 1
74 abort: prechangegroup.forbid hook exited with status 1
75 pretxnchangegroup.forbid hook: tip=4:4f92e785b90a
75 pretxnchangegroup.forbid hook: tip=4:4f92e785b90a
76 pulling from ../a
76 pulling from ../a
77 searching for changes
77 searching for changes
78 adding changesets
78 adding changesets
79 adding manifests
79 adding manifests
80 adding file changes
80 adding file changes
81 added 1 changesets with 1 changes to 1 files
81 added 1 changesets with 1 changes to 1 files
82 abort: pretxnchangegroup.forbid hook exited with status 1
82 abort: pretxnchangegroup.forbid hook exited with status 1
83 transaction abort!
83 transaction abort!
84 rollback completed
84 rollback completed
85 3:4c52fb2e4022
85 3:4c52fb2e4022
86 preoutgoing hook: s=pull
86 preoutgoing hook: s=pull
87 outgoing hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 s=pull
87 outgoing hook: n=4f92e785b90ae8995dfe156e39dd4fbc3b346a24 s=pull
88 pulling from ../a
88 pulling from ../a
89 searching for changes
89 searching for changes
90 adding changesets
90 adding changesets
91 adding manifests
91 adding manifests
92 adding file changes
92 adding file changes
93 added 1 changesets with 1 changes to 1 files
93 added 1 changesets with 1 changes to 1 files
94 (run 'hg update' to get a working copy)
94 (run 'hg update' to get a working copy)
95 rolling back last transaction
95 rolling back last transaction
96 preoutgoing hook: s=pull
96 preoutgoing hook: s=pull
97 preoutgoing.forbid hook
97 preoutgoing.forbid hook
98 pulling from ../a
98 pulling from ../a
99 searching for changes
99 searching for changes
100 abort: preoutgoing.forbid hook exited with status 1
100 abort: preoutgoing.forbid hook exited with status 1
101 # test python hooks
101 # test python hooks
102 error: preoutgoing.broken hook raised an exception: unsupported operand type(s) for +: 'int' and 'dict'
102 error: preoutgoing.broken hook raised an exception: unsupported operand type(s) for +: 'int' and 'dict'
103 error: preoutgoing.raise hook raised an exception: exception from hook
103 error: preoutgoing.raise hook raised an exception: exception from hook
104 pulling from ../a
104 pulling from ../a
105 searching for changes
105 searching for changes
106 error: preoutgoing.abort hook failed: raise abort from hook
106 error: preoutgoing.abort hook failed: raise abort from hook
107 abort: raise abort from hook
107 abort: raise abort from hook
108 pulling from ../a
108 pulling from ../a
109 searching for changes
109 searching for changes
110 hook args:
110 hook args:
111 hooktype preoutgoing
111 hooktype preoutgoing
112 source pull
112 source pull
113 abort: preoutgoing.fail hook failed
113 abort: preoutgoing.fail hook failed
114 pulling from ../a
114 pulling from ../a
115 searching for changes
115 searching for changes
116 abort: preoutgoing.uncallable hook is invalid ("hooktests.uncallable" is not callable)
116 abort: preoutgoing.uncallable hook is invalid ("hooktests.uncallable" is not callable)
117 pulling from ../a
117 pulling from ../a
118 searching for changes
118 searching for changes
119 abort: preoutgoing.nohook hook is invalid ("hooktests.nohook" is not defined)
119 abort: preoutgoing.nohook hook is invalid ("hooktests.nohook" is not defined)
120 pulling from ../a
120 pulling from ../a
121 searching for changes
121 searching for changes
122 abort: preoutgoing.nomodule hook is invalid ("nomodule" not in a module)
122 abort: preoutgoing.nomodule hook is invalid ("nomodule" not in a module)
123 pulling from ../a
123 pulling from ../a
124 searching for changes
124 searching for changes
125 abort: preoutgoing.badmodule hook is invalid (import of "nomodule" failed)
125 abort: preoutgoing.badmodule hook is invalid (import of "nomodule" failed)
126 pulling from ../a
126 pulling from ../a
127 searching for changes
127 searching for changes
128 abort: preoutgoing.unreachable hook is invalid (import of "hooktests.container" failed)
128 abort: preoutgoing.unreachable hook is invalid (import of "hooktests.container" failed)
129 pulling from ../a
129 pulling from ../a
130 searching for changes
130 searching for changes
131 hook args:
131 hook args:
132 hooktype preoutgoing
132 hooktype preoutgoing
133 source pull
133 source pull
134 adding changesets
134 adding changesets
135 adding manifests
135 adding manifests
136 adding file changes
136 adding file changes
137 added 1 changesets with 1 changes to 1 files
137 added 1 changesets with 1 changes to 1 files
138 (run 'hg update' to get a working copy)
138 (run 'hg update' to get a working copy)
139 # make sure --traceback works
139 # make sure --traceback works
140 Traceback (most recent call last):
140 Traceback (most recent call last):
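The hook output above is driven by the HG_URL variable that this change exports to the changegroup, incoming, prechangegroup and pretxnchangegroup hooks. As a minimal sketch (not part of this commit; the log file name is an assumption), a repository could record where each incoming changegroup came from like this:

cat >> .hg/hgrc <<'EOF'
[hooks]
# log the source URL and the first new changeset of every incoming group
changegroup.log = echo "changegroup: url=$HG_URL node=$HG_NODE" >> .hg/incoming.log
EOF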
@@ -1,25 +1,35 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 hg init test
3 hg init test
4 cd test
4 cd test
5 echo foo>foo
5 echo foo>foo
6 hg commit -A -d '0 0' -m 1
6 hg commit -A -d '0 0' -m 1
7 hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg1.pid
7 hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg1.pid
8 cat hg1.pid >> $DAEMON_PIDS
8 cat hg1.pid >> $DAEMON_PIDS
9 hg serve -p 20060 -d --pid-file=hg2.pid
9 hg serve -p 20060 -d --pid-file=hg2.pid
10 cat hg2.pid >> $DAEMON_PIDS
10 cat hg2.pid >> $DAEMON_PIDS
11 cd ..
11 cd ..
12
12
13 echo % clone via stream
13 echo % clone via stream
14 http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
14 http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
15 sed -e 's/[0-9][0-9.]*/XXX/g'
15 sed -e 's/[0-9][0-9.]*/XXX/g'
16 cd copy
16 cd copy
17 hg verify
17 hg verify
18
18
19 echo % try to clone via stream, should use pull instead
19 echo % try to clone via stream, should use pull instead
20 http_proxy= hg clone --uncompressed http://localhost:20060/ copy2
20 http_proxy= hg clone --uncompressed http://localhost:20060/ copy2
21
21
22 echo % clone via pull
22 echo % clone via pull
23 http_proxy= hg clone http://localhost:20059/ copy-pull
23 http_proxy= hg clone http://localhost:20059/ copy-pull
24 cd copy-pull
24 cd copy-pull
25 hg verify
25 hg verify
26
27 cd test
28 echo bar > bar
29 hg commit -A -d '1 0' -m 2
30
31 echo % pull
32 cd ../copy-pull
33 echo '[hooks]' >> .hg/hgrc
34 echo 'changegroup = echo changegroup: u=$HG_URL' >> .hg/hgrc
35 hg pull
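Outside the test harness, the same wiring can be checked by hand; this is only a sketch, with the port and repository names assumed rather than taken from the test:

hg serve -R a -p 8000 -d --pid-file=hg.pid
hg clone http://localhost:8000/ b
echo '[hooks]' >> b/.hg/hgrc
echo 'changegroup = echo "changegroup: u=$HG_URL"' >> b/.hg/hgrc
# after committing something new in a, a pull should report the HTTP source:
hg -R b pull        # expected to print: changegroup: u=http://localhost:8000/
kill `cat hg.pid`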
@@ -1,30 +1,36 b''
1 adding foo
1 adding foo
2 % clone via stream
2 % clone via stream
3 streaming all changes
3 streaming all changes
4 XXX files to transfer, XXX bytes of data
4 XXX files to transfer, XXX bytes of data
5 transferred XXX bytes in XXX seconds (XXX KB/sec)
5 transferred XXX bytes in XXX seconds (XXX KB/sec)
6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
7 checking changesets
7 checking changesets
8 checking manifests
8 checking manifests
9 crosschecking files in changesets and manifests
9 crosschecking files in changesets and manifests
10 checking files
10 checking files
11 1 files, 1 changesets, 1 total revisions
11 1 files, 1 changesets, 1 total revisions
12 % try to clone via stream, should use pull instead
12 % try to clone via stream, should use pull instead
13 requesting all changes
13 requesting all changes
14 adding changesets
14 adding changesets
15 adding manifests
15 adding manifests
16 adding file changes
16 adding file changes
17 added 1 changesets with 1 changes to 1 files
17 added 1 changesets with 1 changes to 1 files
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
19 % clone via pull
19 % clone via pull
20 requesting all changes
20 requesting all changes
21 adding changesets
21 adding changesets
22 adding manifests
22 adding manifests
23 adding file changes
23 adding file changes
24 added 1 changesets with 1 changes to 1 files
24 added 1 changesets with 1 changes to 1 files
25 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
25 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
26 checking changesets
26 checking changesets
27 checking manifests
27 checking manifests
28 crosschecking files in changesets and manifests
28 crosschecking files in changesets and manifests
29 checking files
29 checking files
30 1 files, 1 changesets, 1 total revisions
30 1 files, 1 changesets, 1 total revisions
31 /home/bos/hg/hg/hg-url/tests/test-http: line 27: cd: test: No such file or directory
32 adding bar
33 % pull
34 pulling from http://localhost:20059/
35 searching for changes
36 no changes found
@@ -1,57 +1,63 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 hg init test
3 hg init test
4 cd test
4 cd test
5 echo a > a
5 echo a > a
6 hg ci -Ama
6 hg ci -Ama
7
7
8 cd ..
8 cd ..
9 hg clone test test2
9 hg clone test test2
10 cd test2
10 cd test2
11 echo a >> a
11 echo a >> a
12 hg ci -mb
12 hg ci -mb
13
13
14 cd ../test
14 cd ../test
15
15
16 echo % expect ssl error
16 echo % expect ssl error
17 hg serve -p 20059 -d --pid-file=hg.pid
17 hg serve -p 20059 -d --pid-file=hg.pid
18 cat hg.pid >> $DAEMON_PIDS
18 cat hg.pid >> $DAEMON_PIDS
19 hg --cwd ../test2 push http://localhost:20059/
19 hg --cwd ../test2 push http://localhost:20059/
20 kill `cat hg.pid`
20 kill `cat hg.pid`
21
21
22 echo % expect authorization error
22 echo % expect authorization error
23 echo '[web]' > .hg/hgrc
23 echo '[web]' > .hg/hgrc
24 echo 'push_ssl = false' >> .hg/hgrc
24 echo 'push_ssl = false' >> .hg/hgrc
25 hg serve -p 20059 -d --pid-file=hg.pid
25 hg serve -p 20059 -d --pid-file=hg.pid
26 cat hg.pid >> $DAEMON_PIDS
26 cat hg.pid >> $DAEMON_PIDS
27 hg --cwd ../test2 push http://localhost:20059/
27 hg --cwd ../test2 push http://localhost:20059/
28 kill `cat hg.pid`
28 kill `cat hg.pid`
29
29
30 echo % expect authorization error: must have authorized user
30 echo % expect authorization error: must have authorized user
31 echo 'allow_push = unperson' >> .hg/hgrc
31 echo 'allow_push = unperson' >> .hg/hgrc
32 hg serve -p 20059 -d --pid-file=hg.pid
32 hg serve -p 20059 -d --pid-file=hg.pid
33 cat hg.pid >> $DAEMON_PIDS
33 cat hg.pid >> $DAEMON_PIDS
34 hg --cwd ../test2 push http://localhost:20059/
34 hg --cwd ../test2 push http://localhost:20059/
35 kill `cat hg.pid`
35 kill `cat hg.pid`
36
36
37 echo % expect success
37 echo % expect success
38 echo 'allow_push = *' >> .hg/hgrc
38 echo 'allow_push = *' >> .hg/hgrc
39 echo '[hooks]' >> .hg/hgrc
40 echo 'changegroup = echo changegroup: u=$HG_URL >> $HGTMP/urls' >> .hg/hgrc
39 hg serve -p 20059 -d --pid-file=hg.pid
41 hg serve -p 20059 -d --pid-file=hg.pid
40 cat hg.pid >> $DAEMON_PIDS
42 cat hg.pid >> $DAEMON_PIDS
41 hg --cwd ../test2 push http://localhost:20059/
43 hg --cwd ../test2 push http://localhost:20059/
42 kill `cat hg.pid`
44 kill `cat hg.pid`
43 hg rollback
45 hg rollback
44
46
47 sed 's/\(remote:http.*\):.*/\1/' $HGTMP/urls
48
45 echo % expect authorization error: all users denied
49 echo % expect authorization error: all users denied
50 echo '[web]' > .hg/hgrc
51 echo 'push_ssl = false' >> .hg/hgrc
46 echo 'deny_push = *' >> .hg/hgrc
52 echo 'deny_push = *' >> .hg/hgrc
47 hg serve -p 20059 -d --pid-file=hg.pid
53 hg serve -p 20059 -d --pid-file=hg.pid
48 cat hg.pid >> $DAEMON_PIDS
54 cat hg.pid >> $DAEMON_PIDS
49 hg --cwd ../test2 push http://localhost:20059/
55 hg --cwd ../test2 push http://localhost:20059/
50 kill `cat hg.pid`
56 kill `cat hg.pid`
51
57
52 echo % expect authorization error: some users denied, users must be authenticated
58 echo % expect authorization error: some users denied, users must be authenticated
53 echo 'deny_push = unperson' >> .hg/hgrc
59 echo 'deny_push = unperson' >> .hg/hgrc
54 hg serve -p 20059 -d --pid-file=hg.pid
60 hg serve -p 20059 -d --pid-file=hg.pid
55 cat hg.pid >> $DAEMON_PIDS
61 cat hg.pid >> $DAEMON_PIDS
56 hg --cwd ../test2 push http://localhost:20059/
62 hg --cwd ../test2 push http://localhost:20059/
57 kill `cat hg.pid`
63 kill `cat hg.pid`
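The sed command added above exists because the server-side value of HG_URL for an HTTP push ("remote:http", possibly followed by a client address and an ephemeral port) is not stable across runs. As an illustration of what that normalization does, assuming a value that does carry a port:

echo 'changegroup: u=remote:http:127.0.0.1:54321' | sed 's/\(remote:http.*\):.*/\1/'
# prints: changegroup: u=remote:http:127.0.0.1   (the variable port is dropped)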
@@ -1,30 +1,31 b''
1 adding a
1 adding a
2 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
2 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
3 % expect ssl error
3 % expect ssl error
4 pushing to http://localhost:20059/
4 pushing to http://localhost:20059/
5 searching for changes
5 searching for changes
6 ssl required
6 ssl required
7 % expect authorization error
7 % expect authorization error
8 pushing to http://localhost:20059/
8 pushing to http://localhost:20059/
9 searching for changes
9 searching for changes
10 push not authorized
10 push not authorized
11 % expect authorization error: must have authorized user
11 % expect authorization error: must have authorized user
12 pushing to http://localhost:20059/
12 pushing to http://localhost:20059/
13 searching for changes
13 searching for changes
14 push not authorized
14 push not authorized
15 % expect success
15 % expect success
16 pushing to http://localhost:20059/
16 pushing to http://localhost:20059/
17 searching for changes
17 searching for changes
18 adding changesets
18 adding changesets
19 adding manifests
19 adding manifests
20 adding file changes
20 adding file changes
21 added 1 changesets with 1 changes to 1 files
21 added 1 changesets with 1 changes to 1 files
22 rolling back last transaction
22 rolling back last transaction
23 changegroup: u=remote:http
23 % expect authorization error: all users denied
24 % expect authorization error: all users denied
24 pushing to http://localhost:20059/
25 pushing to http://localhost:20059/
25 searching for changes
26 searching for changes
26 push not authorized
27 push not authorized
27 % expect authorization error: some users denied, users must be authenticated
28 % expect authorization error: some users denied, users must be authenticated
28 pushing to http://localhost:20059/
29 pushing to http://localhost:20059/
29 searching for changes
30 searching for changes
30 push not authorized
31 push not authorized
@@ -1,92 +1,99 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 # This test tries to exercise the ssh functionality with a dummy script
3 # This test tries to exercise the ssh functionality with a dummy script
4
4
5 cat <<'EOF' > dummyssh
5 cat <<'EOF' > dummyssh
6 #!/bin/sh
6 #!/bin/sh
7 # this attempts to deal with relative pathnames
7 # this attempts to deal with relative pathnames
8 cd `dirname $0`
8 cd `dirname $0`
9
9
10 # check for proper args
10 # check for proper args
11 if [ $1 != "user@dummy" ] ; then
11 if [ $1 != "user@dummy" ] ; then
12 exit -1
12 exit -1
13 fi
13 fi
14
14
15 # check that we're in the right directory
15 # check that we're in the right directory
16 if [ ! -x dummyssh ] ; then
16 if [ ! -x dummyssh ] ; then
17 exit -1
17 exit -1
18 fi
18 fi
19
19
20 SSH_CLIENT='127.0.0.1 1 2'
21 export SSH_CLIENT
20 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
22 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
21 $2
23 $2
22 EOF
24 EOF
23 chmod +x dummyssh
25 chmod +x dummyssh
24
26
25 echo "# creating 'remote'"
27 echo "# creating 'remote'"
26 hg init remote
28 hg init remote
27 cd remote
29 cd remote
28 echo this > foo
30 echo this > foo
29 hg ci -A -m "init" -d "1000000 0" foo
31 hg ci -A -m "init" -d "1000000 0" foo
30 echo '[server]' > .hg/hgrc
32 echo '[server]' > .hg/hgrc
31 echo 'uncompressed = True' >> .hg/hgrc
33 echo 'uncompressed = True' >> .hg/hgrc
34 echo '[hooks]' >> .hg/hgrc
35 echo 'changegroup = echo changegroup in remote: u=$HG_URL >> ../dummylog' >> .hg/hgrc
32
36
33 cd ..
37 cd ..
34
38
35 echo "# clone remote via stream"
39 echo "# clone remote via stream"
36 hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
40 hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
37 sed -e 's/[0-9][0-9.]*/XXX/g'
41 sed -e 's/[0-9][0-9.]*/XXX/g'
38 cd local-stream
42 cd local-stream
39 hg verify
43 hg verify
40 cd ..
44 cd ..
41
45
42 echo "# clone remote via pull"
46 echo "# clone remote via pull"
43 hg clone -e ./dummyssh ssh://user@dummy/remote local
47 hg clone -e ./dummyssh ssh://user@dummy/remote local
44
48
45 echo "# verify"
49 echo "# verify"
46 cd local
50 cd local
47 hg verify
51 hg verify
48
52
53 echo '[hooks]' >> .hg/hgrc
54 echo 'changegroup = echo changegroup in local: u=$HG_URL >> ../dummylog' >> .hg/hgrc
55
49 echo "# empty default pull"
56 echo "# empty default pull"
50 hg paths
57 hg paths
51 hg pull -e ../dummyssh
58 hg pull -e ../dummyssh
52
59
53 echo "# local change"
60 echo "# local change"
54 echo bleah > foo
61 echo bleah > foo
55 hg ci -m "add" -d "1000000 0"
62 hg ci -m "add" -d "1000000 0"
56
63
57 echo "# updating rc"
64 echo "# updating rc"
58 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
65 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
59 echo "[ui]" >> .hg/hgrc
66 echo "[ui]" >> .hg/hgrc
60 echo "ssh = ../dummyssh" >> .hg/hgrc
67 echo "ssh = ../dummyssh" >> .hg/hgrc
61
68
62 echo "# find outgoing"
69 echo "# find outgoing"
63 hg out ssh://user@dummy/remote
70 hg out ssh://user@dummy/remote
64
71
65 echo "# find incoming on the remote side"
72 echo "# find incoming on the remote side"
66 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
73 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
67
74
68 echo "# push"
75 echo "# push"
69 hg push
76 hg push
70
77
71 cd ../remote
78 cd ../remote
72
79
73 echo "# check remote tip"
80 echo "# check remote tip"
74 hg tip
81 hg tip
75 hg verify
82 hg verify
76 hg cat foo
83 hg cat foo
77
84
78 echo z > z
85 echo z > z
79 hg ci -A -m z -d '1000001 0' z
86 hg ci -A -m z -d '1000001 0' z
80
87
81 cd ../local
88 cd ../local
82 echo r > r
89 echo r > r
83 hg ci -A -m z -d '1000002 0' r
90 hg ci -A -m z -d '1000002 0' r
84
91
85 echo "# push should fail"
92 echo "# push should fail"
86 hg push
93 hg push
87
94
88 echo "# push should succeed"
95 echo "# push should succeed"
89 hg push -f
96 hg push -f
90
97
91 cd ..
98 cd ..
92 cat dummylog
99 cat dummylog
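Over ssh the server-side hooks do not see a literal URL; they see a pseudo-URL built from the client address that dummyssh now supplies via SSH_CLIENT, e.g. remote:ssh:127.0.0.1. A hook that wants to branch on the transport could take that value apart as sketched below (the function and log names are assumptions, not part of the test):

changegroup_source() {
    case "$HG_URL" in
        remote:ssh:*)  echo "push over ssh from ${HG_URL#remote:ssh:}" >> push.log ;;
        remote:http*)  echo "push over http" >> push.log ;;
        *)             echo "pull or local source: $HG_URL" >> push.log ;;
    esac
}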
@@ -1,87 +1,89 b''
1 # creating 'remote'
1 # creating 'remote'
2 # clone remote via stream
2 # clone remote via stream
3 streaming all changes
3 streaming all changes
4 XXX files to transfer, XXX bytes of data
4 XXX files to transfer, XXX bytes of data
5 transferred XXX bytes in XXX seconds (XXX KB/sec)
5 transferred XXX bytes in XXX seconds (XXX KB/sec)
6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
6 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
7 checking changesets
7 checking changesets
8 checking manifests
8 checking manifests
9 crosschecking files in changesets and manifests
9 crosschecking files in changesets and manifests
10 checking files
10 checking files
11 1 files, 1 changesets, 1 total revisions
11 1 files, 1 changesets, 1 total revisions
12 # clone remote via pull
12 # clone remote via pull
13 requesting all changes
13 requesting all changes
14 adding changesets
14 adding changesets
15 adding manifests
15 adding manifests
16 adding file changes
16 adding file changes
17 added 1 changesets with 1 changes to 1 files
17 added 1 changesets with 1 changes to 1 files
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
18 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
19 # verify
19 # verify
20 checking changesets
20 checking changesets
21 checking manifests
21 checking manifests
22 crosschecking files in changesets and manifests
22 crosschecking files in changesets and manifests
23 checking files
23 checking files
24 1 files, 1 changesets, 1 total revisions
24 1 files, 1 changesets, 1 total revisions
25 # empty default pull
25 # empty default pull
26 default = ssh://user@dummy/remote
26 default = ssh://user@dummy/remote
27 pulling from ssh://user@dummy/remote
27 pulling from ssh://user@dummy/remote
28 searching for changes
28 searching for changes
29 no changes found
29 no changes found
30 # local change
30 # local change
31 # updating rc
31 # updating rc
32 # find outgoing
32 # find outgoing
33 searching for changes
33 searching for changes
34 changeset: 1:c54836a570be
34 changeset: 1:c54836a570be
35 tag: tip
35 tag: tip
36 user: test
36 user: test
37 date: Mon Jan 12 13:46:40 1970 +0000
37 date: Mon Jan 12 13:46:40 1970 +0000
38 summary: add
38 summary: add
39
39
40 # find incoming on the remote side
40 # find incoming on the remote side
41 searching for changes
41 searching for changes
42 changeset: 1:c54836a570be
42 changeset: 1:c54836a570be
43 tag: tip
43 tag: tip
44 user: test
44 user: test
45 date: Mon Jan 12 13:46:40 1970 +0000
45 date: Mon Jan 12 13:46:40 1970 +0000
46 summary: add
46 summary: add
47
47
48 # push
48 # push
49 pushing to ssh://user@dummy/remote
49 pushing to ssh://user@dummy/remote
50 searching for changes
50 searching for changes
51 remote: adding changesets
51 remote: adding changesets
52 remote: adding manifests
52 remote: adding manifests
53 remote: adding file changes
53 remote: adding file changes
54 remote: added 1 changesets with 1 changes to 1 files
54 remote: added 1 changesets with 1 changes to 1 files
55 # check remote tip
55 # check remote tip
56 changeset: 1:c54836a570be
56 changeset: 1:c54836a570be
57 tag: tip
57 tag: tip
58 user: test
58 user: test
59 date: Mon Jan 12 13:46:40 1970 +0000
59 date: Mon Jan 12 13:46:40 1970 +0000
60 summary: add
60 summary: add
61
61
62 checking changesets
62 checking changesets
63 checking manifests
63 checking manifests
64 crosschecking files in changesets and manifests
64 crosschecking files in changesets and manifests
65 checking files
65 checking files
66 1 files, 2 changesets, 2 total revisions
66 1 files, 2 changesets, 2 total revisions
67 bleah
67 bleah
68 # push should fail
68 # push should fail
69 pushing to ssh://user@dummy/remote
69 pushing to ssh://user@dummy/remote
70 searching for changes
70 searching for changes
71 abort: unsynced remote changes!
71 abort: unsynced remote changes!
72 (did you forget to sync? use push -f to force)
72 (did you forget to sync? use push -f to force)
73 # push should succeed
73 # push should succeed
74 pushing to ssh://user@dummy/remote
74 pushing to ssh://user@dummy/remote
75 searching for changes
75 searching for changes
76 remote: adding changesets
76 remote: adding changesets
77 remote: adding manifests
77 remote: adding manifests
78 remote: adding file changes
78 remote: adding file changes
79 remote: added 1 changesets with 1 changes to 1 files
79 remote: added 1 changesets with 1 changes to 1 files
80 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
80 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
81 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
81 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
82 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
82 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
83 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
83 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
84 Got arguments 1:user@dummy 2:hg -R local serve --stdio 3: 4: 5:
84 Got arguments 1:user@dummy 2:hg -R local serve --stdio 3: 4: 5:
85 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
85 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
86 changegroup in remote: u=remote:ssh:127.0.0.1
86 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
87 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
87 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
88 Got arguments 1:user@dummy 2:hg -R remote serve --stdio 3: 4: 5:
89 changegroup in remote: u=remote:ssh:127.0.0.1
@@ -1,42 +1,50 b''
1 #!/bin/sh
1 #!/bin/sh
2
2
3 http_proxy= hg clone static-http://localhost:20059/ copy
3 http_proxy= hg clone static-http://localhost:20059/ copy
4 echo $?
4 echo $?
5 ls copy 2>/dev/null || echo copy: No such file or directory
5 ls copy 2>/dev/null || echo copy: No such file or directory
6
6
7 # This server doesn't do range requests so it's basically only good for
7 # This server doesn't do range requests so it's basically only good for
8 # one pull
8 # one pull
9 cat > dumb.py <<EOF
9 cat > dumb.py <<EOF
10 import BaseHTTPServer, SimpleHTTPServer, signal
10 import BaseHTTPServer, SimpleHTTPServer, signal
11
11
12 def run(server_class=BaseHTTPServer.HTTPServer,
12 def run(server_class=BaseHTTPServer.HTTPServer,
13 handler_class=SimpleHTTPServer.SimpleHTTPRequestHandler):
13 handler_class=SimpleHTTPServer.SimpleHTTPRequestHandler):
14 server_address = ('localhost', 20059)
14 server_address = ('localhost', 20059)
15 httpd = server_class(server_address, handler_class)
15 httpd = server_class(server_address, handler_class)
16 httpd.serve_forever()
16 httpd.serve_forever()
17
17
18 signal.signal(signal.SIGTERM, lambda x: sys.exit(0))
18 signal.signal(signal.SIGTERM, lambda x: sys.exit(0))
19 run()
19 run()
20 EOF
20 EOF
21
21
22 python dumb.py 2>/dev/null &
22 python dumb.py 2>/dev/null &
23 echo $! >> $DAEMON_PIDS
23 echo $! >> $DAEMON_PIDS
24
24
25 mkdir remote
25 mkdir remote
26 cd remote
26 cd remote
27 hg init
27 hg init
28 echo foo > bar
28 echo foo > bar
29 hg add bar
29 hg add bar
30 hg commit -m"test" -d "1000000 0"
30 hg commit -m"test" -d "1000000 0"
31 hg tip
31 hg tip
32
32
33 cd ..
33 cd ..
34
34
35 http_proxy= hg clone static-http://localhost:20059/remote local
35 http_proxy= hg clone static-http://localhost:20059/remote local
36
36
37 cd local
37 cd local
38 hg verify
38 hg verify
39 cat bar
39 cat bar
40
41 cd ../remote
42 echo baz > quux
43 hg commit -A -mtest2 -d '100000000 0'
44
45 cd ../local
46 echo '[hooks]' >> .hg/hgrc
47 echo 'changegroup = echo changegroup: u=$HG_URL' >> .hg/hgrc
40 http_proxy= hg pull
48 http_proxy= hg pull
41
49
42 kill $!
50 kill $!
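For reference, the hook wiring added above can be replayed by hand against any static-http source; only the expected hook line (taken from the output file below) is added here as a comment:

cd local
echo '[hooks]' >> .hg/hgrc
echo 'changegroup = echo changegroup: u=$HG_URL' >> .hg/hgrc
http_proxy= hg pull
# expected hook output: changegroup: u=static-http://localhost:20059/remote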
@@ -1,24 +1,30 b''
1 abort: Connection refused
1 abort: Connection refused
2 255
2 255
3 copy: No such file or directory
3 copy: No such file or directory
4 changeset: 0:53e17d176ae6
4 changeset: 0:53e17d176ae6
5 tag: tip
5 tag: tip
6 user: test
6 user: test
7 date: Mon Jan 12 13:46:40 1970 +0000
7 date: Mon Jan 12 13:46:40 1970 +0000
8 summary: test
8 summary: test
9
9
10 requesting all changes
10 requesting all changes
11 adding changesets
11 adding changesets
12 adding manifests
12 adding manifests
13 adding file changes
13 adding file changes
14 added 1 changesets with 1 changes to 1 files
14 added 1 changesets with 1 changes to 1 files
15 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
15 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
16 checking changesets
16 checking changesets
17 checking manifests
17 checking manifests
18 crosschecking files in changesets and manifests
18 crosschecking files in changesets and manifests
19 checking files
19 checking files
20 1 files, 1 changesets, 1 total revisions
20 1 files, 1 changesets, 1 total revisions
21 foo
21 foo
22 adding quux
23 changegroup: u=static-http://localhost:20059/remote
22 pulling from static-http://localhost:20059/remote
24 pulling from static-http://localhost:20059/remote
23 searching for changes
25 searching for changes
24 no changes found
26 adding changesets
27 adding manifests
28 adding file changes
29 added 1 changesets with 1 changes to 1 files
30 (run 'hg update' to get a working copy)