remove non-prefixed environment variables from hooks.
Vadim Gelfer
r2288:dfa17bd1 default
@@ -1,399 +1,398 @@ doc/hgrc.5.txt
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed.  For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc.  Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running.  Options in these files apply to all Mercurial
  commands executed by any user in any directory.  Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory.  Options in this file override
  per-installation and per-system options.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository.  This file is not version-controlled, and
  will not get transferred during a "clone" operation.  Options in
  this file override options in all other configuration files.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

    [spam]
    eggs=ham
    green=
       eggs

Each line contains one entry.  If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values.  Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin.  This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root.  For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt".  To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:".  If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template.  The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command.  The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects.  In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

email::
  Settings for extensions that send email messages.
  from;;
    Optional.  Email address to use in "From" header and SMTP envelope
    of outgoing messages.

extensions::
  Mercurial has an extension mechanism for adding new features.  To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.
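  For instance, the two forms described above might look like this
  (the extension name and file path below are made-up placeholders,
  not recommendations from this page):

```ini
[extensions]
# module already on Python's search path: bare name, nothing after "="
hgext.notify =
# module loaded from an explicit path, under a name of your choosing
myfeature = ~/hgext/myfeature.py
```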

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit.  Multiple
  hooks can be run for the same action by appending a suffix to the
  action.  Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give added
  useful information.  For each hook below, the environment variables
  it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle.  ID of the first new changeset is in $HG_NODE.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE.  Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository.  The ID of the newly arrived changeset is in
    $HG_NODE.
  outgoing;;
    Run after sending changes from the local repository to another.  ID
    of the first changeset sent is in $HG_NODE.  Source of operation is
    in $HG_SOURCE; see the "preoutgoing" hook for a description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed.  Non-zero status
    will cause the push, pull or unbundle to fail.
  precommit;;
    Run before starting a local commit.  Exit status 0 allows the
    commit to proceed.  Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another.  Non-zero status will cause failure.  This lets you
    prevent pull over http or ssh.  It also runs for local pull, push
    (outbound) and bundle commands, but it is not an effective guard
    there, since the repository files can simply be copied instead.
    Source of operation is in $HG_SOURCE.  If "serve", the operation
    is happening on behalf of a remote ssh or http repository.  If
    "push", "pull" or "bundle", the operation is happening on behalf
    of a repository on the same system.
  pretag;;
    Run before creating a tag.  Exit status 0 allows the tag to be
    created.  Non-zero status will cause the tag to fail.  ID of the
    changeset to tag is in $HG_NODE.  Name of the tag is in $HG_TAG.
    Tag is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed.  The changegroup is
    visible to the hook program.  This lets you validate incoming
    changes before accepting them.  Passed the ID of the first new
    changeset in $HG_NODE.  Exit status 0 allows the transaction to
    commit.  Non-zero status will cause the transaction to be rolled
    back and the push, pull or unbundle will fail.
  pretxncommit;;
    Run after a changeset has been created, but before the transaction
    has been committed.  The changeset is visible to the hook program.
    This lets you validate the commit message and changes.  Exit
    status 0 allows the commit to proceed.  Non-zero status will cause
    the transaction to be rolled back.  ID of the changeset is in
    $HG_NODE.  Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory.  Exit status 0 allows
    the update to proceed.  Non-zero status will prevent the update.
    Changeset ID of the first new parent is in $HG_PARENT1.  If the
    update is a merge, the ID of the second new parent is in
    $HG_PARENT2.
  tag;;
    Run after a tag is created.  ID of the tagged changeset is in
    $HG_NODE.  Name of the tag is in $HG_TAG.  Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory.  Changeset ID of the
    first new parent is in $HG_PARENT1.  If the update was a merge,
    the ID of the second new parent is in $HG_PARENT2.  If the update
    succeeded, $HG_ERROR=0.  If the update failed (e.g. because
    conflicts were not resolved), $HG_ERROR=1.

  Note: In earlier releases, the names of hook environment variables
  did not have a "HG_" prefix.  The old unprefixed names are no longer
  provided in the environment.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process.  Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used.  Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  A Python hook must return a "true" value to succeed.  Returning a
  "false" value or raising an exception is treated as failure of the
  hook.
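  As a minimal sketch of the calling convention above (the module and
  hook names here are invented for illustration, and the trivial check
  stands in for real validation logic; the real objects passed for
  "ui" and "repo" are Mercurial classes, which this sketch does not
  touch):

```python
# Hypothetical hgrc entry (module name is made up):
#
#   [hooks]
#   pretxncommit.check = python:checks.hooks.verify
#
# Mercurial calls the hook with keyword arguments "ui", "repo" and
# "hooktype", plus the environment variables listed above in lower
# case without the "HG_" prefix (e.g. "node", "parent1", "parent2").

def verify(ui=None, repo=None, hooktype=None, node=None, **kwargs):
    """Return a true value to allow the operation, false to veto it."""
    if hooktype == "pretxncommit" and not node:
        # No changeset ID supplied: treat as failure, vetoing the commit.
        return False
    return True
```

  Returning False here rolls the transaction back, exactly as a
  non-zero exit status would for a shell hook.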

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional.  Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional.  Password to authenticate with at the proxy server.
  user;;
    Optional.  User name to authenticate with at the proxy server.
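  Putting those keys together (the host names and credentials below
  are illustrative values, not defaults):

```ini
[http_proxy]
host = myproxy:8000
no = localhost,intranet.example.com
user = proxyuser
passwd = proxypass
```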

smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Optional.  Host name of mail server.  Default: "mail".
  port;;
    Optional.  Port to connect to on mail server.  Default: 25.
  tls;;
    Optional.  Whether to connect to mail server using TLS.  True or
    False.  Default: False.
  username;;
    Optional.  User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional.  Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
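  A complete section using these keys might look like this (the server
  name and credentials are examples):

```ini
[smtp]
host = smtp.example.com
port = 25
tls = True
# username and password must be given together, or not at all
username = hguser
password = hgpass
```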

paths::
  Assigns symbolic names to repositories.  The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository.  Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is the repository from which the current repository was
    cloned.
  default-push;;
    Optional.  Directory or URL to use when pushing if no destination
    is specified.
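  For example (the URLs below are placeholders):

```ini
[paths]
# used by "hg pull" when no source is given
default = http://hg.example.com/project
# used by "hg push" when no destination is given
default-push = ssh://hg.example.com/project
```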

ui::
  User interface controls.
  debug;;
    Print debugging information.  True or False.  Default is False.
  editor;;
    The editor to use during a commit.  Default is $EDITOR or "vi".
  ignore;;
    A file to read per-user ignore patterns from.  This file should be
    in the same format as a repository-wide .hgignore file.  This
    option supports hook syntax, so if you want to specify multiple
    ignore files, you can do so by setting something like
    "ignore.other = ~/.hgignore2".  For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Allow the user to be prompted.  True or False.  Default is True.
  logtemplate;;
    Template string for commands that print changesets.
  style;;
    Name of style to use for command output.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed.  True or False.  Default is
    False.
  remotecmd;;
    Remote command to use for clone/push/pull operations.  Default is
    'hg'.
  ssh;;
    Command to use for SSH connections.  Default is 'ssh'.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative
    value means no timeout.  Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>".  Default is $EMAIL or username@hostname,
    unless username is set to an empty string, which enforces
    specifying the username manually.
  verbose;;
    Increase the amount of output printed.  True or False.  Default is
    False.


web::
  Web interface configuration.
  accesslog;;
    Where to output the access log.  Default is stdout.
  address;;
    Interface address to bind to.  Default is all.
  allowbz2;;
    Whether to allow .tar.bz2 downloading of repo revisions.  Default
    is false.
  allowgz;;
    Whether to allow .tar.gz downloading of repo revisions.  Default
    is false.
  allowpull;;
    Whether to allow pulling from the repository.  Default is true.
  allowzip;;
    Whether to allow .zip downloading of repo revisions.  Default is
    false.  This feature creates temporary files.
  baseurl;;
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct
    URLs.  Example: "http://hgserver/repos/"
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log.  Default is stderr.
  ipv6;;
    Whether to use IPv6.  Default is false.
  name;;
    Repository name to use in the web interface.  Default is the
    current working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog.  Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset.  Default is 10.
  port;;
    Port to listen on.  Default is 8000.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates.  Default is install path.
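  Tying several of these keys together, a minimal configuration for
  serving a repository could look like this (all values are examples,
  not defaults):

```ini
[web]
name = project
description = Example project repository
port = 8080
allowpull = true
allowzip = false
```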


AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1), hgignore(5)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005, 2006 Matt Mackall.
Free use of this software is granted under the terms of the GNU
General Public License (GPL).
@@ -1,2100 +1,2099 @@ mercurial/localrepo.py
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import os, util
import filelog, manifest, changelog, dirstate, repo
from node import *
from i18n import gettext as _
from demandload import *
demandload(globals(), "appendfile changegroup")
demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
demandload(globals(), "revlog traceback")

class localrepository(object):
    def __del__(self):
        self.transhandle = None
    def __init__(self, parentui, path=None, create=0):
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp:
                    raise repo.RepoError(_("no repo found"))
            path = p
        self.path = os.path.join(path, ".hg")

        if not create and not os.path.isdir(self.path):
            raise repo.RepoError(_("repository %s not found") % path)

        self.root = os.path.abspath(path)
        self.origroot = path
        self.ui = ui.ui(parentui=parentui)
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)

        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
        except IOError:
            pass

        v = self.ui.revlogopts
        self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
        self.revlogv1 = self.revlogversion != revlog.REVLOGV0
        fl = v.get('flags', None)
        flags = 0
        if fl != None:
            for x in fl.split():
                flags |= revlog.flagstr(x)
        elif self.revlogv1:
            flags = revlog.REVLOG_DEFAULT_FLAGS

        v = self.revlogversion | flags
        self.manifest = manifest.manifest(self.opener, v)
        self.changelog = changelog.changelog(self.opener, v)

        # the changelog might not have the inline index flag
        # on. If the format of the changelog is the same as found in
        # .hgrc, apply any flags found in the .hgrc as well.
        # Otherwise, just version from the changelog
        v = self.changelog.version
        if v == self.revlogversion:
            v |= flags
        self.revlogversion = v

        self.tagscache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None
        self.transhandle = None

        if create:
            os.mkdir(self.path)
            os.mkdir(self.join("data"))

        self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)

    def hook(self, name, throw=False, **args):
        def callhook(hname, funcname):
            '''call python hook. hook is callable object, looked up as
            name in python module. if callable returns "true", hook
            fails, else passes. if hook raises exception, treated as
            hook failure. exception propagates if throw is "true".

            reason for "true" meaning "hook failed" is so that
            unmodified commands (e.g. mercurial.commands.update) can
            be run as hooks without wrappers to convert return values.'''

            self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
            d = funcname.rfind('.')
            if d == -1:
                raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
                                 % (hname, funcname))
            modname = funcname[:d]
            try:
                obj = __import__(modname)
            except ImportError:
                raise util.Abort(_('%s hook is invalid '
                                   '(import of "%s" failed)') %
                                 (hname, modname))
            try:
                for p in funcname.split('.')[1:]:
                    obj = getattr(obj, p)
            except AttributeError, err:
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not defined)') %
                                 (hname, funcname))
            if not callable(obj):
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not callable)') %
                                 (hname, funcname))
            try:
                r = obj(ui=self.ui, repo=self, hooktype=name, **args)
            except (KeyboardInterrupt, util.SignalInterrupt):
                raise
            except Exception, exc:
                if isinstance(exc, util.Abort):
                    self.ui.warn(_('error: %s hook failed: %s\n') %
                                 (hname, exc.args[0] % exc.args[1:]))
                else:
                    self.ui.warn(_('error: %s hook raised an exception: '
                                   '%s\n') % (hname, exc))
                if throw:
                    raise
                if self.ui.traceback:
                    traceback.print_exc()
                return True
            if r:
                if throw:
                    raise util.Abort(_('%s hook failed') % hname)
                self.ui.warn(_('warning: %s hook failed\n') % hname)
            return r

        def runhook(name, cmd):
            self.ui.note(_("running hook %s: %s\n") % (name, cmd))
-           env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()] +
-                      [(k.upper(), v) for k, v in args.iteritems()])
+           env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
            r = util.system(cmd, environ=env, cwd=self.root)
            if r:
                desc, r = util.explain_exit(r)
                if throw:
                    raise util.Abort(_('%s hook %s') % (name, desc))
                self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
            return r

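After this change a shell hook's environment carries each hook argument only
under its HG_ prefix; previously both NODE and HG_NODE (and so on) were set.
The effect can be sketched in isolation with made-up argument values, written
with dict.items() so it runs on modern Python:

```python
# Hypothetical hook arguments, as hook() would pass them to runhook().
args = {'node': 'abc123', 'parent1': 'def456'}

# Same construction as the new runhook(): uppercase each key, add HG_ prefix.
env = dict([('HG_' + k.upper(), v) for k, v in args.items()])

print(sorted(env))  # only HG_-prefixed names remain
```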
        r = False
        hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
                 if hname.split(".", 1)[0] == name and cmd]
        hooks.sort()
        for hname, cmd in hooks:
            if cmd.startswith('python:'):
                r = callhook(hname, cmd[7:].strip()) or r
            else:
                r = runhook(hname, cmd) or r
        return r

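The dispatch loop above handles both flavors of hook configured in hgrc's
[hooks] section: entries whose command starts with "python:" are called
in-process, anything else runs through the shell. A sketch (the hook names and
the mychecks module are illustrative, not part of Mercurial):

```ini
[hooks]
# shell hook: arguments arrive as HG_-prefixed environment variables
commit = echo "committed $HG_NODE"
# in-process hook: "python:" prefix followed by a dotted path to a callable
pretxncommit.check = python:mychecks.checklog
```
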
    def tags(self):
        '''return a mapping of tag to node'''
        if not self.tagscache:
            self.tagscache = {}

            def parsetag(line, context):
                if not line:
                    return
                s = l.split(" ", 1)
                if len(s) != 2:
                    self.ui.warn(_("%s: ignoring invalid tag\n") % context)
                    return
                node, key = s
                try:
                    bin_n = bin(node)
                except TypeError:
                    self.ui.warn(_("%s: ignoring invalid tag\n") % context)
                    return
                if bin_n not in self.changelog.nodemap:
                    self.ui.warn(_("%s: ignoring invalid tag\n") % context)
                    return
                self.tagscache[key.strip()] = bin_n

            # read each head of the tags file, ending with the tip
            # and add each tag found to the map, with "newer" ones
            # taking precedence
            fl = self.file(".hgtags")
            h = fl.heads()
            h.reverse()
            for r in h:
                count = 0
                for l in fl.read(r).splitlines():
                    count += 1
                    parsetag(l, ".hgtags:%d" % count)

            try:
                f = self.opener("localtags")
                count = 0
                for l in f:
                    count += 1
                    parsetag(l, "localtags:%d" % count)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

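tags() above accepts lines of the form "<hex node> <tag name>" from .hgtags
and localtags. The parsing step it performs can be sketched on its own (the
node here is a made-up 40-digit hex string):

```python
# One .hgtags line: 40-char hex changeset id, a space, then the tag name.
line = "0123456789abcdef0123456789abcdef01234567 v1.0\n"

node, key = line.split(" ", 1)   # split only on the first space
key = key.strip()                # tag names may contain spaces; drop newline

print(node, key)
```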
    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def lookup(self, key):
        try:
            return self.tags()[key]
        except KeyError:
            try:
                return self.changelog.lookup(key)
            except:
                raise repo.RepoError(_("unknown revision '%s'") % key)

    def dev(self):
        return os.stat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.opener, f, self.revlogversion)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        tr = self.transhandle
        if tr != None and tr.running():
            return tr.nest()

        # save dirstate for undo
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        tr = transaction.transaction(self.ui.warn, self.opener,
                                     self.join("journal"),
                                     aftertrans(self.path))
        self.transhandle = tr
        return tr

    def recover(self):
        l = self.lock()
        if os.path.exists(self.join("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.opener, self.join("journal"))
            self.reload()
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def undo(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.join("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.opener, self.join("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.reload()
            self.wreload()
        else:
            self.ui.warn(_("no undo information available\n"))

    def wreload(self):
        self.dirstate.read()

    def reload(self):
        self.changelog.load()
        self.manifest.load()
        self.tagscache = None
        self.nodetagscache = None

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
                desc=None):
        try:
            l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
        except lock.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %s\n") %
                         (desc, inst.args[0]))
            # default to 600 seconds timeout
            l = lock.lock(self.join(lockname),
                          int(self.ui.config("ui", "timeout") or 600),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock("lock", wait, acquirefn=self.reload,
                            desc=_('repository %s') % self.origroot)

    def wlock(self, wait=1):
        return self.do_lock("wlock", wait, self.dirstate.write,
                            self.wreload,
                            desc=_('working directory of %s') % self.origroot)

    def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
        "determine whether a new filenode is needed"
        fp1 = manifest1.get(filename, nullid)
        fp2 = manifest2.get(filename, nullid)

        if fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = filelog.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and text == filelog.read(fp1):
            return (fp1, None, None)

        return (None, fp1, fp2)

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        orig_parent = self.dirstate.parents()[0] or nullid
        p1 = p1 or self.dirstate.parents()[0] or nullid
        p2 = p2 or self.dirstate.parents()[1] or nullid
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])
        changed = []

        if orig_parent == p1:
            update_dirstate = 1
        else:
            update_dirstate = 0

        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        tr = self.transaction()
        mm = m1.copy()
        mfm = mf1.copy()
        linkrev = self.changelog.count()
        for f in files:
            try:
                t = self.wread(f)
                tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
                r = self.file(f)
                mfm[f] = tm

                (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    mm[f] = entry
                    continue

                mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
                changed.append(f)
                if update_dirstate:
                    self.dirstate.update([f], "n")
            except IOError:
                try:
                    del mm[f]
                    del mfm[f]
                    if update_dirstate:
                        self.dirstate.forget([f])
                except:
                    # deleted from p2?
                    pass

        mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
        user = user or self.ui.username()
        n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
        tr.close()
        if update_dirstate:
            self.dirstate.setparents(n, nullid)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, lock=None, wlock=None,
               force_editor=False):
        commit = []
        remove = []
        changed = []

        if files:
            for f in files:
                s = self.dirstate.state(f)
                if s in 'nmai':
                    commit.append(f)
                elif s == 'r':
                    remove.append(f)
                else:
                    self.ui.warn(_("%s not tracked!\n") % f)
        else:
            modified, added, removed, deleted, unknown = self.changes(match=match)
            commit = modified + added
            remove = removed

        p1, p2 = self.dirstate.parents()
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])

        if not commit and not remove and not force and p2 == nullid:
            self.ui.status(_("nothing changed\n"))
            return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        if not lock:
            lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
                t = self.wread(f)
            except IOError:
                self.ui.warn(_("trouble committing %s!\n") % f)
                raise

            r = self.file(f)

            meta = {}
            cp = self.dirstate.copied(f)
            if cp:
                meta["copy"] = cp
                meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
                self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
                fp1, fp2 = nullid, nullid
            else:
                entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    new[f] = entry
                    continue

            new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
            # remember what we've added so that we can later calculate
            # the files to pull from a set of changesets
            changed.append(f)

        # update manifest
        m1 = m1.copy()
        m1.update(new)
        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
                               (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        user = user or self.ui.username()
        if not text or force_editor:
            edittext = []
            if text:
                edittext.append(text)
            edittext.append("")
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            edittext = self.ui.edit("\n".join(edittext), user)
            os.chdir(olddir)
            if not edittext.rstrip():
                return None
            text = edittext

        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        self.dirstate.setparents(n)
        self.dirstate.update(new, "n")
        self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

    def walk(self, node=None, files=[], match=util.always, badmatch=None):
        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                fdict.pop(fn, None)
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                if badmatch and badmatch(fn):
                    if match(fn):
                        yield 'b', fn
                else:
                    self.ui.warn(_('%s: No such file in rev %s\n') % (
                        util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
                yield src, fn

    def changes(self, node1=None, node2=None, files=[], match=util.always,
                wlock=None, show_ignored=None):
        """return changes between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            t2 = self.file(fn).read(mf.get(fn, nullid))
            return cmp(t1, t2)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = dict(self.manifest.read(change[0]))
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        if node1:
            # read the manifest from node1 before the manifest from node2,
            # so that we'll hit the manifest cache if we're going through
            # all the revisions in parent->child order.
            mf1 = mfmatches(node1)

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            lookup, modified, added, removed, deleted, unknown, ignored = (
                self.dirstate.changes(files, match, show_ignored))

            # are we comparing working dir against its parent?
            if not node1:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        elif wlock is not None:
                            self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            deleted, unknown, ignored = [], [], []
            mf2 = mfmatches(node2)

        if node1:
            # flush lists from dirstate before comparing manifests
            modified, added = [], []

            for fn in mf2:
                if mf1.has_key(fn):
                    if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
                        modified.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown, ignored:
            l.sort()
        if show_ignored is None:
            return (modified, added, removed, deleted, unknown)
        else:
            return (modified, added, removed, deleted, unknown, ignored)

    def add(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        mf = self.manifest.readflags(mn)
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn("%s not removed!\n" % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), mf[f])
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [ h for h in heads ]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head.  The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while n:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def findincoming(self, remote, base=None, heads=None, force=False):
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base == None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        rep = {}
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid:
                    break
                if n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                if n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                            base[n[2]] = 1 # latest known
                            continue

                    for a in n[2:4]:
                        if a not in rep:
                            r.append(a)
                            rep[a] = 1

                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        if b[0] in m:
                            self.ui.debug(_("found base node %s\n")
                                          % short(b[0]))
                            base[b[0]] = 1
                        elif b[0] not in seen:
                            unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f[:4]))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base == None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads[p1] = True
                if p2 in heads:
                    updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset

    def pull(self, remote, heads=None, force=False):
        l = self.lock()

        fetch = self.findincoming(remote, force=force)
        if fetch == [nullid]:
            self.ui.status(_("requesting all changes\n"))

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 0

        if heads is None:
            cg = remote.changegroup(fetch, 'pull')
        else:
            cg = remote.changegroupsubset(fetch, heads, 'pull')
        return self.addchangegroup(cg, 'pull')

    def push(self, remote, force=False, revs=None):
        lock = remote.lock()

        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync?"
                             " use push -f to force)\n"))
            return 1

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return 1
        elif not force:
            # FIXME we don't properly detect creation of new heads
            # in the push -r case, assume the user knows what he's doing
            if not revs and len(remote_heads) < len(heads) \
               and remote_heads != [nullid]:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return 1

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return remote.addchangegroup(cg, 'push')

    def changegroupsubset(self, bases, heads, source):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to."""

        self.hook('preoutgoing', throw=True, source=source)

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)

        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup.  The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and the total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file, let's remove
        # all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generator function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if msng_filenode_set.has_key(fname):
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield changegroup.genchunk(fname)
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        yield chnk
                if msng_filenode_set.has_key(fname):
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield changegroup.closechunk()

        if msng_cl_lst:
            self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return util.chunkbuffer(gengroup())

    def changegroup(self, basenodes, source):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])

        def identity(x):
            return x

        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.genchunk(fname)
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield changegroup.closechunk()

        if nodes:
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source, srctype):
        """add changegroup to repo.
        returns number of heads modified or added + 1."""

        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return cl.count()

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype)

        changesets = files = revisions = 0

        tr = self.transaction()

        # write changelog and manifest data to temp files so
        # concurrent readers will not see inconsistent view
        cl = None
        try:
            cl = appendfile.appendchangelog(self.opener, self.changelog.version)

            oldheads = len(cl.heads())

            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            co = cl.tip()
            chunkiter = changegroup.chunkiter(source)
            cn = cl.addgroup(chunkiter, csmap, tr, 1) # unique
            cnr, cor = map(cl.rev, (cn, co))
            if cn == nullid:
                cnr = cor
            changesets = cnr - cor

            mf = None
            try:
                mf = appendfile.appendmanifest(self.opener,
                                               self.manifest.version)

                # pull off the manifest group
                self.ui.status(_("adding manifests\n"))
                mm = mf.tip()
                chunkiter = changegroup.chunkiter(source)
                mo = mf.addgroup(chunkiter, revmap, tr)

                # process the files
                self.ui.status(_("adding file changes\n"))
                while 1:
                    f = changegroup.getchunk(source)
                    if not f:
                        break
                    self.ui.debug(_("adding %s revisions\n") % f)
                    fl = self.file(f)
                    o = fl.count()
                    chunkiter = changegroup.chunkiter(source)
                    n = fl.addgroup(chunkiter, revmap, tr)
                    revisions += fl.count() - o
                    files += 1

                # write order here is important so concurrent readers will see
                # consistent view of repo
                mf.writedata()
            finally:
                if mf:
                    mf.cleanup()
                cl.writedata()
        finally:
            if cl:
                cl.cleanup()

        # make changelog and manifest see real files again
        self.changelog = changelog.changelog(self.opener, self.changelog.version)
        self.manifest = manifest.manifest(self.opener, self.manifest.version)
        self.changelog.checkinlinesize(tr)
        self.manifest.checkinlinesize(tr)

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads > oldheads:
            heads = _(" (+%d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        if changesets > 0:
            self.hook('pretxnchangegroup', throw=True,
                      node=hex(self.changelog.node(cor+1)), source=srctype)

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
                      source=srctype)

            for i in range(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)),
                          source=srctype)

        return newheads - oldheads + 1

    def update(self, node, allow=False, force=False, choose=None,
               moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
        pl = self.dirstate.parents()
        if not force and pl[1] != nullid:
            raise util.Abort(_("outstanding uncommitted merges"))

        err = False

        p1, p2 = pl[0], node
        pa = self.changelog.ancestor(p1, p2)
        m1n = self.changelog.read(p1)[0]
        m2n = self.changelog.read(p2)[0]
        man = self.manifest.ancestor(m1n, m2n)
        m1 = self.manifest.read(m1n)
        mf1 = self.manifest.readflags(m1n)
        m2 = self.manifest.read(m2n).copy()
        mf2 = self.manifest.readflags(m2n)
        ma = self.manifest.read(man)
        mfa = self.manifest.readflags(man)

        modified, added, removed, deleted, unknown = self.changes()

        # is this a jump, or a merge?  i.e. is there a linear path
        # from p1 to p2?
        linear_path = (pa == p1 or pa == p2)

        if allow and linear_path:
            raise util.Abort(_("there is nothing to merge, "
                               "just use 'hg update'"))
        if allow and not forcemerge:
            if modified or added or removed:
                raise util.Abort(_("outstanding uncommitted changes"))

        if not forcemerge and not force:
            for f in unknown:
                if f in m2:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) != 0:
                        raise util.Abort(_("'%s' already exists in the working"
                                           " dir and differs from remote") % f)

        # resolve the manifest to determine which files
        # we care about merging
        self.ui.note(_("resolving manifests\n"))
        self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
                      (force, allow, moddirstate, linear_path))
        self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1613 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1615 (short(man), short(m1n), short(m2n)))
1614 (short(man), short(m1n), short(m2n)))
1616
1615
1617 merge = {}
1616 merge = {}
1618 get = {}
1617 get = {}
1619 remove = []
1618 remove = []
1620
1619
1621 # construct a working dir manifest
1620 # construct a working dir manifest
1622 mw = m1.copy()
1621 mw = m1.copy()
1623 mfw = mf1.copy()
1622 mfw = mf1.copy()
1624 umap = dict.fromkeys(unknown)
1623 umap = dict.fromkeys(unknown)
1625
1624
1626 for f in added + modified + unknown:
1625 for f in added + modified + unknown:
1627 mw[f] = ""
1626 mw[f] = ""
1628 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1627 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1629
1628
1630 if moddirstate and not wlock:
1629 if moddirstate and not wlock:
1631 wlock = self.wlock()
1630 wlock = self.wlock()
1632
1631
1633 for f in deleted + removed:
1632 for f in deleted + removed:
1634 if f in mw:
1633 if f in mw:
1635 del mw[f]
1634 del mw[f]
1636
1635
1637 # If we're jumping between revisions (as opposed to merging),
1636 # If we're jumping between revisions (as opposed to merging),
1638 # and if neither the working directory nor the target rev has
1637 # and if neither the working directory nor the target rev has
1639 # the file, then we need to remove it from the dirstate, to
1638 # the file, then we need to remove it from the dirstate, to
1640 # prevent the dirstate from listing the file when it is no
1639 # prevent the dirstate from listing the file when it is no
1641 # longer in the manifest.
1640 # longer in the manifest.
1642 if moddirstate and linear_path and f not in m2:
1641 if moddirstate and linear_path and f not in m2:
1643 self.dirstate.forget((f,))
1642 self.dirstate.forget((f,))
1644
1643
1645 # Compare manifests
1644 # Compare manifests
1646 for f, n in mw.iteritems():
1645 for f, n in mw.iteritems():
1647 if choose and not choose(f):
1646 if choose and not choose(f):
1648 continue
1647 continue
1649 if f in m2:
1648 if f in m2:
1650 s = 0
1649 s = 0
1651
1650
1652 # is the wfile new since m1, and match m2?
1651 # is the wfile new since m1, and match m2?
1653 if f not in m1:
1652 if f not in m1:
1654 t1 = self.wread(f)
1653 t1 = self.wread(f)
1655 t2 = self.file(f).read(m2[f])
1654 t2 = self.file(f).read(m2[f])
1656 if cmp(t1, t2) == 0:
1655 if cmp(t1, t2) == 0:
1657 n = m2[f]
1656 n = m2[f]
1658 del t1, t2
1657 del t1, t2
1659
1658
1660 # are files different?
1659 # are files different?
1661 if n != m2[f]:
1660 if n != m2[f]:
1662 a = ma.get(f, nullid)
1661 a = ma.get(f, nullid)
1663 # are both different from the ancestor?
1662 # are both different from the ancestor?
1664 if n != a and m2[f] != a:
1663 if n != a and m2[f] != a:
1665 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1664 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1666 # merge executable bits
1665 # merge executable bits
1667 # "if we changed or they changed, change in merge"
1666 # "if we changed or they changed, change in merge"
1668 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1667 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1669 mode = ((a^b) | (a^c)) ^ a
1668 mode = ((a^b) | (a^c)) ^ a
1670 merge[f] = (m1.get(f, nullid), m2[f], mode)
1669 merge[f] = (m1.get(f, nullid), m2[f], mode)
1671 s = 1
1670 s = 1
1672 # are we clobbering?
1671 # are we clobbering?
1673 # is remote's version newer?
1672 # is remote's version newer?
1674 # or are we going back in time?
1673 # or are we going back in time?
1675 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1674 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1676 self.ui.debug(_(" remote %s is newer, get\n") % f)
1675 self.ui.debug(_(" remote %s is newer, get\n") % f)
1677 get[f] = m2[f]
1676 get[f] = m2[f]
1678 s = 1
1677 s = 1
1679 elif f in umap or f in added:
1678 elif f in umap or f in added:
1680 # this unknown file is the same as the checkout
1679 # this unknown file is the same as the checkout
1681 # we need to reset the dirstate if the file was added
1680 # we need to reset the dirstate if the file was added
1682 get[f] = m2[f]
1681 get[f] = m2[f]
1683
1682
1684 if not s and mfw[f] != mf2[f]:
1683 if not s and mfw[f] != mf2[f]:
1685 if force:
1684 if force:
1686 self.ui.debug(_(" updating permissions for %s\n") % f)
1685 self.ui.debug(_(" updating permissions for %s\n") % f)
1687 util.set_exec(self.wjoin(f), mf2[f])
1686 util.set_exec(self.wjoin(f), mf2[f])
1688 else:
1687 else:
1689 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1688 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1690 mode = ((a^b) | (a^c)) ^ a
1689 mode = ((a^b) | (a^c)) ^ a
1691 if mode != b:
1690 if mode != b:
1692 self.ui.debug(_(" updating permissions for %s\n")
1691 self.ui.debug(_(" updating permissions for %s\n")
1693 % f)
1692 % f)
1694 util.set_exec(self.wjoin(f), mode)
1693 util.set_exec(self.wjoin(f), mode)
1695 del m2[f]
1694 del m2[f]
1696 elif f in ma:
1695 elif f in ma:
1697 if n != ma[f]:
1696 if n != ma[f]:
1698 r = _("d")
1697 r = _("d")
1699 if not force and (linear_path or allow):
1698 if not force and (linear_path or allow):
1700 r = self.ui.prompt(
1699 r = self.ui.prompt(
1701 (_(" local changed %s which remote deleted\n") % f) +
1700 (_(" local changed %s which remote deleted\n") % f) +
1702 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1701 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1703 if r == _("d"):
1702 if r == _("d"):
1704 remove.append(f)
1703 remove.append(f)
1705 else:
1704 else:
1706 self.ui.debug(_("other deleted %s\n") % f)
1705 self.ui.debug(_("other deleted %s\n") % f)
1707 remove.append(f) # other deleted it
1706 remove.append(f) # other deleted it
1708 else:
1707 else:
1709 # file is created on branch or in working directory
1708 # file is created on branch or in working directory
1710 if force and f not in umap:
1709 if force and f not in umap:
1711 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1710 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1712 remove.append(f)
1711 remove.append(f)
1713 elif n == m1.get(f, nullid): # same as parent
1712 elif n == m1.get(f, nullid): # same as parent
1714 if p2 == pa: # going backwards?
1713 if p2 == pa: # going backwards?
1715 self.ui.debug(_("remote deleted %s\n") % f)
1714 self.ui.debug(_("remote deleted %s\n") % f)
1716 remove.append(f)
1715 remove.append(f)
1717 else:
1716 else:
1718 self.ui.debug(_("local modified %s, keeping\n") % f)
1717 self.ui.debug(_("local modified %s, keeping\n") % f)
1719 else:
1718 else:
1720 self.ui.debug(_("working dir created %s, keeping\n") % f)
1719 self.ui.debug(_("working dir created %s, keeping\n") % f)
1721
1720
1722 for f, n in m2.iteritems():
1721 for f, n in m2.iteritems():
1723 if choose and not choose(f):
1722 if choose and not choose(f):
1724 continue
1723 continue
1725 if f[0] == "/":
1724 if f[0] == "/":
1726 continue
1725 continue
1727 if f in ma and n != ma[f]:
1726 if f in ma and n != ma[f]:
1728 r = _("k")
1727 r = _("k")
1729 if not force and (linear_path or allow):
1728 if not force and (linear_path or allow):
1730 r = self.ui.prompt(
1729 r = self.ui.prompt(
1731 (_("remote changed %s which local deleted\n") % f) +
1730 (_("remote changed %s which local deleted\n") % f) +
1732 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1731 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1733 if r == _("k"):
1732 if r == _("k"):
1734 get[f] = n
1733 get[f] = n
1735 elif f not in ma:
1734 elif f not in ma:
1736 self.ui.debug(_("remote created %s\n") % f)
1735 self.ui.debug(_("remote created %s\n") % f)
1737 get[f] = n
1736 get[f] = n
1738 else:
1737 else:
1739 if force or p2 == pa: # going backwards?
1738 if force or p2 == pa: # going backwards?
1740 self.ui.debug(_("local deleted %s, recreating\n") % f)
1739 self.ui.debug(_("local deleted %s, recreating\n") % f)
1741 get[f] = n
1740 get[f] = n
1742 else:
1741 else:
1743 self.ui.debug(_("local deleted %s\n") % f)
1742 self.ui.debug(_("local deleted %s\n") % f)
1744
1743
1745 del mw, m1, m2, ma
1744 del mw, m1, m2, ma
1746
1745
1747 if force:
1746 if force:
1748 for f in merge:
1747 for f in merge:
1749 get[f] = merge[f][1]
1748 get[f] = merge[f][1]
1750 merge = {}
1749 merge = {}
1751
1750
1752 if linear_path or force:
1751 if linear_path or force:
1753 # we don't need to do any magic, just jump to the new rev
1752 # we don't need to do any magic, just jump to the new rev
1754 branch_merge = False
1753 branch_merge = False
1755 p1, p2 = p2, nullid
1754 p1, p2 = p2, nullid
1756 else:
1755 else:
1757 if not allow:
1756 if not allow:
1758 self.ui.status(_("this update spans a branch"
1757 self.ui.status(_("this update spans a branch"
1759 " affecting the following files:\n"))
1758 " affecting the following files:\n"))
1760 fl = merge.keys() + get.keys()
1759 fl = merge.keys() + get.keys()
1761 fl.sort()
1760 fl.sort()
1762 for f in fl:
1761 for f in fl:
1763 cf = ""
1762 cf = ""
1764 if f in merge:
1763 if f in merge:
1765 cf = _(" (resolve)")
1764 cf = _(" (resolve)")
1766 self.ui.status(" %s%s\n" % (f, cf))
1765 self.ui.status(" %s%s\n" % (f, cf))
1767 self.ui.warn(_("aborting update spanning branches!\n"))
1766 self.ui.warn(_("aborting update spanning branches!\n"))
1768 self.ui.status(_("(use 'hg merge' to merge across branches"
1767 self.ui.status(_("(use 'hg merge' to merge across branches"
1769 " or 'hg update -C' to lose changes)\n"))
1768 " or 'hg update -C' to lose changes)\n"))
1770 return 1
1769 return 1
1771 branch_merge = True
1770 branch_merge = True
1772
1771
1773 xp1 = hex(p1)
1772 xp1 = hex(p1)
1774 xp2 = hex(p2)
1773 xp2 = hex(p2)
1775 if p2 == nullid: xxp2 = ''
1774 if p2 == nullid: xxp2 = ''
1776 else: xxp2 = xp2
1775 else: xxp2 = xp2
1777
1776
1778 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1777 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1779
1778
1780 # get the files we don't need to change
1779 # get the files we don't need to change
1781 files = get.keys()
1780 files = get.keys()
1782 files.sort()
1781 files.sort()
1783 for f in files:
1782 for f in files:
1784 if f[0] == "/":
1783 if f[0] == "/":
1785 continue
1784 continue
1786 self.ui.note(_("getting %s\n") % f)
1785 self.ui.note(_("getting %s\n") % f)
1787 t = self.file(f).read(get[f])
1786 t = self.file(f).read(get[f])
1788 self.wwrite(f, t)
1787 self.wwrite(f, t)
1789 util.set_exec(self.wjoin(f), mf2[f])
1788 util.set_exec(self.wjoin(f), mf2[f])
1790 if moddirstate:
1789 if moddirstate:
1791 if branch_merge:
1790 if branch_merge:
1792 self.dirstate.update([f], 'n', st_mtime=-1)
1791 self.dirstate.update([f], 'n', st_mtime=-1)
1793 else:
1792 else:
1794 self.dirstate.update([f], 'n')
1793 self.dirstate.update([f], 'n')
1795
1794
1796 # merge the tricky bits
1795 # merge the tricky bits
1797 failedmerge = []
1796 failedmerge = []
1798 files = merge.keys()
1797 files = merge.keys()
1799 files.sort()
1798 files.sort()
1800 for f in files:
1799 for f in files:
1801 self.ui.status(_("merging %s\n") % f)
1800 self.ui.status(_("merging %s\n") % f)
1802 my, other, flag = merge[f]
1801 my, other, flag = merge[f]
1803 ret = self.merge3(f, my, other, xp1, xp2)
1802 ret = self.merge3(f, my, other, xp1, xp2)
1804 if ret:
1803 if ret:
1805 err = True
1804 err = True
1806 failedmerge.append(f)
1805 failedmerge.append(f)
1807 util.set_exec(self.wjoin(f), flag)
1806 util.set_exec(self.wjoin(f), flag)
1808 if moddirstate:
1807 if moddirstate:
1809 if branch_merge:
1808 if branch_merge:
1810 # We've done a branch merge, mark this file as merged
1809 # We've done a branch merge, mark this file as merged
1811 # so that we properly record the merger later
1810 # so that we properly record the merger later
1812 self.dirstate.update([f], 'm')
1811 self.dirstate.update([f], 'm')
1813 else:
1812 else:
1814 # We've update-merged a locally modified file, so
1813 # We've update-merged a locally modified file, so
1815 # we set the dirstate to emulate a normal checkout
1814 # we set the dirstate to emulate a normal checkout
1816 # of that file some time in the past. Thus our
1815 # of that file some time in the past. Thus our
1817 # merge will appear as a normal local file
1816 # merge will appear as a normal local file
1818 # modification.
1817 # modification.
1819 f_len = len(self.file(f).read(other))
1818 f_len = len(self.file(f).read(other))
1820 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1819 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1821
1820
1822 remove.sort()
1821 remove.sort()
1823 for f in remove:
1822 for f in remove:
1824 self.ui.note(_("removing %s\n") % f)
1823 self.ui.note(_("removing %s\n") % f)
1825 util.audit_path(f)
1824 util.audit_path(f)
1826 try:
1825 try:
1827 util.unlink(self.wjoin(f))
1826 util.unlink(self.wjoin(f))
1828 except OSError, inst:
1827 except OSError, inst:
1829 if inst.errno != errno.ENOENT:
1828 if inst.errno != errno.ENOENT:
1830 self.ui.warn(_("update failed to remove %s: %s!\n") %
1829 self.ui.warn(_("update failed to remove %s: %s!\n") %
1831 (f, inst.strerror))
1830 (f, inst.strerror))
1832 if moddirstate:
1831 if moddirstate:
1833 if branch_merge:
1832 if branch_merge:
1834 self.dirstate.update(remove, 'r')
1833 self.dirstate.update(remove, 'r')
1835 else:
1834 else:
1836 self.dirstate.forget(remove)
1835 self.dirstate.forget(remove)
1837
1836
1838 if moddirstate:
1837 if moddirstate:
1839 self.dirstate.setparents(p1, p2)
1838 self.dirstate.setparents(p1, p2)
1840
1839
1841 if show_stats:
1840 if show_stats:
1842 stats = ((len(get), _("updated")),
1841 stats = ((len(get), _("updated")),
1843 (len(merge) - len(failedmerge), _("merged")),
1842 (len(merge) - len(failedmerge), _("merged")),
1844 (len(remove), _("removed")),
1843 (len(remove), _("removed")),
1845 (len(failedmerge), _("unresolved")))
1844 (len(failedmerge), _("unresolved")))
1846 note = ", ".join([_("%d files %s") % s for s in stats])
1845 note = ", ".join([_("%d files %s") % s for s in stats])
1847 self.ui.status("%s\n" % note)
1846 self.ui.status("%s\n" % note)
1848 if moddirstate:
1847 if moddirstate:
1849 if branch_merge:
1848 if branch_merge:
1850 if failedmerge:
1849 if failedmerge:
1851 self.ui.status(_("There are unresolved merges,"
1850 self.ui.status(_("There are unresolved merges,"
1852 " you can redo the full merge using:\n"
1851 " you can redo the full merge using:\n"
1853 " hg update -C %s\n"
1852 " hg update -C %s\n"
1854 " hg merge %s\n"
1853 " hg merge %s\n"
1855 % (self.changelog.rev(p1),
1854 % (self.changelog.rev(p1),
1856 self.changelog.rev(p2))))
1855 self.changelog.rev(p2))))
1857 else:
1856 else:
1858 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1857 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1859 elif failedmerge:
1858 elif failedmerge:
1860 self.ui.status(_("There are unresolved merges with"
1859 self.ui.status(_("There are unresolved merges with"
1861 " locally modified files.\n"))
1860 " locally modified files.\n"))
1862
1861
1863 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1862 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1864 return err
1863 return err
1865
1864
1866 def merge3(self, fn, my, other, p1, p2):
1865 def merge3(self, fn, my, other, p1, p2):
1867 """perform a 3-way merge in the working directory"""
1866 """perform a 3-way merge in the working directory"""
1868
1867
1869 def temp(prefix, node):
1868 def temp(prefix, node):
1870 pre = "%s~%s." % (os.path.basename(fn), prefix)
1869 pre = "%s~%s." % (os.path.basename(fn), prefix)
1871 (fd, name) = tempfile.mkstemp(prefix=pre)
1870 (fd, name) = tempfile.mkstemp(prefix=pre)
1872 f = os.fdopen(fd, "wb")
1871 f = os.fdopen(fd, "wb")
1873 self.wwrite(fn, fl.read(node), f)
1872 self.wwrite(fn, fl.read(node), f)
1874 f.close()
1873 f.close()
1875 return name
1874 return name
1876
1875
1877 fl = self.file(fn)
1876 fl = self.file(fn)
1878 base = fl.ancestor(my, other)
1877 base = fl.ancestor(my, other)
1879 a = self.wjoin(fn)
1878 a = self.wjoin(fn)
1880 b = temp("base", base)
1879 b = temp("base", base)
1881 c = temp("other", other)
1880 c = temp("other", other)
1882
1881
1883 self.ui.note(_("resolving %s\n") % fn)
1882 self.ui.note(_("resolving %s\n") % fn)
1884 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1883 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1885 (fn, short(my), short(other), short(base)))
1884 (fn, short(my), short(other), short(base)))
1886
1885
1887 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
1886 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
1888 or "hgmerge")
1887 or "hgmerge")
1889 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
1888 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
1890 environ={'HG_FILE': fn,
1889 environ={'HG_FILE': fn,
1891 'HG_MY_NODE': p1,
1890 'HG_MY_NODE': p1,
1892 'HG_OTHER_NODE': p2,
1891 'HG_OTHER_NODE': p2,
1893 'HG_FILE_MY_NODE': hex(my),
1892 'HG_FILE_MY_NODE': hex(my),
1894 'HG_FILE_OTHER_NODE': hex(other),
1893 'HG_FILE_OTHER_NODE': hex(other),
1895 'HG_FILE_BASE_NODE': hex(base)})
1894 'HG_FILE_BASE_NODE': hex(base)})
1896 if r:
1895 if r:
1897 self.ui.warn(_("merging %s failed!\n") % fn)
1896 self.ui.warn(_("merging %s failed!\n") % fn)
1898
1897
1899 os.unlink(b)
1898 os.unlink(b)
1900 os.unlink(c)
1899 os.unlink(c)
1901 return r
1900 return r
1902
1901
1903 def verify(self):
1902 def verify(self):
1904 filelinkrevs = {}
1903 filelinkrevs = {}
1905 filenodes = {}
1904 filenodes = {}
1906 changesets = revisions = files = 0
1905 changesets = revisions = files = 0
1907 errors = [0]
1906 errors = [0]
1908 warnings = [0]
1907 warnings = [0]
1909 neededmanifests = {}
1908 neededmanifests = {}
1910
1909
1911 def err(msg):
1910 def err(msg):
1912 self.ui.warn(msg + "\n")
1911 self.ui.warn(msg + "\n")
1913 errors[0] += 1
1912 errors[0] += 1
1914
1913
1915 def warn(msg):
1914 def warn(msg):
1916 self.ui.warn(msg + "\n")
1915 self.ui.warn(msg + "\n")
1917 warnings[0] += 1
1916 warnings[0] += 1
1918
1917
1919 def checksize(obj, name):
1918 def checksize(obj, name):
1920 d = obj.checksize()
1919 d = obj.checksize()
1921 if d[0]:
1920 if d[0]:
1922 err(_("%s data length off by %d bytes") % (name, d[0]))
1921 err(_("%s data length off by %d bytes") % (name, d[0]))
1923 if d[1]:
1922 if d[1]:
1924 err(_("%s index contains %d extra bytes") % (name, d[1]))
1923 err(_("%s index contains %d extra bytes") % (name, d[1]))
1925
1924
1926 def checkversion(obj, name):
1925 def checkversion(obj, name):
1927 if obj.version != revlog.REVLOGV0:
1926 if obj.version != revlog.REVLOGV0:
1928 if not revlogv1:
1927 if not revlogv1:
1929 warn(_("warning: `%s' uses revlog format 1") % name)
1928 warn(_("warning: `%s' uses revlog format 1") % name)
1930 elif revlogv1:
1929 elif revlogv1:
1931 warn(_("warning: `%s' uses revlog format 0") % name)
1930 warn(_("warning: `%s' uses revlog format 0") % name)
1932
1931
1933 revlogv1 = self.revlogversion != revlog.REVLOGV0
1932 revlogv1 = self.revlogversion != revlog.REVLOGV0
1934 if self.ui.verbose or revlogv1 != self.revlogv1:
1933 if self.ui.verbose or revlogv1 != self.revlogv1:
1935 self.ui.status(_("repository uses revlog format %d\n") %
1934 self.ui.status(_("repository uses revlog format %d\n") %
1936 (revlogv1 and 1 or 0))
1935 (revlogv1 and 1 or 0))
1937
1936
1938 seen = {}
1937 seen = {}
1939 self.ui.status(_("checking changesets\n"))
1938 self.ui.status(_("checking changesets\n"))
1940 checksize(self.changelog, "changelog")
1939 checksize(self.changelog, "changelog")
1941
1940
1942 for i in range(self.changelog.count()):
1941 for i in range(self.changelog.count()):
1943 changesets += 1
1942 changesets += 1
1944 n = self.changelog.node(i)
1943 n = self.changelog.node(i)
1945 l = self.changelog.linkrev(n)
1944 l = self.changelog.linkrev(n)
1946 if l != i:
1945 if l != i:
1947 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
1946 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
1948 if n in seen:
1947 if n in seen:
1949 err(_("duplicate changeset at revision %d") % i)
1948 err(_("duplicate changeset at revision %d") % i)
1950 seen[n] = 1
1949 seen[n] = 1
1951
1950
1952 for p in self.changelog.parents(n):
1951 for p in self.changelog.parents(n):
1953 if p not in self.changelog.nodemap:
1952 if p not in self.changelog.nodemap:
1954 err(_("changeset %s has unknown parent %s") %
1953 err(_("changeset %s has unknown parent %s") %
1955 (short(n), short(p)))
1954 (short(n), short(p)))
1956 try:
1955 try:
1957 changes = self.changelog.read(n)
1956 changes = self.changelog.read(n)
1958 except KeyboardInterrupt:
1957 except KeyboardInterrupt:
1959 self.ui.warn(_("interrupted"))
1958 self.ui.warn(_("interrupted"))
1960 raise
1959 raise
1961 except Exception, inst:
1960 except Exception, inst:
1962 err(_("unpacking changeset %s: %s") % (short(n), inst))
1961 err(_("unpacking changeset %s: %s") % (short(n), inst))
1963 continue
1962 continue
1964
1963
1965 neededmanifests[changes[0]] = n
1964 neededmanifests[changes[0]] = n
1966
1965
1967 for f in changes[3]:
1966 for f in changes[3]:
1968 filelinkrevs.setdefault(f, []).append(i)
1967 filelinkrevs.setdefault(f, []).append(i)
1969
1968
1970 seen = {}
1969 seen = {}
1971 self.ui.status(_("checking manifests\n"))
1970 self.ui.status(_("checking manifests\n"))
1972 checkversion(self.manifest, "manifest")
1971 checkversion(self.manifest, "manifest")
1973 checksize(self.manifest, "manifest")
1972 checksize(self.manifest, "manifest")
1974
1973
1975 for i in range(self.manifest.count()):
1974 for i in range(self.manifest.count()):
1976 n = self.manifest.node(i)
1975 n = self.manifest.node(i)
1977 l = self.manifest.linkrev(n)
1976 l = self.manifest.linkrev(n)
1978
1977
1979 if l < 0 or l >= self.changelog.count():
1978 if l < 0 or l >= self.changelog.count():
1980 err(_("bad manifest link (%d) at revision %d") % (l, i))
1979 err(_("bad manifest link (%d) at revision %d") % (l, i))
1981
1980
1982 if n in neededmanifests:
1981 if n in neededmanifests:
1983 del neededmanifests[n]
1982 del neededmanifests[n]
1984
1983
1985 if n in seen:
1984 if n in seen:
1986 err(_("duplicate manifest at revision %d") % i)
1985 err(_("duplicate manifest at revision %d") % i)
1987
1986
1988 seen[n] = 1
1987 seen[n] = 1
1989
1988
1990 for p in self.manifest.parents(n):
1989 for p in self.manifest.parents(n):
1991 if p not in self.manifest.nodemap:
1990 if p not in self.manifest.nodemap:
1992 err(_("manifest %s has unknown parent %s") %
1991 err(_("manifest %s has unknown parent %s") %
1993 (short(n), short(p)))
1992 (short(n), short(p)))
1994
1993
1995 try:
1994 try:
1996 delta = mdiff.patchtext(self.manifest.delta(n))
1995 delta = mdiff.patchtext(self.manifest.delta(n))
1997 except KeyboardInterrupt:
1996 except KeyboardInterrupt:
1998 self.ui.warn(_("interrupted"))
1997 self.ui.warn(_("interrupted"))
1999 raise
1998 raise
2000 except Exception, inst:
1999 except Exception, inst:
2001 err(_("unpacking manifest %s: %s") % (short(n), inst))
2000 err(_("unpacking manifest %s: %s") % (short(n), inst))
2002 continue
2001 continue
2003
2002
2004 try:
2003 try:
2005 ff = [ l.split('\0') for l in delta.splitlines() ]
2004 ff = [ l.split('\0') for l in delta.splitlines() ]
2006 for f, fn in ff:
2005 for f, fn in ff:
2007 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2006 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2008 except (ValueError, TypeError), inst:
2007 except (ValueError, TypeError), inst:
2009 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2008 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2010
2009
2011 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2010 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2012
2011
2013 for m, c in neededmanifests.items():
2012 for m, c in neededmanifests.items():
2014 err(_("Changeset %s refers to unknown manifest %s") %
2013 err(_("Changeset %s refers to unknown manifest %s") %
2015 (short(m), short(c)))
2014 (short(m), short(c)))
2016 del neededmanifests
2015 del neededmanifests
2017
2016
2018 for f in filenodes:
2017 for f in filenodes:
2019 if f not in filelinkrevs:
2018 if f not in filelinkrevs:
2020 err(_("file %s in manifest but not in changesets") % f)
2019 err(_("file %s in manifest but not in changesets") % f)
2021
2020
2022 for f in filelinkrevs:
2021 for f in filelinkrevs:
2023 if f not in filenodes:
2022 if f not in filenodes:
2024 err(_("file %s in changeset but not in manifest") % f)
2023 err(_("file %s in changeset but not in manifest") % f)
2025
2024
2026 self.ui.status(_("checking files\n"))
2025 self.ui.status(_("checking files\n"))
2027 ff = filenodes.keys()
2026 ff = filenodes.keys()
2028 ff.sort()
2027 ff.sort()
2029 for f in ff:
2028 for f in ff:
2030 if f == "/dev/null":
2029 if f == "/dev/null":
2031 continue
2030 continue
2032 files += 1
2031 files += 1
2033 if not f:
2032 if not f:
2034 err(_("file without name in manifest %s") % short(n))
2033 err(_("file without name in manifest %s") % short(n))
2035 continue
2034 continue
2036 fl = self.file(f)
2035 fl = self.file(f)
2037 checkversion(fl, f)
2036 checkversion(fl, f)
2038 checksize(fl, f)
2037 checksize(fl, f)
2039
2038
2040 nodes = {nullid: 1}
2039 nodes = {nullid: 1}
2041 seen = {}
2040 seen = {}
2042 for i in range(fl.count()):
2041 for i in range(fl.count()):
2043 revisions += 1
2042 revisions += 1
2044 n = fl.node(i)
2043 n = fl.node(i)
2045
2044
2046 if n in seen:
2045 if n in seen:
2047 err(_("%s: duplicate revision %d") % (f, i))
2046 err(_("%s: duplicate revision %d") % (f, i))
2048 if n not in filenodes[f]:
2047 if n not in filenodes[f]:
2049 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2048 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2050 else:
2049 else:
2051 del filenodes[f][n]
2050 del filenodes[f][n]
2052
2051
2053 flr = fl.linkrev(n)
2052 flr = fl.linkrev(n)
2054 if flr not in filelinkrevs.get(f, []):
2053 if flr not in filelinkrevs.get(f, []):
2055 err(_("%s:%s points to unexpected changeset %d")
2054 err(_("%s:%s points to unexpected changeset %d")
2056 % (f, short(n), flr))
2055 % (f, short(n), flr))
2057 else:
2056 else:
2058 filelinkrevs[f].remove(flr)
2057 filelinkrevs[f].remove(flr)
2059
2058
2060 # verify contents
2059 # verify contents
2061 try:
2060 try:
2062 t = fl.read(n)
2061 t = fl.read(n)
2063 except KeyboardInterrupt:
2062 except KeyboardInterrupt:
2064 self.ui.warn(_("interrupted"))
2063 self.ui.warn(_("interrupted"))
2065 raise
2064 raise
2066 except Exception, inst:
2065 except Exception, inst:
2067 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2066 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2068
2067
2069 # verify parents
2068 # verify parents
2070 (p1, p2) = fl.parents(n)
2069 (p1, p2) = fl.parents(n)
2071 if p1 not in nodes:
2070 if p1 not in nodes:
2072 err(_("file %s:%s unknown parent 1 %s") %
2071 err(_("file %s:%s unknown parent 1 %s") %
2073 (f, short(n), short(p1)))
2072 (f, short(n), short(p1)))
                if p2 not in nodes:
                    err(_("file %s:%s unknown parent 2 %s") %
                        (f, short(n), short(p2)))
                nodes[n] = 1

            # cross-check
            for node in filenodes[f]:
                err(_("node %s in manifests not in %s") % (hex(node), f))

        self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
                       (files, changesets, revisions))

        if warnings[0]:
            self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
        if errors[0]:
            self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
            return 1

# used to avoid circular references so destructors work
def aftertrans(base):
    p = base
    def a():
        util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
        util.rename(os.path.join(p, "journal.dirstate"),
                    os.path.join(p, "undo.dirstate"))
    return a