add a timeout when a lock is held (default 1024 sec)...
Benoit Boissinot
r1787:e431344e default
@@ -1,310 +1,313
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.
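The syntax described above is that of Python's ConfigParser (the
configparser module in Python 3), which Mercurial uses to read these
files. A minimal sketch of how the "[spam]" sample from this section
parses:

```python
from configparser import ConfigParser

# the sample section from SYNTAX above
text = """\
[spam]
eggs=ham
green=
    eggs
"""

cp = ConfigParser()
cp.read_string(text)

print(cp.get("spam", "eggs"))  # ham
# the indented line is folded into the preceding entry's value,
# with leading whitespace removed from each continuation line
print(cp.get("spam", "green").strip())  # eggs
```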

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE
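The "pipe:" contract above (data in on stdin, transformed data out on
stdout) can be sketched with a hypothetical helper; this illustrates
the semantics, not Mercurial's implementation:

```python
import subprocess

def apply_pipe_filter(cmd, data):
    """Feed file data to a shell command on stdin and return its
    stdout, which is what a "pipe:" filter command must produce."""
    proc = subprocess.run(cmd, shell=True, input=data,
                          stdout=subprocess.PIPE, check=True)
    return proc.stdout

# the [decode] rule "*.gz = gzip" then amounts to roughly:
#   restored = apply_pipe_filter("gzip", checked_in_data)
```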

hooks::
  Commands that get automatically executed by various actions such as
  starting or finishing a commit. Multiple commands can be run for
  the same action by appending a suffix to the action. Overriding a
  site-wide hook can be done by changing its value or setting it to
  an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give added
  useful information. For each hook below, the environment variables
  it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull, push
    (outbound) and bundle commands, but is not effective there, since
    you can simply copy the files instead. Source of operation is in
    $HG_SOURCE. If "serve", the operation is happening on behalf of a
    remote ssh or http repository. If "push", "pull" or "bundle", the
    operation is happening on behalf of a repository on the same
    system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail.
  pretxncommit;;
    Run after a changeset has been created but the transaction not yet
    committed. Changeset is visible to hook program. This lets you
    validate commit message and changes. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the transaction to
    be rolled back. ID of changeset is in $HG_NODE. Parent changeset
    IDs are in $HG_PARENT1 and $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.

  In earlier releases, the names of hook environment variables did not
  have a "HG_" prefix. These unprefixed names are still provided in
  the environment for backwards compatibility, but their use is
  deprecated, and they will be removed in a future release.

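A hook from the list above is simply a command; written in Python, a
minimal hook might look like the sketch below. Only the $HG_* variables
come from the documentation above; the script itself is illustrative.

```python
#!/usr/bin/env python
import os
import sys

def hg_hook(environ):
    """Report the changeset handed to the hook and return an exit
    status; for pre* hooks, a non-zero status aborts the operation."""
    node = environ.get("HG_NODE", "")
    sys.stderr.write("hook invoked for changeset %s\n" % node)
    return 0

if __name__ == "__main__":
    sys.exit(hg_hook(os.environ))
```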
http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository.

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  interactive;;
    Allow prompting the user. True or False. Default is True.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    remote command to use for clone/push/pull operations. Default is 'hg'.
  ssh;;
    command to use for SSH connections. Default is 'ssh'.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 1024.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
  verbose;;
    Increase the amount of output printed. True or False. Default is False.


web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allowbz2;;
    Whether to allow .tar.bz2 downloading of repo revisions. Default is false.
  allowgz;;
    Whether to allow .tar.gz downloading of repo revisions. Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allowzip;;
    Whether to allow .zip downloading of repo revisions. Default is false.
    This feature creates temporary files.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.


AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).
@@ -1,1861 +1,1868
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import struct, os, util
import filelog, manifest, changelog, dirstate, repo
from node import *
from i18n import gettext as _
from demandload import *
demandload(globals(), "re lock transaction tempfile stat mdiff errno")

class localrepository(object):
    def __init__(self, ui, path=None, create=0):
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp:
                    raise repo.RepoError(_("no repo found"))
            path = p
        self.path = os.path.join(path, ".hg")

        if not create and not os.path.isdir(self.path):
            raise repo.RepoError(_("repository %s not found") % path)

        self.root = os.path.abspath(path)
        self.ui = ui
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)
        self.manifest = manifest.manifest(self.opener)
        self.changelog = changelog.changelog(self.opener)
        self.tagscache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None

        if create:
            os.mkdir(self.path)
            os.mkdir(self.join("data"))

        self.dirstate = dirstate.dirstate(self.opener, ui, self.root)
        try:
            self.ui.readconfig(self.join("hgrc"))
        except IOError:
            pass

    def hook(self, name, throw=False, **args):
        def runhook(name, cmd):
            self.ui.note(_("running hook %s: %s\n") % (name, cmd))
            old = {}
            for k, v in args.items():
                k = k.upper()
                old['HG_' + k] = os.environ.get(k, None)
                old[k] = os.environ.get(k, None)
                os.environ['HG_' + k] = str(v)
                os.environ[k] = str(v)

            try:
                # Hooks run in the repository root
                olddir = os.getcwd()
                os.chdir(self.root)
                r = os.system(cmd)
            finally:
                for k, v in old.items():
                    if v is not None:
                        os.environ[k] = v
                    else:
                        del os.environ[k]

                os.chdir(olddir)

            if r:
                desc, r = util.explain_exit(r)
                if throw:
                    raise util.Abort(_('%s hook %s') % (name, desc))
                self.ui.warn(_('error: %s hook %s\n') % (name, desc))
                return False
            return True

        r = True
        for hname, cmd in self.ui.configitems("hooks"):
            s = hname.split(".")
            if s[0] == name and cmd:
                r = runhook(hname, cmd) and r
        return r
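The save/set/restore dance that runhook performs on os.environ is a
reusable pattern; as a standalone sketch (a hypothetical helper, not
part of localrepo.py):

```python
import os
from contextlib import contextmanager

@contextmanager
def scoped_environ(updates):
    """Temporarily set environment variables, restoring the previous
    values (or removing the keys) even if the body raises."""
    old = {k: os.environ.get(k) for k in updates}
    os.environ.update({k: str(v) for k, v in updates.items()})
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v
```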
90
90
91 def tags(self):
91 def tags(self):
92 '''return a mapping of tag to node'''
92 '''return a mapping of tag to node'''
93 if not self.tagscache:
93 if not self.tagscache:
94 self.tagscache = {}
94 self.tagscache = {}
95 def addtag(self, k, n):
95 def addtag(self, k, n):
96 try:
96 try:
97 bin_n = bin(n)
97 bin_n = bin(n)
98 except TypeError:
98 except TypeError:
99 bin_n = ''
99 bin_n = ''
100 self.tagscache[k.strip()] = bin_n
100 self.tagscache[k.strip()] = bin_n
101
101
102 try:
102 try:
103 # read each head of the tags file, ending with the tip
103 # read each head of the tags file, ending with the tip
104 # and add each tag found to the map, with "newer" ones
104 # and add each tag found to the map, with "newer" ones
105 # taking precedence
105 # taking precedence
106 fl = self.file(".hgtags")
106 fl = self.file(".hgtags")
107 h = fl.heads()
107 h = fl.heads()
108 h.reverse()
108 h.reverse()
109 for r in h:
109 for r in h:
110 for l in fl.read(r).splitlines():
110 for l in fl.read(r).splitlines():
111 if l:
111 if l:
112 n, k = l.split(" ", 1)
112 n, k = l.split(" ", 1)
113 addtag(self, k, n)
113 addtag(self, k, n)
114 except KeyError:
114 except KeyError:
115 pass
115 pass
116
116
117 try:
117 try:
118 f = self.opener("localtags")
118 f = self.opener("localtags")
119 for l in f:
119 for l in f:
120 n, k = l.split(" ", 1)
120 n, k = l.split(" ", 1)
121 addtag(self, k, n)
121 addtag(self, k, n)
122 except IOError:
122 except IOError:
123 pass
123 pass
124
124
125 self.tagscache['tip'] = self.changelog.tip()
125 self.tagscache['tip'] = self.changelog.tip()
126
126
127 return self.tagscache
127 return self.tagscache

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def lookup(self, key):
        try:
            return self.tags()[key]
        except KeyError:
            try:
                return self.changelog.lookup(key)
            except:
                raise repo.RepoError(_("unknown revision '%s'") % key)

    def dev(self):
        return os.stat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.opener, f)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        # save dirstate for undo
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        def after():
            util.rename(self.join("journal"), self.join("undo"))
            util.rename(self.join("journal.dirstate"),
                        self.join("undo.dirstate"))

        return transaction.transaction(self.ui.warn, self.opener,
                                       self.join("journal"), after)

    def recover(self):
        l = self.lock()
        if os.path.exists(self.join("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.opener, self.join("journal"))
            self.manifest = manifest.manifest(self.opener)
            self.changelog = changelog.changelog(self.opener)
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def undo(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.join("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.opener, self.join("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.dirstate.read()
        else:
            self.ui.warn(_("no undo information available\n"))

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None):
        # first try a non-blocking acquire; if the lock is held and wait is
        # true, retry with a timeout taken from the [ui] timeout setting
        # (default 1024 seconds) and abort when it expires
        try:
            l = lock.lock(self.join(lockname), 0, releasefn)
        except lock.LockHeld, inst:
            if not wait:
                raise inst
            self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
            try:
                # default to 1024 seconds timeout
                l = lock.lock(self.join(lockname),
                              int(self.ui.config("ui", "timeout") or 1024),
                              releasefn)
            except lock.LockHeld, inst:
                raise util.Abort(_("timeout while waiting for "
                                   "lock held by %s") % inst.args[0])
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock("lock", wait)

    def wlock(self, wait=1):
        return self.do_lock("wlock", wait,
                            self.dirstate.write,
                            self.dirstate.read)

    def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
        "determine whether a new filenode is needed"
        fp1 = manifest1.get(filename, nullid)
        fp2 = manifest2.get(filename, nullid)

        if fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = filelog.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and text == filelog.read(fp1):
            return (fp1, None, None)

        return (None, fp1, fp2)

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        orig_parent = self.dirstate.parents()[0] or nullid
        p1 = p1 or self.dirstate.parents()[0] or nullid
        p2 = p2 or self.dirstate.parents()[1] or nullid
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])
        changed = []

        if orig_parent == p1:
            update_dirstate = 1
        else:
            update_dirstate = 0

        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        tr = self.transaction()
        mm = m1.copy()
        mfm = mf1.copy()
        linkrev = self.changelog.count()
        for f in files:
            try:
                t = self.wread(f)
                tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
                r = self.file(f)
                mfm[f] = tm

                (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    mm[f] = entry
                    continue

                mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
                changed.append(f)
                if update_dirstate:
                    self.dirstate.update([f], "n")
            except IOError:
                try:
                    del mm[f]
                    del mfm[f]
                    if update_dirstate:
                        self.dirstate.forget([f])
                except:
                    # deleted from p2?
                    pass

        mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
        user = user or self.ui.username()
        n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
        tr.close()
        if update_dirstate:
            self.dirstate.setparents(n, nullid)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, wlock=None):
        commit = []
        remove = []
        changed = []

        if files:
            for f in files:
                s = self.dirstate.state(f)
                if s in 'nmai':
                    commit.append(f)
                elif s == 'r':
                    remove.append(f)
                else:
                    self.ui.warn(_("%s not tracked!\n") % f)
        else:
            modified, added, removed, deleted, unknown = self.changes(match=match)
            commit = modified + added
            remove = removed

        p1, p2 = self.dirstate.parents()
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])

        if not commit and not remove and not force and p2 == nullid:
            self.ui.status(_("nothing changed\n"))
            return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
                t = self.wread(f)
            except IOError:
                self.ui.warn(_("trouble committing %s!\n") % f)
                raise

            r = self.file(f)

            meta = {}
            cp = self.dirstate.copied(f)
            if cp:
                meta["copy"] = cp
                meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
                self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
                fp1, fp2 = nullid, nullid
            else:
                entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
                if entry:
                    new[f] = entry
                    continue

            new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
            # remember what we've added so that we can later calculate
            # the files to pull from a set of changesets
            changed.append(f)

        # update manifest
        m1 = m1.copy()
        m1.update(new)
        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
                               (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        if not text:
            edittext = [""]
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            edittext = self.ui.edit("\n".join(edittext))
            os.chdir(olddir)
            if not edittext.rstrip():
                return None
            text = edittext

        user = user or self.ui.username()
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        self.dirstate.setparents(n)
        self.dirstate.update(new, "n")
        self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

    def walk(self, node=None, files=[], match=util.always):
        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                fdict.pop(fn, None)
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                self.ui.warn(_('%s: No such file in rev %s\n') % (
                    util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match):
                yield src, fn

    def changes(self, node1=None, node2=None, files=[], match=util.always,
                wlock=None):
        """return changes between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            t2 = self.file(fn).read(mf.get(fn, nullid))
            return cmp(t1, t2)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = dict(self.manifest.read(change[0]))
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            lookup, modified, added, removed, deleted, unknown = (
                self.dirstate.changes(files, match))

            # are we comparing working dir against its parent?
            if not node1:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        elif wlock is not None:
                            self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            deleted, unknown = [], []
            mf2 = mfmatches(node2)

        if node1:
            # flush lists from dirstate before comparing manifests
            modified, added = [], []

            mf1 = mfmatches(node1)

            for fn in mf2:
                if mf1.has_key(fn):
                    if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
                        modified.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown:
            l.sort()
        return (modified, added, removed, deleted, unknown)

561 def add(self, list, wlock=None):
568 def add(self, list, wlock=None):
562 if not wlock:
569 if not wlock:
563 wlock = self.wlock()
570 wlock = self.wlock()
564 for f in list:
571 for f in list:
565 p = self.wjoin(f)
572 p = self.wjoin(f)
566 if not os.path.exists(p):
573 if not os.path.exists(p):
567 self.ui.warn(_("%s does not exist!\n") % f)
574 self.ui.warn(_("%s does not exist!\n") % f)
568 elif not os.path.isfile(p):
575 elif not os.path.isfile(p):
569 self.ui.warn(_("%s not added: only files supported currently\n")
576 self.ui.warn(_("%s not added: only files supported currently\n")
570 % f)
577 % f)
571 elif self.dirstate.state(f) in 'an':
578 elif self.dirstate.state(f) in 'an':
572 self.ui.warn(_("%s already tracked!\n") % f)
579 self.ui.warn(_("%s already tracked!\n") % f)
573 else:
580 else:
574 self.dirstate.update([f], "a")
581 self.dirstate.update([f], "a")
575
582
576 def forget(self, list, wlock=None):
583 def forget(self, list, wlock=None):
577 if not wlock:
584 if not wlock:
578 wlock = self.wlock()
585 wlock = self.wlock()
579 for f in list:
586 for f in list:
580 if self.dirstate.state(f) not in 'ai':
587 if self.dirstate.state(f) not in 'ai':
581 self.ui.warn(_("%s not added!\n") % f)
588 self.ui.warn(_("%s not added!\n") % f)
582 else:
589 else:
583 self.dirstate.forget([f])
590 self.dirstate.forget([f])
584
591
585 def remove(self, list, unlink=False, wlock=None):
592 def remove(self, list, unlink=False, wlock=None):
586 if unlink:
593 if unlink:
587 for f in list:
594 for f in list:
588 try:
595 try:
589 util.unlink(self.wjoin(f))
596 util.unlink(self.wjoin(f))
590 except OSError, inst:
597 except OSError, inst:
591 if inst.errno != errno.ENOENT:
598 if inst.errno != errno.ENOENT:
592 raise
599 raise
593 if not wlock:
600 if not wlock:
594 wlock = self.wlock()
601 wlock = self.wlock()
595 for f in list:
602 for f in list:
596 p = self.wjoin(f)
603 p = self.wjoin(f)
597 if os.path.exists(p):
604 if os.path.exists(p):
598 self.ui.warn(_("%s still exists!\n") % f)
605 self.ui.warn(_("%s still exists!\n") % f)
599 elif self.dirstate.state(f) == 'a':
606 elif self.dirstate.state(f) == 'a':
600 self.dirstate.forget([f])
607 self.dirstate.forget([f])
601 elif f not in self.dirstate:
608 elif f not in self.dirstate:
602 self.ui.warn(_("%s not tracked!\n") % f)
609 self.ui.warn(_("%s not tracked!\n") % f)
603 else:
610 else:
604 self.dirstate.update([f], "r")
611 self.dirstate.update([f], "r")
605
612
606 def undelete(self, list, wlock=None):
613 def undelete(self, list, wlock=None):
607 p = self.dirstate.parents()[0]
614 p = self.dirstate.parents()[0]
608 mn = self.changelog.read(p)[0]
615 mn = self.changelog.read(p)[0]
609 mf = self.manifest.readflags(mn)
616 mf = self.manifest.readflags(mn)
610 m = self.manifest.read(mn)
617 m = self.manifest.read(mn)
611 if not wlock:
618 if not wlock:
612 wlock = self.wlock()
619 wlock = self.wlock()
613 for f in list:
620 for f in list:
614 if self.dirstate.state(f) not in "r":
621 if self.dirstate.state(f) not in "r":
615 self.ui.warn("%s not removed!\n" % f)
622 self.ui.warn("%s not removed!\n" % f)
616 else:
623 else:
617 t = self.file(f).read(m[f])
624 t = self.file(f).read(m[f])
618 self.wwrite(f, t)
625 self.wwrite(f, t)
619 util.set_exec(self.wjoin(f), mf[f])
626 util.set_exec(self.wjoin(f), mf[f])
620 self.dirstate.update([f], "n")
627 self.dirstate.update([f], "n")
621
628
622 def copy(self, source, dest, wlock=None):
629 def copy(self, source, dest, wlock=None):
623 p = self.wjoin(dest)
630 p = self.wjoin(dest)
624 if not os.path.exists(p):
631 if not os.path.exists(p):
625 self.ui.warn(_("%s does not exist!\n") % dest)
632 self.ui.warn(_("%s does not exist!\n") % dest)
626 elif not os.path.isfile(p):
633 elif not os.path.isfile(p):
627 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
634 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
628 else:
635 else:
629 if not wlock:
636 if not wlock:
630 wlock = self.wlock()
637 wlock = self.wlock()
631 if self.dirstate.state(dest) == '?':
638 if self.dirstate.state(dest) == '?':
632 self.dirstate.update([dest], "a")
639 self.dirstate.update([dest], "a")
633 self.dirstate.copy(source, dest)
640 self.dirstate.copy(source, dest)
634
641
635 def heads(self, start=None):
642 def heads(self, start=None):
636 heads = self.changelog.heads(start)
643 heads = self.changelog.heads(start)
637 # sort the output in rev descending order
644 # sort the output in rev descending order
638 heads = [(-self.changelog.rev(h), h) for h in heads]
645 heads = [(-self.changelog.rev(h), h) for h in heads]
639 heads.sort()
646 heads.sort()
640 return [n for (r, n) in heads]
647 return [n for (r, n) in heads]
641
648
642 # branchlookup returns a dict giving a list of branches for
649 # branchlookup returns a dict giving a list of branches for
643 # each head. A branch is defined as the tag of a node or
650 # each head. A branch is defined as the tag of a node or
644 # the branch of the node's parents. If a node has multiple
651 # the branch of the node's parents. If a node has multiple
645 # branch tags, tags are eliminated if they are visible from other
652 # branch tags, tags are eliminated if they are visible from other
646 # branch tags.
653 # branch tags.
647 #
654 #
648 # So, for this graph: a->b->c->d->e
655 # So, for this graph: a->b->c->d->e
649 # \ /
656 # \ /
650 # aa -----/
657 # aa -----/
651 # a has tag 2.6.12
658 # a has tag 2.6.12
652 # d has tag 2.6.13
659 # d has tag 2.6.13
653 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
660 # e would have branch tags for 2.6.12 and 2.6.13. Because the node
654 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
661 # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
655 # from the list.
662 # from the list.
656 #
663 #
657 # It is possible that more than one head will have the same branch tag.
664 # It is possible that more than one head will have the same branch tag.
658 # callers need to check the result for multiple heads under the same
665 # callers need to check the result for multiple heads under the same
659 # branch tag if that is a problem for them (ie checkout of a specific
666 # branch tag if that is a problem for them (ie checkout of a specific
660 # branch).
667 # branch).
661 #
668 #
662 # passing in a specific branch will limit the depth of the search
669 # passing in a specific branch will limit the depth of the search
663 # through the parents. It won't limit the branches returned in the
670 # through the parents. It won't limit the branches returned in the
664 # result though.
671 # result though.
665 def branchlookup(self, heads=None, branch=None):
672 def branchlookup(self, heads=None, branch=None):
666 if not heads:
673 if not heads:
667 heads = self.heads()
674 heads = self.heads()
668 headt = [ h for h in heads ]
675 headt = [ h for h in heads ]
669 chlog = self.changelog
676 chlog = self.changelog
670 branches = {}
677 branches = {}
671 merges = []
678 merges = []
672 seenmerge = {}
679 seenmerge = {}
673
680
674 # traverse the tree once for each head, recording in the branches
681 # traverse the tree once for each head, recording in the branches
675 # dict which tags are visible from this head. The branches
682 # dict which tags are visible from this head. The branches
676 # dict also records which tags are visible from each tag
683 # dict also records which tags are visible from each tag
677 # while we traverse.
684 # while we traverse.
678 while headt or merges:
685 while headt or merges:
679 if merges:
686 if merges:
680 n, found = merges.pop()
687 n, found = merges.pop()
681 visit = [n]
688 visit = [n]
682 else:
689 else:
683 h = headt.pop()
690 h = headt.pop()
684 visit = [h]
691 visit = [h]
685 found = [h]
692 found = [h]
686 seen = {}
693 seen = {}
687 while visit:
694 while visit:
688 n = visit.pop()
695 n = visit.pop()
689 if n in seen:
696 if n in seen:
690 continue
697 continue
691 pp = chlog.parents(n)
698 pp = chlog.parents(n)
692 tags = self.nodetags(n)
699 tags = self.nodetags(n)
693 if tags:
700 if tags:
694 for x in tags:
701 for x in tags:
695 if x == 'tip':
702 if x == 'tip':
696 continue
703 continue
697 for f in found:
704 for f in found:
698 branches.setdefault(f, {})[n] = 1
705 branches.setdefault(f, {})[n] = 1
699 branches.setdefault(n, {})[n] = 1
706 branches.setdefault(n, {})[n] = 1
700 break
707 break
701 if n not in found:
708 if n not in found:
702 found.append(n)
709 found.append(n)
703 if branch in tags:
710 if branch in tags:
704 continue
711 continue
705 seen[n] = 1
712 seen[n] = 1
706 if pp[1] != nullid and n not in seenmerge:
713 if pp[1] != nullid and n not in seenmerge:
707 merges.append((pp[1], [x for x in found]))
714 merges.append((pp[1], [x for x in found]))
708 seenmerge[n] = 1
715 seenmerge[n] = 1
709 if pp[0] != nullid:
716 if pp[0] != nullid:
710 visit.append(pp[0])
717 visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while n:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def findincoming(self, remote, base=None, heads=None):
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        if not heads:
            heads = remote.heads()

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return None

        rep = {}
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid:
                    break
                if n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                if n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                            base[n[2]] = 1 # latest known
                            continue

                    for a in n[2:4]:
                        if a not in rep:
                            r.append(a)
                            rep[a] = 1

                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        if b[0] in m:
                            self.ui.debug(_("found base node %s\n")
                                          % short(b[0]))
                            base[b[0]] = 1
                        elif b[0] not in seen:
                            unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f))

        if base.keys() == [nullid]:
            self.ui.warn(_("warning: pulling from an unrelated repository!\n"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None):
        if base is None:
            base = {}
            self.findincoming(remote, base, heads)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)

        # this is the set of all roots we have to push
        return subset

    def pull(self, remote, heads=None):
        l = self.lock()

        # if we have an empty repo, fetch everything
        if self.changelog.tip() == nullid:
            self.ui.status(_("requesting all changes\n"))
            fetch = [nullid]
        else:
            fetch = self.findincoming(remote)

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 1

        if heads is None:
            cg = remote.changegroup(fetch, 'pull')
        else:
            cg = remote.changegroupsubset(fetch, heads, 'pull')
        return self.addchangegroup(cg)

    def push(self, remote, force=False, revs=None):
        lock = remote.lock()

        base = {}
        heads = remote.heads()
        inc = self.findincoming(remote, base, heads)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync? use push -f to force)\n"))
            return 1

        update = self.findoutgoing(remote, base)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return 1
        elif not force:
            if len(bases) < len(heads):
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return 1

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return remote.addchangegroup(cg)

    def changegroupsubset(self, bases, heads, source):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to."""

        self.hook('preoutgoing', throw=True, source=source)

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)
1077 # This is a function generating function used to set up an environment
1084 # This is a function generating function used to set up an environment
1078 # for the inner function to execute in.
1085 # for the inner function to execute in.
1079 def manifest_and_file_collector(changedfileset):
1086 def manifest_and_file_collector(changedfileset):
1080 # This is an information gathering function that gathers
1087 # This is an information gathering function that gathers
1081 # information from each changeset node that goes out as part of
1088 # information from each changeset node that goes out as part of
1082 # the changegroup. The information gathered is a list of which
1089 # the changegroup. The information gathered is a list of which
1083 # manifest nodes are potentially required (the recipient may
1090 # manifest nodes are potentially required (the recipient may
1084 # already have them) and total list of all files which were
1091 # already have them) and total list of all files which were
1085 # changed in any changeset in the changegroup.
1092 # changed in any changeset in the changegroup.
1086 #
1093 #
1087 # We also remember the first changenode we saw any manifest
1094 # We also remember the first changenode we saw any manifest
1088 # referenced by so we can later determine which changenode 'owns'
1095 # referenced by so we can later determine which changenode 'owns'
1089 # the manifest.
1096 # the manifest.
1090 def collect_manifests_and_files(clnode):
1097 def collect_manifests_and_files(clnode):
1091 c = cl.read(clnode)
1098 c = cl.read(clnode)
1092 for f in c[3]:
1099 for f in c[3]:
1093 # This is to make sure we only have one instance of each
1100 # This is to make sure we only have one instance of each
1094 # filename string for each filename.
1101 # filename string for each filename.
1095 changedfileset.setdefault(f, f)
1102 changedfileset.setdefault(f, f)
1096 msng_mnfst_set.setdefault(c[0], clnode)
1103 msng_mnfst_set.setdefault(c[0], clnode)
1097 return collect_manifests_and_files
1104 return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file, let's remove
        # all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generating function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if msng_filenode_set.has_key(fname):
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield struct.pack(">l", len(fname) + 4) + fname
                    # Sort the filenodes by their revision #
1244 # Sort the filenodes by their revision #
1238 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1245 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1239 # Create a group generator and only pass in a changenode
1246 # Create a group generator and only pass in a changenode
1240 # lookup function as we need to collect no information
1247 # lookup function as we need to collect no information
1241 # from filenodes.
1248 # from filenodes.
1242 group = filerevlog.group(msng_filenode_lst,
1249 group = filerevlog.group(msng_filenode_lst,
1243 lookup_filenode_link_func(fname))
1250 lookup_filenode_link_func(fname))
1244 for chnk in group:
1251 for chnk in group:
1245 yield chnk
1252 yield chnk
1246 if msng_filenode_set.has_key(fname):
1253 if msng_filenode_set.has_key(fname):
1247 # Don't need this anymore, toss it to free memory.
1254 # Don't need this anymore, toss it to free memory.
1248 del msng_filenode_set[fname]
1255 del msng_filenode_set[fname]
1249 # Signal that no more groups are left.
1256 # Signal that no more groups are left.
1250 yield struct.pack(">l", 0)
1257 yield struct.pack(">l", 0)
1251
1258
1252 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1259 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1253
1260
1254 return util.chunkbuffer(gengroup())
1261 return util.chunkbuffer(gengroup())

    def changegroup(self, basenodes, source):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])

        def identity(x):
            return x

        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield struct.pack(">l", len(fname) + 4) + fname
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield struct.pack(">l", 0)
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source):

        def getchunk():
            d = source.read(4)
            if not d:
                return ""
            l = struct.unpack(">l", d)[0]
            if l <= 4:
                return ""
            d = source.read(l - 4)
            if len(d) < l - 4:
                raise repo.RepoError(_("premature EOF reading chunk"
                                       " (got %d bytes, expected %d)")
                                     % (len(d), l - 4))
            return d

        def getgroup():
            while 1:
                c = getchunk()
                if not c:
                    break
                yield c

        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return self.changelog.count()

        def revmap(x):
            return self.changelog.rev(x)

        if not source:
            return

        self.hook('prechangegroup', throw=True)

        changesets = files = revisions = 0

        tr = self.transaction()

        oldheads = len(self.changelog.heads())

        # pull off the changeset group
        self.ui.status(_("adding changesets\n"))
        co = self.changelog.tip()
        cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
        cnr, cor = map(self.changelog.rev, (cn, co))
        if cn == nullid:
            cnr = cor
        changesets = cnr - cor

        # pull off the manifest group
        self.ui.status(_("adding manifests\n"))
        mm = self.manifest.tip()
        mo = self.manifest.addgroup(getgroup(), revmap, tr)

        # process the files
        self.ui.status(_("adding file changes\n"))
        while 1:
            f = getchunk()
            if not f:
                break
            self.ui.debug(_("adding %s revisions\n") % f)
            fl = self.file(f)
            o = fl.count()
            n = fl.addgroup(getgroup(), revmap, tr)
            revisions += fl.count() - o
            files += 1

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads > oldheads:
            heads = _(" (+%d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        self.hook('pretxnchangegroup', throw=True,
                  node=hex(self.changelog.node(cor+1)))

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)))

            for i in range(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)))

    def update(self, node, allow=False, force=False, choose=None,
               moddirstate=True, forcemerge=False, wlock=None):
        pl = self.dirstate.parents()
        if not force and pl[1] != nullid:
            self.ui.warn(_("aborting: outstanding uncommitted merges\n"))
            return 1

        err = False

        p1, p2 = pl[0], node
        pa = self.changelog.ancestor(p1, p2)
        m1n = self.changelog.read(p1)[0]
        m2n = self.changelog.read(p2)[0]
        man = self.manifest.ancestor(m1n, m2n)
        m1 = self.manifest.read(m1n)
        mf1 = self.manifest.readflags(m1n)
        m2 = self.manifest.read(m2n).copy()
        mf2 = self.manifest.readflags(m2n)
        ma = self.manifest.read(man)
        mfa = self.manifest.readflags(man)

        modified, added, removed, deleted, unknown = self.changes()

        # is this a jump, or a merge?  i.e. is there a linear path
        # from p1 to p2?
        linear_path = (pa == p1 or pa == p2)

        if allow and linear_path:
            raise util.Abort(_("there is nothing to merge, "
                               "just use 'hg update'"))
        if allow and not forcemerge:
            if modified or added or removed:
                raise util.Abort(_("outstanding uncommitted changes"))
        if not forcemerge and not force:
            for f in unknown:
                if f in m2:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) != 0:
                        raise util.Abort(_("'%s' already exists in the working"
                                           " dir and differs from remote") % f)

        # resolve the manifest to determine which files
        # we care about merging
        self.ui.note(_("resolving manifests\n"))
        self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
                      (force, allow, moddirstate, linear_path))
        self.ui.debug(_(" ancestor %s local %s remote %s\n") %
                      (short(man), short(m1n), short(m2n)))

        merge = {}
        get = {}
        remove = []

        # construct a working dir manifest
        mw = m1.copy()
        mfw = mf1.copy()
        umap = dict.fromkeys(unknown)

        for f in added + modified + unknown:
            mw[f] = ""
            mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))

        if moddirstate and not wlock:
            wlock = self.wlock()

        for f in deleted + removed:
            if f in mw:
                del mw[f]

            # If we're jumping between revisions (as opposed to merging),
            # and if neither the working directory nor the target rev has
            # the file, then we need to remove it from the dirstate, to
            # prevent the dirstate from listing the file when it is no
            # longer in the manifest.
            if moddirstate and linear_path and f not in m2:
                self.dirstate.forget((f,))

        # Compare manifests
        for f, n in mw.iteritems():
            if choose and not choose(f):
                continue
            if f in m2:
                s = 0

                # is the wfile new since m1, and match m2?
                if f not in m1:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) == 0:
                        n = m2[f]
                    del t1, t2

                # are files different?
                if n != m2[f]:
                    a = ma.get(f, nullid)
                    # are both different from the ancestor?
                    if n != a and m2[f] != a:
                        self.ui.debug(_(" %s versions differ, resolve\n") % f)
                        # merge executable bits
                        # "if we changed or they changed, change in merge"
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        merge[f] = (m1.get(f, nullid), m2[f], mode)
                        s = 1
                    # are we clobbering?
                    # is remote's version newer?
                    # or are we going back in time?
                    elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
                        self.ui.debug(_(" remote %s is newer, get\n") % f)
                        get[f] = m2[f]
                        s = 1
                elif f in umap:
                    # this unknown file is the same as the checkout
                    get[f] = m2[f]

                if not s and mfw[f] != mf2[f]:
                    if force:
                        self.ui.debug(_(" updating permissions for %s\n") % f)
                        util.set_exec(self.wjoin(f), mf2[f])
                    else:
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        if mode != b:
                            self.ui.debug(_(" updating permissions for %s\n")
                                          % f)
                            util.set_exec(self.wjoin(f), mode)
                del m2[f]
            elif f in ma:
                if n != ma[f]:
                    r = _("d")
                    if not force and (linear_path or allow):
                        r = self.ui.prompt(
                            (_(" local changed %s which remote deleted\n") % f) +
                            _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                    if r == _("d"):
                        remove.append(f)
                else:
                    self.ui.debug(_("other deleted %s\n") % f)
                    remove.append(f) # other deleted it
            else:
                # file is created on branch or in working directory
                if force and f not in umap:
                    self.ui.debug(_("remote deleted %s, clobbering\n") % f)
                    remove.append(f)
                elif n == m1.get(f, nullid): # same as parent
                    if p2 == pa: # going backwards?
                        self.ui.debug(_("remote deleted %s\n") % f)
                        remove.append(f)
                    else:
                        self.ui.debug(_("local modified %s, keeping\n") % f)
                else:
                    self.ui.debug(_("working dir created %s, keeping\n") % f)

        for f, n in m2.iteritems():
            if choose and not choose(f):
                continue
            if f[0] == "/":
                continue
            if f in ma and n != ma[f]:
                r = _("k")
                if not force and (linear_path or allow):
                    r = self.ui.prompt(
                        (_("remote changed %s which local deleted\n") % f) +
                        _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                if r == _("k"):
                    get[f] = n
            elif f not in ma:
                self.ui.debug(_("remote created %s\n") % f)
                get[f] = n
            else:
                if force or p2 == pa: # going backwards?
                    self.ui.debug(_("local deleted %s, recreating\n") % f)
                    get[f] = n
                else:
                    self.ui.debug(_("local deleted %s\n") % f)

        del mw, m1, m2, ma

        if force:
            for f in merge:
                get[f] = merge[f][1]
            merge = {}

        if linear_path or force:
            # we don't need to do any magic, just jump to the new rev
            branch_merge = False
            p1, p2 = p2, nullid
        else:
            if not allow:
                self.ui.status(_("this update spans a branch"
                                 " affecting the following files:\n"))
                fl = merge.keys() + get.keys()
                fl.sort()
                for f in fl:
                    cf = ""
                    if f in merge:
                        cf = _(" (resolve)")
                    self.ui.status(" %s%s\n" % (f, cf))
                self.ui.warn(_("aborting update spanning branches!\n"))
                self.ui.status(_("(use update -m to merge across branches"
                                 " or -C to lose changes)\n"))
                return 1
            branch_merge = True

        # get the files we don't need to change
        files = get.keys()
        files.sort()
        for f in files:
            if f[0] == "/":
                continue
            self.ui.note(_("getting %s\n") % f)
            t = self.file(f).read(get[f])
            self.wwrite(f, t)
            util.set_exec(self.wjoin(f), mf2[f])
            if moddirstate:
                if branch_merge:
                    self.dirstate.update([f], 'n', st_mtime=-1)
                else:
                    self.dirstate.update([f], 'n')

        # merge the tricky bits
        files = merge.keys()
        files.sort()
        for f in files:
            self.ui.status(_("merging %s\n") % f)
            my, other, flag = merge[f]
            ret = self.merge3(f, my, other)
            if ret:
                err = True
            util.set_exec(self.wjoin(f), flag)
            if moddirstate:
                if branch_merge:
                    # We've done a branch merge, mark this file as merged
                    # so that we properly record the merger later
                    self.dirstate.update([f], 'm')
                else:
                    # We've update-merged a locally modified file, so
                    # we set the dirstate to emulate a normal checkout
                    # of that file some time in the past. Thus our
                    # merge will appear as a normal local file
                    # modification.
                    f_len = len(self.file(f).read(other))
                    self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)

        remove.sort()
        for f in remove:
            self.ui.note(_("removing %s\n") % f)
            try:
                util.unlink(self.wjoin(f))
            except OSError, inst:
                if inst.errno != errno.ENOENT:
                    self.ui.warn(_("update failed to remove %s: %s!\n") %
                                 (f, inst.strerror))
        if moddirstate:
            if branch_merge:
                self.dirstate.update(remove, 'r')
            else:
                self.dirstate.forget(remove)

        if moddirstate:
            self.dirstate.setparents(p1, p2)
        return err

    def merge3(self, fn, my, other):
        """perform a 3-way merge in the working directory"""

        def temp(prefix, node):
            pre = "%s~%s." % (os.path.basename(fn), prefix)
            (fd, name) = tempfile.mkstemp("", pre)
            f = os.fdopen(fd, "wb")
            self.wwrite(fn, fl.read(node), f)
            f.close()
            return name

        fl = self.file(fn)
        base = fl.ancestor(my, other)
        a = self.wjoin(fn)
        b = temp("base", base)
        c = temp("other", other)

        self.ui.note(_("resolving %s\n") % fn)
        self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
                      (fn, short(my), short(other), short(base)))

        cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
               or "hgmerge")
        r = os.system('%s "%s" "%s" "%s"' % (cmd, a, b, c))
        if r:
            self.ui.warn(_("merging %s failed!\n") % fn)

        os.unlink(b)
        os.unlink(c)
        return r

    def verify(self):
        filelinkrevs = {}
        filenodes = {}
        changesets = revisions = files = 0
        errors = [0]
        neededmanifests = {}

        def err(msg):
            self.ui.warn(msg + "\n")
            errors[0] += 1

        def checksize(obj, name):
            d = obj.checksize()
            if d[0]:
                err(_("%s data length off by %d bytes") % (name, d[0]))
            if d[1]:
                err(_("%s index contains %d extra bytes") % (name, d[1]))

        seen = {}
        self.ui.status(_("checking changesets\n"))
        checksize(self.changelog, "changelog")

        for i in range(self.changelog.count()):
            changesets += 1
            n = self.changelog.node(i)
            l = self.changelog.linkrev(n)
            if l != i:
                err(_("incorrect link (%d) for changeset revision %d") % (l, i))
            if n in seen:
                err(_("duplicate changeset at revision %d") % i)
            seen[n] = 1

            for p in self.changelog.parents(n):
                if p not in self.changelog.nodemap:
                    err(_("changeset %s has unknown parent %s") %
                        (short(n), short(p)))
            try:
                changes = self.changelog.read(n)
            except KeyboardInterrupt:
                self.ui.warn(_("interrupted"))
                raise
            except Exception, inst:
                err(_("unpacking changeset %s: %s") % (short(n), inst))

            neededmanifests[changes[0]] = n

            for f in changes[3]:
                filelinkrevs.setdefault(f, []).append(i)

        seen = {}
        self.ui.status(_("checking manifests\n"))
        checksize(self.manifest, "manifest")

        for i in range(self.manifest.count()):
            n = self.manifest.node(i)
            l = self.manifest.linkrev(n)

            if l < 0 or l >= self.changelog.count():
                err(_("bad manifest link (%d) at revision %d") % (l, i))

            if n in neededmanifests:
                del neededmanifests[n]

            if n in seen:
                err(_("duplicate manifest at revision %d") % i)

            seen[n] = 1

            for p in self.manifest.parents(n):
                if p not in self.manifest.nodemap:
                    err(_("manifest %s has unknown parent %s") %
                        (short(n), short(p)))

            try:
                delta = mdiff.patchtext(self.manifest.delta(n))
            except KeyboardInterrupt:
1785 except KeyboardInterrupt:
1779 self.ui.warn(_("interrupted"))
1786 self.ui.warn(_("interrupted"))
1780 raise
1787 raise
1781 except Exception, inst:
1788 except Exception, inst:
1782 err(_("unpacking manifest %s: %s") % (short(n), inst))
1789 err(_("unpacking manifest %s: %s") % (short(n), inst))
1783
1790
1784 ff = [ l.split('\0') for l in delta.splitlines() ]
1791 ff = [ l.split('\0') for l in delta.splitlines() ]
1785 for f, fn in ff:
1792 for f, fn in ff:
1786 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
1793 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
1787
1794
1788 self.ui.status(_("crosschecking files in changesets and manifests\n"))
1795 self.ui.status(_("crosschecking files in changesets and manifests\n"))
1789
1796
1790 for m, c in neededmanifests.items():
1797 for m, c in neededmanifests.items():
1791 err(_("Changeset %s refers to unknown manifest %s") %
1798 err(_("Changeset %s refers to unknown manifest %s") %
1792 (short(m), short(c)))
1799 (short(m), short(c)))
1793 del neededmanifests
1800 del neededmanifests
1794
1801
1795 for f in filenodes:
1802 for f in filenodes:
1796 if f not in filelinkrevs:
1803 if f not in filelinkrevs:
1797 err(_("file %s in manifest but not in changesets") % f)
1804 err(_("file %s in manifest but not in changesets") % f)
1798
1805
1799 for f in filelinkrevs:
1806 for f in filelinkrevs:
1800 if f not in filenodes:
1807 if f not in filenodes:
1801 err(_("file %s in changeset but not in manifest") % f)
1808 err(_("file %s in changeset but not in manifest") % f)
1802
1809
1803 self.ui.status(_("checking files\n"))
1810 self.ui.status(_("checking files\n"))
1804 ff = filenodes.keys()
1811 ff = filenodes.keys()
1805 ff.sort()
1812 ff.sort()
1806 for f in ff:
1813 for f in ff:
1807 if f == "/dev/null":
1814 if f == "/dev/null":
1808 continue
1815 continue
1809 files += 1
1816 files += 1
1810 fl = self.file(f)
1817 fl = self.file(f)
1811 checksize(fl, f)
1818 checksize(fl, f)
1812
1819
1813 nodes = {nullid: 1}
1820 nodes = {nullid: 1}
1814 seen = {}
1821 seen = {}
1815 for i in range(fl.count()):
1822 for i in range(fl.count()):
1816 revisions += 1
1823 revisions += 1
1817 n = fl.node(i)
1824 n = fl.node(i)
1818
1825
1819 if n in seen:
1826 if n in seen:
1820 err(_("%s: duplicate revision %d") % (f, i))
1827 err(_("%s: duplicate revision %d") % (f, i))
1821 if n not in filenodes[f]:
1828 if n not in filenodes[f]:
1822 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
1829 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
1823 else:
1830 else:
1824 del filenodes[f][n]
1831 del filenodes[f][n]
1825
1832
1826 flr = fl.linkrev(n)
1833 flr = fl.linkrev(n)
1827 if flr not in filelinkrevs[f]:
1834 if flr not in filelinkrevs[f]:
1828 err(_("%s:%s points to unexpected changeset %d")
1835 err(_("%s:%s points to unexpected changeset %d")
1829 % (f, short(n), flr))
1836 % (f, short(n), flr))
1830 else:
1837 else:
1831 filelinkrevs[f].remove(flr)
1838 filelinkrevs[f].remove(flr)
1832
1839
1833 # verify contents
1840 # verify contents
1834 try:
1841 try:
1835 t = fl.read(n)
1842 t = fl.read(n)
1836 except KeyboardInterrupt:
1843 except KeyboardInterrupt:
1837 self.ui.warn(_("interrupted"))
1844 self.ui.warn(_("interrupted"))
1838 raise
1845 raise
1839 except Exception, inst:
1846 except Exception, inst:
1840 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
1847 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
1841
1848
1842 # verify parents
1849 # verify parents
1843 (p1, p2) = fl.parents(n)
1850 (p1, p2) = fl.parents(n)
1844 if p1 not in nodes:
1851 if p1 not in nodes:
1845 err(_("file %s:%s unknown parent 1 %s") %
1852 err(_("file %s:%s unknown parent 1 %s") %
1846 (f, short(n), short(p1)))
1853 (f, short(n), short(p1)))
1847 if p2 not in nodes:
1854 if p2 not in nodes:
1848 err(_("file %s:%s unknown parent 2 %s") %
1855 err(_("file %s:%s unknown parent 2 %s") %
1849 (f, short(n), short(p1)))
1856 (f, short(n), short(p1)))
1850 nodes[n] = 1
1857 nodes[n] = 1
1851
1858
1852 # cross-check
1859 # cross-check
1853 for node in filenodes[f]:
1860 for node in filenodes[f]:
1854 err(_("node %s in manifests not in %s") % (hex(node), f))
1861 err(_("node %s in manifests not in %s") % (hex(node), f))
1855
1862
1856 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
1863 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
1857 (files, changesets, revisions))
1864 (files, changesets, revisions))
1858
1865
1859 if errors[0]:
1866 if errors[0]:
1860 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
1867 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
1861 return 1
1868 return 1
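The verify code above tallies errors with a one-element list (`errors = [0]`) so that the nested `err()` helper can update the count; Python 2, which this code targets, has no `nonlocal` statement, so mutating a shared container is the usual workaround for rebinding an enclosing-scope name. A minimal sketch of the idiom (function and variable names here are illustrative, not Mercurial's):

```python
# Counter shared with a nested helper via a mutable one-element list,
# mirroring the errors[0] idiom in verify(). Mutating errors[0] works
# in both Python 2 and 3; rebinding a plain integer inside err() would
# not, absent Python 3's `nonlocal`.
def check(items):
    errors = [0]

    def err(msg):
        # mutate, don't rebind: the list object is shared with check()
        errors[0] += 1

    for item in items:
        if item < 0:
            err("negative item: %r" % item)
    return errors[0]
```

For example, `check([1, -2, 3, -4])` reports two errors, one per negative item.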
@@ -1,59 +1,62 @@
 # lock.py - simple locking scheme for mercurial
 #
 # Copyright 2005 Matt Mackall <mpm@selenic.com>
 #
 # This software may be used and distributed according to the terms
 # of the GNU General Public License, incorporated herein by reference.
 
 import errno, os, time
 import util
 
 class LockException(Exception):
     pass
 class LockHeld(LockException):
     pass
 class LockUnavailable(LockException):
     pass
 
 class lock(object):
-    def __init__(self, file, wait=1, releasefn=None):
+    def __init__(self, file, timeout=-1, releasefn=None):
         self.f = file
         self.held = 0
-        self.wait = wait
+        self.timeout = timeout
         self.releasefn = releasefn
         self.lock()
 
     def __del__(self):
         self.release()
 
     def lock(self):
+        timeout = self.timeout
         while 1:
             try:
                 self.trylock()
                 return 1
             except LockHeld, inst:
-                if self.wait:
+                if timeout != 0:
                     time.sleep(1)
+                    if timeout > 0:
+                        timeout -= 1
                     continue
                 raise inst
 
     def trylock(self):
         pid = os.getpid()
         try:
             util.makelock(str(pid), self.f)
             self.held = 1
         except (OSError, IOError), why:
             if why.errno == errno.EEXIST:
                 raise LockHeld(util.readlock(self.f))
             else:
                 raise LockUnavailable(why)
 
     def release(self):
         if self.held:
             self.held = 0
             if self.releasefn:
                 self.releasefn()
             try:
                 os.unlink(self.f)
             except: pass
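The revised `lock()` above polls once per second: a countdown copied from `self.timeout` is decremented after each failed attempt when it is positive, `-1` means wait forever, and `0` means fail immediately by re-raising `LockHeld`. A self-contained sketch of that retry-with-timeout pattern, under the assumption of a pluggable `trylock`/`sleep` (names here are illustrative, not Mercurial's API):

```python
# Retry-with-timeout pattern from lock.lock(): keep retrying while the
# countdown is nonzero, sleeping between attempts; timeout semantics are
# -1 = wait forever, 0 = fail immediately, N > 0 = give up after ~N tries.
class LockHeld(Exception):
    pass

def acquire_with_timeout(trylock, timeout=-1, sleep=lambda s: None):
    """Retry trylock() until it succeeds or the countdown runs out."""
    while True:
        try:
            trylock()
            return True
        except LockHeld:
            if timeout != 0:
                sleep(1)          # real code sleeps one second per retry
                if timeout > 0:
                    timeout -= 1  # -1 never decrements: wait forever
                continue
            raise                 # countdown exhausted: propagate LockHeld

# Usage: a fake lock that frees itself after three failed attempts.
attempts = [0]
def trylock():
    attempts[0] += 1
    if attempts[0] <= 3:
        raise LockHeld("held by another process")
```

With this fake, `acquire_with_timeout(trylock, timeout=10)` returns `True` on the fourth attempt, while `timeout=0` raises `LockHeld` straight away, matching the behavior the diff adds.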