fix names of parent changeset ids in hooks....
Vadim Gelfer
r1727:019e6a47 default
@@ -1,281 +1,285 @@
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.
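The syntax rules above (sections, "name=value" or "name: value", indented
continuation lines, "#"/";" comments, and DEFAULT-section format strings)
closely match what Python's standard configparser module accepts, so a small
sketch with it can illustrate how such a file is read. The section and key
names here are just the example from the text, not anything Mercurial-specific:

```python
# Sketch: reading an hgrc-style file with Python's standard configparser.
# The [spam] section and its keys are the illustrative example from the
# SYNTAX section above.
import configparser

text = """
[spam]
eggs=ham
green=
 eggs
; a comment
"""

parser = configparser.ConfigParser()
parser.read_string(text)

print(parser.get("spam", "eggs"))           # -> ham
# The indented line is joined to "green" as a continuation of that entry.
print(parser.get("spam", "green").split())  # -> ['eggs']
```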

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

hooks::
  Commands that get automatically executed by various actions such as
  starting or finishing a commit. Multiple commands can be run for
  the same action by appending a suffix to the action. Overriding a
  site-wide hook can be done by changing its value or setting it to
  an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give useful
  additional information. For each hook below, the environment
  variables it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push or pull. The ID
    of the first new changeset is in $HG_NODE.
  commit;;
    Run after a changeset has been created in the local repository.
    The ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is
    in $HG_NODE.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. The ID of
    the changeset to tag is in $HG_NODE. The name of the tag is in
    $HG_TAG. The tag is local if $HG_LOCAL=1, in the repository if
    $HG_LOCAL=0.
  pretxncommit;;
    Run after a changeset has been created, but before the
    transaction is committed. The changeset is visible to the hook
    program, which lets you validate the commit message and changes.
    Exit status 0 allows the commit to proceed. Non-zero status will
    cause the transaction to be rolled back. The ID of the changeset
    is in $HG_NODE. Parent changeset IDs are in $HG_PARENT1 and
    $HG_PARENT2.
  tag;;
    Run after a tag is created. The ID of the tagged changeset is in
    $HG_NODE. The name of the tag is in $HG_TAG. The tag is local if
    $HG_LOCAL=1, in the repository if $HG_LOCAL=0.
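Since the variables above are simply placed in the hook command's
environment, a hook can be any executable script that reads them. As a
minimal sketch, a "commit" hook might look like the following (the helper
name commit_hook and the script itself are invented for illustration; only
the $HG_NODE/$HG_PARENT1/$HG_PARENT2 names come from the list above):

```python
#!/usr/bin/env python
# Sketch of a "commit" hook script. Mercurial exports hook data as
# environment variables, so the script just reads os.environ.
import os

def commit_hook(environ):
    """Summarize the commit described by the HG_* variables that
    Mercurial places in the hook's environment."""
    node = environ.get("HG_NODE", "")      # ID of the new changeset
    p1 = environ.get("HG_PARENT1", "")     # first parent changeset ID
    p2 = environ.get("HG_PARENT2", "")     # second parent (empty unless a merge)
    return "committed %s (parents: %s %s)" % (node, p1, p2)

if __name__ == "__main__":
    print(commit_hook(os.environ))
```

A script like this would be wired up in .hg/hgrc as, e.g.,
"commit = /path/to/this/script".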

In earlier releases, the names of hook environment variables did not
have a "HG_" prefix. These unprefixed names are still provided in
the environment for backwards compatibility, but their use is
deprecated, and they will be removed in a future release.

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository.
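For example, a paths section might look like this (the symbolic names and
locations below are invented for illustration):

```
[paths]
# "hg pull upstream" will pull from the URL below
upstream = http://example.com/hg/some-repo
backup = /mnt/backup/some-repo
```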

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  interactive;;
    Allow prompting the user. True or False. Default is True.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is
    False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is
    "hg".
  ssh;;
    Command to use for SSH connections. Default is "ssh".
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
  verbose;;
    Increase the amount of output printed. True or False. Default is
    False.

web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allowbz2;;
    Whether to allow .tar.bz2 downloading of repository revisions.
    Default is false.
  allowgz;;
    Whether to allow .tar.gz downloading of repository revisions.
    Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allowzip;;
    Whether to allow .zip downloading of repository revisions.
    Default is false. This feature creates temporary files.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is the
    current working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is
    10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is the install path.


AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005 Matt Mackall.
Free use of this software is granted under the terms of the GNU
General Public License (GPL).
@@ -1,1850 +1,1851 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import struct, os, util
import filelog, manifest, changelog, dirstate, repo
from node import *
from i18n import gettext as _
from demandload import *
demandload(globals(), "re lock transaction tempfile stat mdiff errno")

class localrepository(object):
    def __init__(self, ui, path=None, create=0):
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp:
                    raise repo.RepoError(_("no repo found"))
            path = p
        self.path = os.path.join(path, ".hg")

        if not create and not os.path.isdir(self.path):
            raise repo.RepoError(_("repository %s not found") % path)

        self.root = os.path.abspath(path)
        self.ui = ui
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)
        self.manifest = manifest.manifest(self.opener)
        self.changelog = changelog.changelog(self.opener)
        self.tagscache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None

        if create:
            os.mkdir(self.path)
            os.mkdir(self.join("data"))

        self.dirstate = dirstate.dirstate(self.opener, ui, self.root)
        try:
            self.ui.readconfig(self.join("hgrc"))
        except IOError:
            pass

    def hook(self, name, throw=False, **args):
        def runhook(name, cmd):
            self.ui.note(_("running hook %s: %s\n") % (name, cmd))
            old = {}
            for k, v in args.items():
                k = k.upper()
                old['HG_' + k] = os.environ.get(k, None)
                old[k] = os.environ.get(k, None)
                os.environ['HG_' + k] = str(v)
                os.environ[k] = str(v)

            try:
                # Hooks run in the repository root
                olddir = os.getcwd()
                os.chdir(self.root)
                r = os.system(cmd)
            finally:
                for k, v in old.items():
                    if v is not None:
                        os.environ[k] = v
                    else:
                        del os.environ[k]

                os.chdir(olddir)

            if r:
                desc, r = util.explain_exit(r)
                if throw:
                    raise util.Abort(_('%s hook %s') % (name, desc))
                self.ui.warn(_('error: %s hook %s\n') % (name, desc))
                return False
            return True

        r = True
        for hname, cmd in self.ui.configitems("hooks"):
            s = hname.split(".")
            if s[0] == name and cmd:
                r = runhook(hname, cmd) and r
        return r

    def tags(self):
        '''return a mapping of tag to node'''
        if not self.tagscache:
            self.tagscache = {}
            def addtag(self, k, n):
                try:
                    bin_n = bin(n)
                except TypeError:
                    bin_n = ''
                self.tagscache[k.strip()] = bin_n

            try:
                # read each head of the tags file, ending with the tip
                # and add each tag found to the map, with "newer" ones
                # taking precedence
                fl = self.file(".hgtags")
                h = fl.heads()
                h.reverse()
                for r in h:
                    for l in fl.read(r).splitlines():
                        if l:
                            n, k = l.split(" ", 1)
                            addtag(self, k, n)
            except KeyError:
                pass

            try:
                f = self.opener("localtags")
                for l in f:
                    n, k = l.split(" ", 1)
                    addtag(self, k, n)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def lookup(self, key):
        try:
            return self.tags()[key]
        except KeyError:
            try:
                return self.changelog.lookup(key)
            except:
                raise repo.RepoError(_("unknown revision '%s'") % key)

    def dev(self):
        return os.stat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.opener, f)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        # save dirstate for undo
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        def after():
            util.rename(self.join("journal"), self.join("undo"))
            util.rename(self.join("journal.dirstate"),
                        self.join("undo.dirstate"))

        return transaction.transaction(self.ui.warn, self.opener,
                                       self.join("journal"), after)

    def recover(self):
        lock = self.lock()
        if os.path.exists(self.join("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.opener, self.join("journal"))
            self.manifest = manifest.manifest(self.opener)
            self.changelog = changelog.changelog(self.opener)
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def undo(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        lock = self.lock()
249 if os.path.exists(self.join("undo")):
249 if os.path.exists(self.join("undo")):
250 self.ui.status(_("rolling back last transaction\n"))
250 self.ui.status(_("rolling back last transaction\n"))
251 transaction.rollback(self.opener, self.join("undo"))
251 transaction.rollback(self.opener, self.join("undo"))
252 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
252 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
253 self.dirstate.read()
253 self.dirstate.read()
254 else:
254 else:
255 self.ui.warn(_("no undo information available\n"))
255 self.ui.warn(_("no undo information available\n"))
256
256
257 def lock(self, wait=1):
257 def lock(self, wait=1):
258 try:
258 try:
259 return lock.lock(self.join("lock"), 0)
259 return lock.lock(self.join("lock"), 0)
260 except lock.LockHeld, inst:
260 except lock.LockHeld, inst:
261 if wait:
261 if wait:
262 self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
262 self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
263 return lock.lock(self.join("lock"), wait)
263 return lock.lock(self.join("lock"), wait)
264 raise inst
264 raise inst
265
265
    def wlock(self, wait=1):
        try:
            wlock = lock.lock(self.join("wlock"), 0, self.dirstate.write)
        except lock.LockHeld, inst:
            if not wait:
                raise inst
            self.ui.warn(_("waiting for lock held by %s\n") % inst.args[0])
            wlock = lock.lock(self.join("wlock"), wait, self.dirstate.write)
        self.dirstate.read()
        return wlock

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        orig_parent = self.dirstate.parents()[0] or nullid
        p1 = p1 or self.dirstate.parents()[0] or nullid
        p2 = p2 or self.dirstate.parents()[1] or nullid
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])
        changed = []

        if orig_parent == p1:
            update_dirstate = 1
        else:
            update_dirstate = 0

        if not wlock:
            wlock = self.wlock()
        lock = self.lock()
        tr = self.transaction()
        mm = m1.copy()
        mfm = mf1.copy()
        linkrev = self.changelog.count()
        for f in files:
            try:
                t = self.wread(f)
                tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
                r = self.file(f)
                mfm[f] = tm

                fp1 = m1.get(f, nullid)
                fp2 = m2.get(f, nullid)

                # is the same revision on two branches of a merge?
                if fp2 == fp1:
                    fp2 = nullid

                if fp2 != nullid:
                    # is one parent an ancestor of the other?
                    fpa = r.ancestor(fp1, fp2)
                    if fpa == fp1:
                        fp1, fp2 = fp2, nullid
                    elif fpa == fp2:
                        fp2 = nullid

                # is the file unmodified from the parent?
                if t == r.read(fp1):
                    # record the proper existing parent in manifest
                    # no need to add a revision
                    mm[f] = fp1
                    continue

                mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
                changed.append(f)
                if update_dirstate:
                    self.dirstate.update([f], "n")
            except IOError:
                try:
                    del mm[f]
                    del mfm[f]
                    if update_dirstate:
                        self.dirstate.forget([f])
                except:
                    # deleted from p2?
                    pass

        mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
        user = user or self.ui.username()
        n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
        tr.close()
        if update_dirstate:
            self.dirstate.setparents(n, nullid)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, wlock=None):
        commit = []
        remove = []
        changed = []

        if files:
            for f in files:
                s = self.dirstate.state(f)
                if s in 'nmai':
                    commit.append(f)
                elif s == 'r':
                    remove.append(f)
                else:
                    self.ui.warn(_("%s not tracked!\n") % f)
        else:
            modified, added, removed, deleted, unknown = self.changes(match=match)
            commit = modified + added
            remove = removed

        p1, p2 = self.dirstate.parents()
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])

        if not commit and not remove and not force and p2 == nullid:
            self.ui.status(_("nothing changed\n"))
            return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
                t = self.wread(f)
            except IOError:
                self.ui.warn(_("trouble committing %s!\n") % f)
                raise

            r = self.file(f)

            meta = {}
            cp = self.dirstate.copied(f)
            if cp:
                meta["copy"] = cp
                meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
                self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
                fp1, fp2 = nullid, nullid
            else:
                fp1 = m1.get(f, nullid)
                fp2 = m2.get(f, nullid)

            if fp2 != nullid:
                # is one parent an ancestor of the other?
                fpa = r.ancestor(fp1, fp2)
                if fpa == fp1:
                    fp1, fp2 = fp2, nullid
                elif fpa == fp2:
                    fp2 = nullid

            # is the file unmodified from the parent?
            if not meta and t == r.read(fp1) and fp2 == nullid:
                # record the proper existing parent in manifest
                # no need to add a revision
                new[f] = fp1
                continue

            new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
            # remember what we've added so that we can later calculate
            # the files to pull from a set of changesets
            changed.append(f)

        # update manifest
        m1 = m1.copy()
        m1.update(new)
        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
                               (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        if not text:
            edittext = [""]
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            edittext = self.ui.edit("\n".join(edittext))
            os.chdir(olddir)
            if not edittext.rstrip():
                return None
            text = edittext

        user = user or self.ui.username()
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        self.dirstate.setparents(n)
        self.dirstate.update(new, "n")
        self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

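    # The keyword arguments passed to self.hook() in commit() above are
    # exported to external hooks as uppercase HG_* environment variables,
    # so with the parent1/parent2 names a shell hook reads HG_PARENT1 and
    # HG_PARENT2. A minimal sketch of a matching hgrc entry (the echo
    # command is illustrative, not part of this source):
    #
    #   [hooks]
    #   commit = echo "committed $HG_NODE, parents: $HG_PARENT1 $HG_PARENT2"
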
    def walk(self, node=None, files=[], match=util.always):
        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                fdict.pop(fn, None)
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                self.ui.warn(_('%s: No such file in rev %s\n') % (
                    util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match):
                yield src, fn

    def changes(self, node1=None, node2=None, files=[], match=util.always,
                wlock=None):
        """return changes between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            t2 = self.file(fn).read(mf.get(fn, nullid))
            return cmp(t1, t2)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = dict(self.manifest.read(change[0]))
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockHeld:
                    wlock = None
            lookup, modified, added, removed, deleted, unknown = (
                self.dirstate.changes(files, match))

            # are we comparing working dir against its parent?
            if not node1:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        elif wlock is not None:
                            self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            deleted, unknown = [], []
            mf2 = mfmatches(node2)

        if node1:
            # flush lists from dirstate before comparing manifests
            modified, added = [], []

            mf1 = mfmatches(node1)

            for fn in mf2:
                if mf1.has_key(fn):
                    if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
                        modified.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown:
            l.sort()
        return (modified, added, removed, deleted, unknown)

    def add(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.ui.warn(_("%s never committed!\n") % f)
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        mf = self.manifest.readflags(mn)
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn("%s not removed!\n" % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), mf[f])
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [ h for h in heads ]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head. The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while n:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r
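    # between() above walks first parents from top toward bottom, keeping
    # the nodes whose distance from top is 1, 2, 4, 8, ... (i == f with f
    # doubling each hit). A standalone sketch of that spacing on a linear
    # chain (names illustrative, not part of this source):
    #
    #   picks = []
    #   f = 1
    #   for i in range(1, 10):    # i = distance from top
    #       if i == f:
    #           picks.append(i)
    #           f = f * 2
    #   # picks == [1, 2, 4, 8]
    #
    # This exponential sampling is what lets findincoming() below narrow
    # down the common ancestors with roughly O(log n) round trips.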
791
792
    def findincoming(self, remote, base=None, heads=None):
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        if not heads:
            heads = remote.heads()

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return None

        rep = {}
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid:
                    break
                if n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                if n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                            base[n[2]] = 1 # latest known
                            continue

                    for a in n[2:4]:
                        if a not in rep:
                            r.append(a)
                            rep[a] = 1

                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        if b[0] in m:
                            self.ui.debug(_("found base node %s\n")
                                          % short(b[0]))
                            base[b[0]] = 1
                        elif b[0] not in seen:
                            unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ")
                                     + short(f[:4]))

        if base.keys() == [nullid]:
            self.ui.warn(_("warning: pulling from an unrelated repository!\n"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

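The narrowing loop above walks the power-of-two samples returned by `remote.between` until it crosses from unknown into known territory; if the gap is already 1 or 2 it has pinned the boundary, otherwise it queues the sub-range for another round. A simplified standalone sketch of that idea on a purely linear toy history (function and parameter names are invented for illustration; the real code works on full branch tuples and tracks `fetch`/`base` sets rather than returning a pair):

```python
def find_boundary(top, bottom, known, parent):
    """Pin down the known/unknown boundary on the first-parent chain
    bottom..top, mirroring findincoming's narrowing loop.  Queries the
    chain at power-of-two gaps; assumes `bottom` itself is known.
    Returns (earliest unknown node, latest known node)."""
    def samples(t, b):
        # same spacing the `between` wire call would return
        n, l, i, f = t, [], 0, 1
        while n != b:
            if i == f:
                l.append(n)
                f *= 2
            n = parent(n)
            i += 1
        return l

    search = [(top, bottom)]
    while search:
        t, b = search.pop(0)
        l = samples(t, b) + [b]
        p, f = t, 1
        for i in l:
            if i in known:
                if f <= 2:
                    return p, i          # adjacent pair: boundary found
                search.append((p, i))    # narrow into the sub-range
                break
            p, f = i, f * 2

# On a linear history 0..100 where only 0..50 are known, the boundary
# sits between 51 and 50:
print(find_boundary(100, 0, set(range(51)), lambda x: x - 1))   # -> (51, 50)
```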
    def findoutgoing(self, remote, base=None, heads=None):
        if base is None:
            base = {}
            self.findincoming(remote, base, heads)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)

        # this is the set of all roots we have to push
        return subset

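`findoutgoing` deletes everything reachable from the common `base` nodes, then reports the roots of what is left. The same idea on a toy dict-based DAG (a sketch with invented names, not the changelog API):

```python
def outgoing_roots(parents, common):
    """parents: node -> tuple of parent nodes (None entries ignored).
    Delete every ancestor of a node in `common`, then return the roots
    of whatever remains -- the changesets the remote side is missing."""
    remain = set(parents)
    remove = list(common)
    while remove:
        n = remove.pop()
        if n in remain:
            remain.discard(n)
            remove.extend(p for p in parents[n] if p is not None)
    # a root is a node both of whose parents were pruned away
    return sorted(n for n in remain
                  if all(p not in remain for p in parents[n]))

# b and its ancestor a are common, so c and d are the outgoing roots:
dag = {"a": (None,), "b": ("a",), "c": ("b",), "d": ("b",), "e": ("c", "d")}
print(outgoing_roots(dag, ["b"]))   # -> ['c', 'd']
```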
    def pull(self, remote, heads=None):
        lock = self.lock()

        # if we have an empty repo, fetch everything
        if self.changelog.tip() == nullid:
            self.ui.status(_("requesting all changes\n"))
            fetch = [nullid]
        else:
            fetch = self.findincoming(remote)

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 1

        if heads is None:
            cg = remote.changegroup(fetch)
        else:
            cg = remote.changegroupsubset(fetch, heads)
        return self.addchangegroup(cg)

    def push(self, remote, force=False):
        lock = remote.lock()

        base = {}
        heads = remote.heads()
        inc = self.findincoming(remote, base, heads)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync? use push -f to force)\n"))
            return 1

        update = self.findoutgoing(remote, base)
        if not update:
            self.ui.status(_("no changes found\n"))
            return 1
        elif not force:
            if len(heads) < len(self.changelog.heads()):
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return 1

        cg = self.changegroup(update)
        return remote.addchangegroup(cg)

    def changegroupsubset(self, bases, heads):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to."""

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)

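`prune_parents` encodes the rule that a recipient holding a node must also hold all of that node's ancestors, so those ancestors can be dropped from the missing set. A minimal sketch of the same pruning over a toy parent table (names are illustrative, not the revlog API):

```python
def prune_missing(parents, has, missing):
    """Drop from `missing` every node in `has` plus all of their
    ancestors: if the recipient has a node, it has its ancestors too."""
    has = set(has)
    stack = list(has)
    while stack:
        n = stack.pop()
        for p in parents.get(n, ()):
            if p is not None and p not in has:
                has.add(p)
                stack.append(p)
    for n in has:
        missing.discard(n)
    return missing

# Linear history 0 <- 1 <- 2 <- 3; the recipient is known to have rev 2,
# so only rev 3 is genuinely missing:
parents = {3: (2,), 2: (1,), 1: (0,), 0: (None,)}
print(prune_missing(parents, {2}, {0, 1, 2, 3}))   # -> {3}
```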
        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup.  The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and the total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file; let's
        # remove all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generating function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Look up the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if msng_filenode_set.has_key(fname):
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield struct.pack(">l", len(fname) + 4) + fname
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        yield chnk
                if msng_filenode_set.has_key(fname):
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield struct.pack(">l", 0)

        return util.chunkbuffer(gengroup())

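The `struct.pack(">l", ...)` calls in `gengroup` define the changegroup's framing: each chunk is preceded by a 4-byte big-endian length that includes the length word itself, and a zero length terminates the group. A small round-trip sketch of that framing (helper names invented for illustration):

```python
import struct

def frame(payload):
    # each chunk is prefixed with a 4-byte big-endian length that
    # counts the length word itself
    return struct.pack(">l", len(payload) + 4) + payload

def unframe(data):
    # read chunks back until the zero-length terminator
    pos, chunks = 0, []
    while True:
        (l,) = struct.unpack_from(">l", data, pos)
        pos += 4
        if l == 0:
            return chunks
        chunks.append(data[pos:pos + l - 4])
        pos += l - 4

stream = frame(b"hello") + frame(b"world") + struct.pack(">l", 0)
print(unframe(stream))   # -> [b'hello', b'world']
```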
    def changegroup(self, basenodes):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""
        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])

        def identity(x):
            return x

        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield struct.pack(">l", len(fname) + 4) + fname
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield struct.pack(">l", 0)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source):

        def getchunk():
            d = source.read(4)
            if not d:
                return ""
            l = struct.unpack(">l", d)[0]
            if l <= 4:
                return ""
            d = source.read(l - 4)
            if len(d) < l - 4:
                raise repo.RepoError(_("premature EOF reading chunk"
                                       " (got %d bytes, expected %d)")
                                     % (len(d), l - 4))
            return d

        def getgroup():
            while 1:
                c = getchunk()
                if not c:
                    break
                yield c

        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return self.changelog.count()

        def revmap(x):
            return self.changelog.rev(x)

        if not source:
            return
        changesets = files = revisions = 0

        tr = self.transaction()

        oldheads = len(self.changelog.heads())

        # pull off the changeset group
        self.ui.status(_("adding changesets\n"))
        co = self.changelog.tip()
        cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
        cnr, cor = map(self.changelog.rev, (cn, co))
        if cn == nullid:
            cnr = cor
        changesets = cnr - cor

        # pull off the manifest group
        self.ui.status(_("adding manifests\n"))
        mm = self.manifest.tip()
        mo = self.manifest.addgroup(getgroup(), revmap, tr)

        # process the files
        self.ui.status(_("adding file changes\n"))
        while 1:
            f = getchunk()
            if not f:
                break
            self.ui.debug(_("adding %s revisions\n") % f)
            fl = self.file(f)
            o = fl.count()
            n = fl.addgroup(getgroup(), revmap, tr)
            revisions += fl.count() - o
            files += 1

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads > oldheads:
            heads = _(" (+%d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)))

            for i in range(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)))

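# addchangegroup derives the number of new changesets purely from revision
# numbers: with cor the changelog tip revision before the pull and cnr the
# tip after it, the new revisions occupy exactly cor+1 .. cnr, so one
# "incoming" hook fires per added changeset. A small sketch of that range
# arithmetic with plain integers standing in for changelog revisions (the
# helper name is ours, not Mercurial's):

```python
def incoming_revs(cor, cnr):
    # cor: tip revision before addgroup; cnr: tip revision after.
    # New changesets occupy the half-open revision range (cor, cnr].
    return list(range(cor + 1, cnr + 1))

assert incoming_revs(41, 44) == [42, 43, 44]  # 44 - 41 == 3 new changesets
assert incoming_revs(41, 41) == []            # nothing was pulled
```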
    def update(self, node, allow=False, force=False, choose=None,
               moddirstate=True, forcemerge=False, wlock=None):
        pl = self.dirstate.parents()
        if not force and pl[1] != nullid:
            self.ui.warn(_("aborting: outstanding uncommitted merges\n"))
            return 1

        err = False

        p1, p2 = pl[0], node
        pa = self.changelog.ancestor(p1, p2)
        m1n = self.changelog.read(p1)[0]
        m2n = self.changelog.read(p2)[0]
        man = self.manifest.ancestor(m1n, m2n)
        m1 = self.manifest.read(m1n)
        mf1 = self.manifest.readflags(m1n)
        m2 = self.manifest.read(m2n).copy()
        mf2 = self.manifest.readflags(m2n)
        ma = self.manifest.read(man)
        mfa = self.manifest.readflags(man)

        modified, added, removed, deleted, unknown = self.changes()

        # is this a jump, or a merge? i.e. is there a linear path
        # from p1 to p2?
        linear_path = (pa == p1 or pa == p2)

        if allow and linear_path:
            raise util.Abort(_("there is nothing to merge, "
                               "just use 'hg update'"))
        if allow and not forcemerge:
            if modified or added or removed:
                raise util.Abort(_("outstanding uncommitted changes"))
        if not forcemerge and not force:
            for f in unknown:
                if f in m2:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) != 0:
                        raise util.Abort(_("'%s' already exists in the working"
                                           " dir and differs from remote") % f)

        # resolve the manifest to determine which files
        # we care about merging
        self.ui.note(_("resolving manifests\n"))
        self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
                      (force, allow, moddirstate, linear_path))
        self.ui.debug(_(" ancestor %s local %s remote %s\n") %
                      (short(man), short(m1n), short(m2n)))

        merge = {}
        get = {}
        remove = []

        # construct a working dir manifest
        mw = m1.copy()
        mfw = mf1.copy()
        umap = dict.fromkeys(unknown)

        for f in added + modified + unknown:
            mw[f] = ""
            mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))

        if moddirstate and not wlock:
            wlock = self.wlock()

        for f in deleted + removed:
            if f in mw:
                del mw[f]

            # If we're jumping between revisions (as opposed to merging),
            # and if neither the working directory nor the target rev has
            # the file, then we need to remove it from the dirstate, to
            # prevent the dirstate from listing the file when it is no
            # longer in the manifest.
            if moddirstate and linear_path and f not in m2:
                self.dirstate.forget((f,))

        # Compare manifests
        for f, n in mw.iteritems():
            if choose and not choose(f):
                continue
            if f in m2:
                s = 0

                # is the wfile new since m1, and match m2?
                if f not in m1:
                    t1 = self.wread(f)
                    t2 = self.file(f).read(m2[f])
                    if cmp(t1, t2) == 0:
                        n = m2[f]
                    del t1, t2

                # are files different?
                if n != m2[f]:
                    a = ma.get(f, nullid)
                    # are both different from the ancestor?
                    if n != a and m2[f] != a:
                        self.ui.debug(_(" %s versions differ, resolve\n") % f)
                        # merge executable bits
                        # "if we changed or they changed, change in merge"
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        merge[f] = (m1.get(f, nullid), m2[f], mode)
                        s = 1
                    # are we clobbering?
                    # is remote's version newer?
                    # or are we going back in time?
                    elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
                        self.ui.debug(_(" remote %s is newer, get\n") % f)
                        get[f] = m2[f]
                        s = 1
                elif f in umap:
                    # this unknown file is the same as the checkout
                    get[f] = m2[f]

                if not s and mfw[f] != mf2[f]:
                    if force:
                        self.ui.debug(_(" updating permissions for %s\n") % f)
                        util.set_exec(self.wjoin(f), mf2[f])
                    else:
                        a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
                        mode = ((a^b) | (a^c)) ^ a
                        if mode != b:
                            self.ui.debug(_(" updating permissions for %s\n")
                                          % f)
                            util.set_exec(self.wjoin(f), mode)
                del m2[f]
            elif f in ma:
                if n != ma[f]:
                    r = _("d")
                    if not force and (linear_path or allow):
                        r = self.ui.prompt(
                            (_(" local changed %s which remote deleted\n") % f) +
                            _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                    if r == _("d"):
                        remove.append(f)
                else:
                    self.ui.debug(_("other deleted %s\n") % f)
                    remove.append(f) # other deleted it
            else:
                # file is created on branch or in working directory
                if force and f not in umap:
                    self.ui.debug(_("remote deleted %s, clobbering\n") % f)
                    remove.append(f)
                elif n == m1.get(f, nullid): # same as parent
                    if p2 == pa: # going backwards?
                        self.ui.debug(_("remote deleted %s\n") % f)
                        remove.append(f)
                    else:
                        self.ui.debug(_("local modified %s, keeping\n") % f)
                else:
                    self.ui.debug(_("working dir created %s, keeping\n") % f)

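# The expression mode = ((a^b) | (a^c)) ^ a used twice above implements the
# comment "if we changed or they changed, change in merge" for the
# executable bit: starting from the ancestor's bit a, it flips the bit
# whenever the working dir (b) or the remote (c) differs from the ancestor.
# A standalone check of that identity (the function name is ours, not
# Mercurial's):

```python
def merge_exec_bit(a, b, c):
    # a: ancestor's exec bit, b: local (working dir) bit, c: remote bit.
    # (a^b) | (a^c) is 1 exactly when either side changed the bit;
    # xor-ing that back into a applies the change to the ancestor value.
    return ((a ^ b) | (a ^ c)) ^ a

# Neither side changed the bit: keep the ancestor's value.
assert merge_exec_bit(0, 0, 0) == 0
assert merge_exec_bit(1, 1, 1) == 1
# One side set the bit: the merge takes the change.
assert merge_exec_bit(0, 1, 0) == 1
assert merge_exec_bit(0, 0, 1) == 1
# One side cleared it: the merge drops the bit.
assert merge_exec_bit(1, 0, 1) == 0
```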
        for f, n in m2.iteritems():
            if choose and not choose(f):
                continue
            if f[0] == "/":
                continue
            if f in ma and n != ma[f]:
                r = _("k")
                if not force and (linear_path or allow):
                    r = self.ui.prompt(
                        (_("remote changed %s which local deleted\n") % f) +
                        _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
                if r == _("k"):
                    get[f] = n
            elif f not in ma:
                self.ui.debug(_("remote created %s\n") % f)
                get[f] = n
            else:
                if force or p2 == pa: # going backwards?
                    self.ui.debug(_("local deleted %s, recreating\n") % f)
                    get[f] = n
                else:
                    self.ui.debug(_("local deleted %s\n") % f)

        del mw, m1, m2, ma

        if force:
            for f in merge:
                get[f] = merge[f][1]
            merge = {}

        if linear_path or force:
            # we don't need to do any magic, just jump to the new rev
            branch_merge = False
            p1, p2 = p2, nullid
        else:
            if not allow:
                self.ui.status(_("this update spans a branch"
                                 " affecting the following files:\n"))
                fl = merge.keys() + get.keys()
                fl.sort()
                for f in fl:
                    cf = ""
                    if f in merge:
                        cf = _(" (resolve)")
                    self.ui.status(" %s%s\n" % (f, cf))
                self.ui.warn(_("aborting update spanning branches!\n"))
                self.ui.status(_("(use update -m to merge across branches"
                                 " or -C to lose changes)\n"))
                return 1
            branch_merge = True

        # get the files we don't need to change
        files = get.keys()
        files.sort()
        for f in files:
            if f[0] == "/":
                continue
            self.ui.note(_("getting %s\n") % f)
            t = self.file(f).read(get[f])
            self.wwrite(f, t)
            util.set_exec(self.wjoin(f), mf2[f])
            if moddirstate:
                if branch_merge:
                    self.dirstate.update([f], 'n', st_mtime=-1)
                else:
                    self.dirstate.update([f], 'n')

        # merge the tricky bits
        files = merge.keys()
        files.sort()
        for f in files:
            self.ui.status(_("merging %s\n") % f)
            my, other, flag = merge[f]
            ret = self.merge3(f, my, other)
            if ret:
                err = True
            util.set_exec(self.wjoin(f), flag)
            if moddirstate:
                if branch_merge:
                    # We've done a branch merge, mark this file as merged
                    # so that we properly record the merger later
                    self.dirstate.update([f], 'm')
                else:
                    # We've update-merged a locally modified file, so
                    # we set the dirstate to emulate a normal checkout
                    # of that file some time in the past. Thus our
                    # merge will appear as a normal local file
                    # modification.
                    f_len = len(self.file(f).read(other))
                    self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)

        remove.sort()
        for f in remove:
            self.ui.note(_("removing %s\n") % f)
            try:
                util.unlink(self.wjoin(f))
            except OSError, inst:
                if inst.errno != errno.ENOENT:
                    self.ui.warn(_("update failed to remove %s: %s!\n") %
                                 (f, inst.strerror))
        if moddirstate:
            if branch_merge:
                self.dirstate.update(remove, 'r')
            else:
                self.dirstate.forget(remove)

        if moddirstate:
            self.dirstate.setparents(p1, p2)
        return err

    def merge3(self, fn, my, other):
        """perform a 3-way merge in the working directory"""

        def temp(prefix, node):
            pre = "%s~%s." % (os.path.basename(fn), prefix)
            (fd, name) = tempfile.mkstemp("", pre)
            f = os.fdopen(fd, "wb")
            self.wwrite(fn, fl.read(node), f)
            f.close()
            return name

        fl = self.file(fn)
        base = fl.ancestor(my, other)
        a = self.wjoin(fn)
        b = temp("base", base)
        c = temp("other", other)

        self.ui.note(_("resolving %s\n") % fn)
        self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
                      (fn, short(my), short(other), short(base)))

        cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
               or "hgmerge")
        r = os.system('%s "%s" "%s" "%s"' % (cmd, a, b, c))
        if r:
            self.ui.warn(_("merging %s failed!\n") % fn)

        os.unlink(b)
        os.unlink(c)
        return r

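# merge3 selects its external merge program by falling back through three
# sources: the HGMERGE environment variable, the ui.merge config entry,
# then the literal "hgmerge" script. Python's `or` chaining implements
# exactly that first-non-empty-wins lookup; a sketch with a plain dict
# standing in for Mercurial's ui config object (the helper name is ours):

```python
def pick_merge_tool(environ, config):
    # First non-empty value wins: environment, then config, then default.
    return environ.get("HGMERGE") or config.get("merge") or "hgmerge"

assert pick_merge_tool({"HGMERGE": "kdiff3"}, {}) == "kdiff3"
assert pick_merge_tool({}, {"merge": "meld"}) == "meld"
assert pick_merge_tool({}, {}) == "hgmerge"
```

# Note that an empty-string setting falls through to the next source, which
# matches how `or` treats falsy values.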
    def verify(self):
        filelinkrevs = {}
        filenodes = {}
        changesets = revisions = files = 0
        errors = [0]
        neededmanifests = {}

        def err(msg):
            self.ui.warn(msg + "\n")
            errors[0] += 1

        def checksize(obj, name):
            d = obj.checksize()
            if d[0]:
                err(_("%s data length off by %d bytes") % (name, d[0]))
            if d[1]:
                err(_("%s index contains %d extra bytes") % (name, d[1]))

        seen = {}
        self.ui.status(_("checking changesets\n"))
        checksize(self.changelog, "changelog")

        for i in range(self.changelog.count()):
            changesets += 1
            n = self.changelog.node(i)
            l = self.changelog.linkrev(n)
            if l != i:
                err(_("incorrect link (%d) for changeset revision %d") % (l, i))
            if n in seen:
                err(_("duplicate changeset at revision %d") % i)
            seen[n] = 1

            for p in self.changelog.parents(n):
                if p not in self.changelog.nodemap:
                    err(_("changeset %s has unknown parent %s") %
                        (short(n), short(p)))
            try:
                changes = self.changelog.read(n)
            except KeyboardInterrupt:
                self.ui.warn(_("interrupted"))
                raise
            except Exception, inst:
                err(_("unpacking changeset %s: %s") % (short(n), inst))

            neededmanifests[changes[0]] = n

            for f in changes[3]:
                filelinkrevs.setdefault(f, []).append(i)

        seen = {}
        self.ui.status(_("checking manifests\n"))
        checksize(self.manifest, "manifest")

        for i in range(self.manifest.count()):
            n = self.manifest.node(i)
            l = self.manifest.linkrev(n)

            if l < 0 or l >= self.changelog.count():
                err(_("bad manifest link (%d) at revision %d") % (l, i))

            if n in neededmanifests:
                del neededmanifests[n]

            if n in seen:
                err(_("duplicate manifest at revision %d") % i)

            seen[n] = 1

            for p in self.manifest.parents(n):
                if p not in self.manifest.nodemap:
                    err(_("manifest %s has unknown parent %s") %
                        (short(n), short(p)))

            try:
                delta = mdiff.patchtext(self.manifest.delta(n))
            except KeyboardInterrupt:
                self.ui.warn(_("interrupted"))
                raise
            except Exception, inst:
                err(_("unpacking manifest %s: %s") % (short(n), inst))

            ff = [ l.split('\0') for l in delta.splitlines() ]
            for f, fn in ff:
                filenodes.setdefault(f, {})[bin(fn[:40])] = 1

        self.ui.status(_("crosschecking files in changesets and manifests\n"))

        for m, c in neededmanifests.items():
            err(_("Changeset %s refers to unknown manifest %s") %
                (short(m), short(c)))
        del neededmanifests

1784 for f in filenodes:
1785 for f in filenodes:
1785 if f not in filelinkrevs:
1786 if f not in filelinkrevs:
1786 err(_("file %s in manifest but not in changesets") % f)
1787 err(_("file %s in manifest but not in changesets") % f)
1787
1788
1788 for f in filelinkrevs:
1789 for f in filelinkrevs:
1789 if f not in filenodes:
1790 if f not in filenodes:
1790 err(_("file %s in changeset but not in manifest") % f)
1791 err(_("file %s in changeset but not in manifest") % f)
1791
1792
1792 self.ui.status(_("checking files\n"))
1793 self.ui.status(_("checking files\n"))
1793 ff = filenodes.keys()
1794 ff = filenodes.keys()
1794 ff.sort()
1795 ff.sort()
1795 for f in ff:
1796 for f in ff:
1796 if f == "/dev/null":
1797 if f == "/dev/null":
1797 continue
1798 continue
1798 files += 1
1799 files += 1
1799 fl = self.file(f)
1800 fl = self.file(f)
1800 checksize(fl, f)
1801 checksize(fl, f)
1801
1802
1802 nodes = {nullid: 1}
1803 nodes = {nullid: 1}
1803 seen = {}
1804 seen = {}
1804 for i in range(fl.count()):
1805 for i in range(fl.count()):
1805 revisions += 1
1806 revisions += 1
1806 n = fl.node(i)
1807 n = fl.node(i)
1807
1808
1808 if n in seen:
1809 if n in seen:
1809 err(_("%s: duplicate revision %d") % (f, i))
1810 err(_("%s: duplicate revision %d") % (f, i))
1810 if n not in filenodes[f]:
1811 if n not in filenodes[f]:
1811 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
1812 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
1812 else:
1813 else:
1813 del filenodes[f][n]
1814 del filenodes[f][n]
1814
1815
1815 flr = fl.linkrev(n)
1816 flr = fl.linkrev(n)
1816 if flr not in filelinkrevs[f]:
1817 if flr not in filelinkrevs[f]:
1817 err(_("%s:%s points to unexpected changeset %d")
1818 err(_("%s:%s points to unexpected changeset %d")
1818 % (f, short(n), flr))
1819 % (f, short(n), flr))
1819 else:
1820 else:
1820 filelinkrevs[f].remove(flr)
1821 filelinkrevs[f].remove(flr)
1821
1822
1822 # verify contents
1823 # verify contents
1823 try:
1824 try:
1824 t = fl.read(n)
1825 t = fl.read(n)
1825 except KeyboardInterrupt:
1826 except KeyboardInterrupt:
1826 self.ui.warn(_("interrupted"))
1827 self.ui.warn(_("interrupted"))
1827 raise
1828 raise
1828 except Exception, inst:
1829 except Exception, inst:
1829 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
1830 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
1830
1831
1831 # verify parents
1832 # verify parents
1832 (p1, p2) = fl.parents(n)
1833 (p1, p2) = fl.parents(n)
1833 if p1 not in nodes:
1834 if p1 not in nodes:
1834 err(_("file %s:%s unknown parent 1 %s") %
1835 err(_("file %s:%s unknown parent 1 %s") %
1835 (f, short(n), short(p1)))
1836 (f, short(n), short(p1)))
                if p2 not in nodes:
                    err(_("file %s:%s unknown parent 2 %s") %
                        (f, short(n), short(p2)))
                nodes[n] = 1

            # cross-check
            for node in filenodes[f]:
                err(_("node %s in manifests not in %s") % (hex(node), f))

        self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
                       (files, changesets, revisions))

        if errors[0]:
            self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
            return 1
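The cross-check above works by ticking off bookkeeping entries: file nodes collected from the manifests are deleted as the matching filelog revisions are verified, and whatever is left over on either side is an integrity error. A minimal standalone sketch of that pattern (not Mercurial code; the names `crosscheck`, `manifest_filenodes`, and `filelog_nodes` are invented for illustration):

```python
def crosscheck(manifest_filenodes, filelog_nodes):
    """Return error messages for nodes present in the manifests but
    missing from the filelogs, or vice versa."""
    errors = []
    # copy so the caller's mapping is not mutated
    pending = {f: dict(nodes) for f, nodes in manifest_filenodes.items()}
    for f, nodes in filelog_nodes.items():
        for n in nodes:
            if f in pending and n in pending[f]:
                del pending[f][n]          # accounted for
            else:
                errors.append("%s: %s not in manifests" % (f, n))
    # anything still pending was promised by a manifest but never verified
    for f, nodes in pending.items():
        for n in nodes:
            errors.append("node %s in manifests not in %s" % (n, f))
    return errors
```

Deleting matched entries from a copy, rather than comparing two full sets at the end, mirrors the verify routine: it reports mismatches in both directions with a single pass over each side.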