Hook fixups...
mpm@selenic.com
r1316:b650bfdf default
@@ -1,229 +1,231 @@
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from up to three files, if they
exist. The names of these files depend on the system on which
Mercurial is installed.

(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Options in this global configuration file apply to all Mercurial
  commands executed by any user in any directory.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
  Per-user configuration options that apply to all Mercurial commands,
  no matter from which directory they are run. Values in this file
  override global settings.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Values in
  this file override global and per-user settings.
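
For example, if the per-user file ($HOME/.hgrc) contains

  [ui]
  editor = vi

and the per-repository file (<repo>/.hg/hgrc) contains

  [ui]
  editor = emacs

then commands run inside that repository will use "emacs", because the
per-repository value overrides the per-user one (the editor values are
only illustrative).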

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

    [spam]
    eggs=ham
    green=
       eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.
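
For example, assuming the "%(name)s" interpolation form provided by the
underlying parser, a value can be built from another value defined in
the same section (the names and URL below are only illustrative):

    [paths]
    server = http://hg.example.com
    main = %(server)s/main
    stable = %(server)s/stable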

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

hooks::
  Commands that get automatically executed by various actions such as
  starting or finishing a commit.
  changegroup;;
-    Run after a changegroup has been added via push or pull.
+    Run after a changegroup has been added via push or pull. Passed
+    the ID of the first new changeset in $NODE.
  commit;;
-    Run after a changeset has been created. Passed the ID of the newly
-    created changeset.
+    Run after a changeset has been created or for each changeset
+    pulled. Passed the ID of the newly created changeset in
+    environment variable $NODE.
  precommit;;
    Run before starting a commit. Exit status 0 allows the commit to
    proceed. Non-zero status will cause the commit to fail.
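
  For example, a simple hooks setup might look like this (the commands
  shown are only placeholders; any shell command can be used):

    [hooks]
    # mail a note whenever new changesets arrive via push or pull
    changegroup = echo "new changesets starting at $NODE" | mail -s hg commits@example.com
    # refuse to start a commit while .commit_lock exists
    precommit = test ! -f .commit_lock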

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.
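
  For example (the host name and credentials are only placeholders):

    [http_proxy]
    host = myproxy:8000
    no = localhost,intranet.example.com
    user = alice
    passwd = secret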

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository.
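
  For example, the following gives the names "main" and "backup" to two
  repository locations (both values are only illustrative):

    [paths]
    main = http://hg.example.com/main
    backup = /mnt/backup/main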

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  interactive;;
    Whether to allow prompting the user. True or False. Default is True.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is 'hg'.
  ssh;;
    Command to use for SSH connections. Default is 'ssh'.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
  verbose;;
    Increase the amount of output printed. True or False. Default is False.
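
  For example (the values shown are one possible setup, not defaults):

    [ui]
    editor = emacs
    merge = hgmerge
    username = Fred Widget <fred@example.com>
    verbose = True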


web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allowbz2;;
    Whether to allow .tar.bz2 downloading of repo revisions. Default is false.
  allowgz;;
    Whether to allow .tar.gz downloading of repo revisions. Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allowzip;;
    Whether to allow .zip downloading of repo revisions. Default is false.
    This feature creates temporary files.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.
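
  For example (the name, description and port are only placeholders):

    [web]
    name = my-project
    description = experimental branch of my-project
    port = 8080
    allowzip = true
    maxchanges = 20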


AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).
@@ -1,1426 +1,1431 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import struct, os, util
import filelog, manifest, changelog, dirstate, repo
from node import *
from demandload import *
demandload(globals(), "re lock transaction tempfile stat mdiff")

class localrepository:
    def __init__(self, ui, path=None, create=0):
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp: raise repo.RepoError("no repo found")
            path = p
        self.path = os.path.join(path, ".hg")

        if not create and not os.path.isdir(self.path):
            raise repo.RepoError("repository %s not found" % self.path)

        self.root = os.path.abspath(path)
        self.ui = ui
        self.opener = util.opener(self.path)
        self.wopener = util.opener(self.root)
        self.manifest = manifest.manifest(self.opener)
        self.changelog = changelog.changelog(self.opener)
        self.tagscache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None

        if create:
            os.mkdir(self.path)
            os.mkdir(self.join("data"))

        self.dirstate = dirstate.dirstate(self.opener, ui, self.root)
        try:
            self.ui.readconfig(self.opener("hgrc"))
        except IOError: pass

    def hook(self, name, **args):
        s = self.ui.config("hooks", name)
        if s:
            self.ui.note("running hook %s: %s\n" % (name, s))
            old = {}
            for k, v in args.items():
                k = k.upper()
                old[k] = os.environ.get(k, None)
                os.environ[k] = v

            r = os.system(s)

            for k, v in old.items():
                if v != None:
                    os.environ[k] = v
                else:
                    del os.environ[k]

            if r:
                self.ui.warn("abort: %s hook failed with status %d!\n" %
                             (name, r))
                return False
        return True

    def tags(self):
        '''return a mapping of tag to node'''
        if not self.tagscache:
            self.tagscache = {}
            def addtag(self, k, n):
                try:
                    bin_n = bin(n)
                except TypeError:
                    bin_n = ''
                self.tagscache[k.strip()] = bin_n

            try:
                # read each head of the tags file, ending with the tip
                # and add each tag found to the map, with "newer" ones
                # taking precedence
                fl = self.file(".hgtags")
                h = fl.heads()
                h.reverse()
                for r in h:
                    for l in fl.read(r).splitlines():
                        if l:
                            n, k = l.split(" ", 1)
                            addtag(self, k, n)
            except KeyError:
                pass

            try:
                f = self.opener("localtags")
                for l in f:
                    n, k = l.split(" ", 1)
                    addtag(self, k, n)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r,t,n))
        l.sort()
        return [(t,n) for r,t,n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t,n in self.tags().items():
                self.nodetagscache.setdefault(n,[]).append(t)
        return self.nodetagscache.get(node, [])

    def lookup(self, key):
        try:
            return self.tags()[key]
        except KeyError:
            try:
                return self.changelog.lookup(key)
            except:
                raise repo.RepoError("unknown revision '%s'" % key)

    def dev(self):
        return os.stat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/': f = f[1:]
        return filelog.filelog(self.opener, f)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
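        '''read a working directory file, applying any matching [encode] filters'''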
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
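        '''write data to a working directory file (or to fd if given), applying any matching [decode] filters'''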
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher("", "/", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        # save dirstate for undo
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        def after():
            util.rename(self.join("journal"), self.join("undo"))
            util.rename(self.join("journal.dirstate"),
                        self.join("undo.dirstate"))

        return transaction.transaction(self.ui.warn, self.opener,
                                       self.join("journal"), after)

    def recover(self):
        lock = self.lock()
        if os.path.exists(self.join("journal")):
            self.ui.status("rolling back interrupted transaction\n")
            return transaction.rollback(self.opener, self.join("journal"))
        else:
            self.ui.warn("no interrupted transaction available\n")

    def undo(self):
        lock = self.lock()
        if os.path.exists(self.join("undo")):
            self.ui.status("rolling back last transaction\n")
            transaction.rollback(self.opener, self.join("undo"))
            self.dirstate = None
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
        else:
            self.ui.warn("no undo information available\n")

    def lock(self, wait=1):
        try:
            return lock.lock(self.join("lock"), 0)
        except lock.LockHeld, inst:
            if wait:
                self.ui.warn("waiting for lock held by %s\n" % inst.args[0])
                return lock.lock(self.join("lock"), wait)
            raise inst

    def rawcommit(self, files, text, user, date, p1=None, p2=None):
        orig_parent = self.dirstate.parents()[0] or nullid
        p1 = p1 or self.dirstate.parents()[0] or nullid
        p2 = p2 or self.dirstate.parents()[1] or nullid
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])
        changed = []

        if orig_parent == p1:
            update_dirstate = 1
        else:
            update_dirstate = 0

        tr = self.transaction()
        mm = m1.copy()
        mfm = mf1.copy()
        linkrev = self.changelog.count()
        for f in files:
            try:
                t = self.wread(f)
                tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
                r = self.file(f)
                mfm[f] = tm

                fp1 = m1.get(f, nullid)
                fp2 = m2.get(f, nullid)

                # is the same revision on two branches of a merge?
                if fp2 == fp1:
                    fp2 = nullid

                if fp2 != nullid:
                    # is one parent an ancestor of the other?
                    fpa = r.ancestor(fp1, fp2)
                    if fpa == fp1:
                        fp1, fp2 = fp2, nullid
                    elif fpa == fp2:
                        fp2 = nullid

                # is the file unmodified from the parent?
                if t == r.read(fp1):
                    # record the proper existing parent in manifest
                    # no need to add a revision
                    mm[f] = fp1
                    continue

                mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
                changed.append(f)
                if update_dirstate:
                    self.dirstate.update([f], "n")
            except IOError:
                try:
                    del mm[f]
                    del mfm[f]
                    if update_dirstate:
                        self.dirstate.forget([f])
                except:
                    # deleted from p2?
                    pass

        mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
        user = user or self.ui.username()
        n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
        tr.close()
        if update_dirstate:
            self.dirstate.setparents(n, nullid)

    def commit(self, files = None, text = "", user = None, date = None,
               match = util.always, force=False):
        commit = []
        remove = []
        changed = []

        if files:
            for f in files:
                s = self.dirstate.state(f)
                if s in 'nmai':
                    commit.append(f)
                elif s == 'r':
                    remove.append(f)
                else:
                    self.ui.warn("%s not tracked!\n" % f)
        else:
            (c, a, d, u) = self.changes(match=match)
            commit = c + a
            remove = d

        p1, p2 = self.dirstate.parents()
        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0])
        mf1 = self.manifest.readflags(c1[0])
        m2 = self.manifest.read(c2[0])

        if not commit and not remove and not force and p2 == nullid:
            self.ui.status("nothing changed\n")
            return None

        if not self.hook("precommit"):
            return None

        lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
                t = self.wread(f)
            except IOError:
                self.ui.warn("trouble committing %s!\n" % f)
                raise

            r = self.file(f)

            meta = {}
            cp = self.dirstate.copied(f)
            if cp:
                meta["copy"] = cp
                meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
                self.ui.debug(" %s: copy %s:%s\n" % (f, cp, meta["copyrev"]))
                fp1, fp2 = nullid, nullid
            else:
                fp1 = m1.get(f, nullid)
                fp2 = m2.get(f, nullid)

            # is the same revision on two branches of a merge?
            if fp2 == fp1:
                fp2 = nullid

            if fp2 != nullid:
                # is one parent an ancestor of the other?
                fpa = r.ancestor(fp1, fp2)
                if fpa == fp1:
                    fp1, fp2 = fp2, nullid
                elif fpa == fp2:
                    fp2 = nullid

            # is the file unmodified from the parent?
            if not meta and t == r.read(fp1):
                # record the proper existing parent in manifest
                # no need to add a revision
                new[f] = fp1
                continue

            new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
            # remember what we've added so that we can later calculate
            # the files to pull from a set of changesets
            changed.append(f)

        # update manifest
        m1.update(new)
        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
                               (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        if not text:
            edittext = ""
            if p2 != nullid:
                edittext += "HG: branch merge\n"
            edittext += "\n" + "HG: manifest hash %s\n" % hex(mn)
            edittext += "".join(["HG: changed %s\n" % f for f in changed])
            edittext += "".join(["HG: removed %s\n" % f for f in remove])
            if not changed and not remove:
                edittext += "HG: no files changed\n"
            edittext = self.ui.edit(edittext)
            if not edittext.rstrip():
                return None
            text = edittext

        user = user or self.ui.username()
        n = self.changelog.add(mn, changed, text, tr, p1, p2, user, date)
        tr.close()

        self.dirstate.setparents(n)
        self.dirstate.update(new, "n")
        self.dirstate.forget(remove)

        if not self.hook("commit", node=hex(n)):
            return None
        return n

    def walk(self, node=None, files=[], match=util.always):
        if node:
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                if match(fn): yield 'm', fn
        else:
            for src, fn in self.dirstate.walk(files, match):
                yield src, fn

    def changes(self, node1 = None, node2 = None, files = [],
                match = util.always):
        mf2, u = None, []

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            t2 = self.file(fn).read(mf.get(fn, nullid))
            return cmp(t1, t2)

        def mfmatches(node):
            mf = dict(self.manifest.read(node))
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        # are we comparing the working directory?
        if not node2:
            l, c, a, d, u = self.dirstate.changes(files, match)

            # are we comparing working dir against its parent?
            if not node1:
                if l:
                    # do a full compare of any files that might have changed
                    change = self.changelog.read(self.dirstate.parents()[0])
                    mf2 = mfmatches(change[0])
                    for f in l:
                        if fcmp(f, mf2):
                            c.append(f)

                for l in c, a, d, u:
                    l.sort()

                return (c, a, d, u)

        # are we comparing working dir against non-tip?
        # generate a pseudo-manifest for the working dir
        if not node2:
            if not mf2:
                change = self.changelog.read(self.dirstate.parents()[0])
                mf2 = mfmatches(change[0])
            for f in a + c + l:
                mf2[f] = ""
            for f in d:
                if f in mf2: del mf2[f]
        else:
            change = self.changelog.read(node2)
            mf2 = mfmatches(change[0])

        # flush lists from dirstate before comparing manifests
        c, a = [], []

        change = self.changelog.read(node1)
        mf1 = mfmatches(change[0])

        for fn in mf2:
            if mf1.has_key(fn):
                if mf1[fn] != mf2[fn]:
                    if mf2[fn] != "" or fcmp(fn, mf1):
                        c.append(fn)
                del mf1[fn]
            else:
                a.append(fn)

        d = mf1.keys()

        for l in c, a, d, u:
            l.sort()

        return (c, a, d, u)

    def add(self, list):
        for f in list:
            p = self.wjoin(f)
            if not os.path.exists(p):
                self.ui.warn("%s does not exist!\n" % f)
            elif not os.path.isfile(p):
                self.ui.warn("%s not added: only files supported currently\n" % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn("%s already tracked!\n" % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list):
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn("%s not added!\n" % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list):
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn("%s still exists!\n" % f)
            elif self.dirstate.state(f) == 'a':
                self.ui.warn("%s never committed!\n" % f)
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn("%s not tracked!\n" % f)
            else:
                self.dirstate.update([f], "r")

    def copy(self, source, dest):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn("%s does not exist!\n" % dest)
        elif not os.path.isfile(p):
            self.ui.warn("copy failed: %s is not a file\n" % dest)
        else:
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self):
        return self.changelog.heads()

    # branchlookup returns a dict giving a list of branches for
    # each head. A branch is defined as the tag of a node or
    # the branch of the node's parents. If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13. Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents. It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [ h for h in heads ]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head. The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited. This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes: nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while n:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
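        '''for each (top, bottom) pair, return the nodes reached from top
        by following first parents, sampled at exponentially growing
        distances (1, 2, 4, ...) and stopping before bottom; used by the
        binary search in findincoming to narrow down a branch range'''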
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def newer(self, nodes):
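        '''return the given nodes and all of their descendants in the
        changelog, in revision order'''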
700 m = {}
700 m = {}
701 nl = []
701 nl = []
702 pm = {}
702 pm = {}
703 cl = self.changelog
703 cl = self.changelog
704 t = l = cl.count()
704 t = l = cl.count()
705
705
706 # find the lowest numbered node
706 # find the lowest numbered node
707 for n in nodes:
707 for n in nodes:
708 l = min(l, cl.rev(n))
708 l = min(l, cl.rev(n))
709 m[n] = 1
709 m[n] = 1
710
710
711 for i in xrange(l, t):
711 for i in xrange(l, t):
712 n = cl.node(i)
712 n = cl.node(i)
713 if n in m: # explicitly listed
713 if n in m: # explicitly listed
714 pm[n] = 1
714 pm[n] = 1
715 nl.append(n)
715 nl.append(n)
716 continue
716 continue
717 for p in cl.parents(n):
717 for p in cl.parents(n):
718 if p in pm: # parent listed
718 if p in pm: # parent listed
719 pm[n] = 1
719 pm[n] = 1
720 nl.append(n)
720 nl.append(n)
721 break
721 break
722
722
723 return nl
723 return nl
724
724
725 def findincoming(self, remote, base=None, heads=None):
725 def findincoming(self, remote, base=None, heads=None):
726 m = self.changelog.nodemap
726 m = self.changelog.nodemap
727 search = []
727 search = []
728 fetch = {}
728 fetch = {}
729 seen = {}
729 seen = {}
730 seenbranch = {}
730 seenbranch = {}
731 if base == None:
731 if base == None:
732 base = {}
732 base = {}
733
733
734 # assume we're closer to the tip than the root
734 # assume we're closer to the tip than the root
735 # and start by examining the heads
735 # and start by examining the heads
736 self.ui.status("searching for changes\n")
736 self.ui.status("searching for changes\n")
737
737
738 if not heads:
738 if not heads:
739 heads = remote.heads()
739 heads = remote.heads()
740
740
741 unknown = []
741 unknown = []
742 for h in heads:
742 for h in heads:
743 if h not in m:
743 if h not in m:
744 unknown.append(h)
744 unknown.append(h)
745 else:
745 else:
746 base[h] = 1
746 base[h] = 1
747
747
748 if not unknown:
748 if not unknown:
749 return None
749 return None
750
750
751 rep = {}
751 rep = {}
752 reqcnt = 0
752 reqcnt = 0
753
753
754 # search through remote branches
754 # search through remote branches
755 # a 'branch' here is a linear segment of history, with four parts:
755 # a 'branch' here is a linear segment of history, with four parts:
756 # head, root, first parent, second parent
756 # head, root, first parent, second parent
757 # (a branch always has two parents (or none) by definition)
757 # (a branch always has two parents (or none) by definition)
758 unknown = remote.branches(unknown)
758 unknown = remote.branches(unknown)
759 while unknown:
759 while unknown:
760 r = []
760 r = []
761 while unknown:
761 while unknown:
762 n = unknown.pop(0)
762 n = unknown.pop(0)
763 if n[0] in seen:
763 if n[0] in seen:
764 continue
764 continue
765
765
766 self.ui.debug("examining %s:%s\n" % (short(n[0]), short(n[1])))
766 self.ui.debug("examining %s:%s\n" % (short(n[0]), short(n[1])))
767 if n[0] == nullid:
767 if n[0] == nullid:
768 break
768 break
769 if n in seenbranch:
769 if n in seenbranch:
770 self.ui.debug("branch already found\n")
770 self.ui.debug("branch already found\n")
771 continue
771 continue
772 if n[1] and n[1] in m: # do we know the base?
772 if n[1] and n[1] in m: # do we know the base?
773 self.ui.debug("found incomplete branch %s:%s\n"
773 self.ui.debug("found incomplete branch %s:%s\n"
774 % (short(n[0]), short(n[1])))
774 % (short(n[0]), short(n[1])))
775 search.append(n) # schedule branch range for scanning
775 search.append(n) # schedule branch range for scanning
776 seenbranch[n] = 1
776 seenbranch[n] = 1
777 else:
777 else:
778 if n[1] not in seen and n[1] not in fetch:
778 if n[1] not in seen and n[1] not in fetch:
779 if n[2] in m and n[3] in m:
779 if n[2] in m and n[3] in m:
780 self.ui.debug("found new changeset %s\n" %
780 self.ui.debug("found new changeset %s\n" %
781 short(n[1]))
781 short(n[1]))
782 fetch[n[1]] = 1 # earliest unknown
782 fetch[n[1]] = 1 # earliest unknown
783 base[n[2]] = 1 # latest known
783 base[n[2]] = 1 # latest known
784 continue
784 continue
785
785
786 for a in n[2:4]:
786 for a in n[2:4]:
787 if a not in rep:
787 if a not in rep:
788 r.append(a)
788 r.append(a)
789 rep[a] = 1
789 rep[a] = 1
790
790
791 seen[n[0]] = 1
791 seen[n[0]] = 1
792
792
793 if r:
793 if r:
794 reqcnt += 1
794 reqcnt += 1
795 self.ui.debug("request %d: %s\n" %
795 self.ui.debug("request %d: %s\n" %
796 (reqcnt, " ".join(map(short, r))))
796 (reqcnt, " ".join(map(short, r))))
797 for p in range(0, len(r), 10):
797 for p in range(0, len(r), 10):
798 for b in remote.branches(r[p:p+10]):
798 for b in remote.branches(r[p:p+10]):
799 self.ui.debug("received %s:%s\n" %
799 self.ui.debug("received %s:%s\n" %
800 (short(b[0]), short(b[1])))
800 (short(b[0]), short(b[1])))
801 if b[0] in m:
801 if b[0] in m:
802 self.ui.debug("found base node %s\n" % short(b[0]))
802 self.ui.debug("found base node %s\n" % short(b[0]))
803 base[b[0]] = 1
803 base[b[0]] = 1
804 elif b[0] not in seen:
804 elif b[0] not in seen:
805 unknown.append(b)
805 unknown.append(b)
806
806
807 # do binary search on the branches we found
807 # do binary search on the branches we found
808 while search:
808 while search:
809 n = search.pop(0)
809 n = search.pop(0)
810 reqcnt += 1
810 reqcnt += 1
811 l = remote.between([(n[0], n[1])])[0]
811 l = remote.between([(n[0], n[1])])[0]
812 l.append(n[1])
812 l.append(n[1])
813 p = n[0]
813 p = n[0]
814 f = 1
814 f = 1
815 for i in l:
815 for i in l:
816 self.ui.debug("narrowing %d:%d %s\n" % (f, len(l), short(i)))
816 self.ui.debug("narrowing %d:%d %s\n" % (f, len(l), short(i)))
817 if i in m:
817 if i in m:
818 if f <= 2:
818 if f <= 2:
819 self.ui.debug("found new branch changeset %s\n" %
819 self.ui.debug("found new branch changeset %s\n" %
820 short(p))
820 short(p))
821 fetch[p] = 1
821 fetch[p] = 1
822 base[i] = 1
822 base[i] = 1
823 else:
823 else:
824 self.ui.debug("narrowed branch search to %s:%s\n"
824 self.ui.debug("narrowed branch search to %s:%s\n"
825 % (short(p), short(i)))
825 % (short(p), short(i)))
826 search.append((p, i))
826 search.append((p, i))
827 break
827 break
828 p, f = i, f * 2
828 p, f = i, f * 2
829
829
830 # sanity check our fetch list
830 # sanity check our fetch list
831 for f in fetch.keys():
831 for f in fetch.keys():
832 if f in m:
832 if f in m:
833 raise repo.RepoError("already have changeset " + short(f))
833 raise repo.RepoError("already have changeset " + short(f))
834
834
835 if base.keys() == [nullid]:
835 if base.keys() == [nullid]:
836 self.ui.warn("warning: pulling from an unrelated repository!\n")
836 self.ui.warn("warning: pulling from an unrelated repository!\n")
837
837
838 self.ui.note("found new changesets starting at " +
838 self.ui.note("found new changesets starting at " +
839 " ".join([short(f) for f in fetch]) + "\n")
839 " ".join([short(f) for f in fetch]) + "\n")
840
840
841 self.ui.debug("%d total queries\n" % reqcnt)
841 self.ui.debug("%d total queries\n" % reqcnt)
842
842
843 return fetch.keys()
843 return fetch.keys()
844
844
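The loop above narrows each incomplete branch down to its earliest unknown changeset: remote.between() returns nodes sampled at exponentially growing distances from the head, so every round either pins the boundary down (f <= 2 means the last known sample is adjacent to p) or shrinks the segment that still needs scanning. Below is a minimal, self-contained sketch of the same narrowing idea; the list-based between() and earliest_unknown() are illustrations only, not Mercurial's wire protocol.

    def between(segment, top, bottom):
        # nodes strictly between top and bottom, at distances 1, 2, 4, ... from top
        i, j = segment.index(top), segment.index(bottom)
        out, d = [], 1
        while i + d < j:
            out.append(segment[i + d])
            d *= 2
        return out

    def earliest_unknown(segment, known):
        # narrow a head..root run (newest first, head unknown, root known)
        # down to the oldest node missing from `known`
        top, bottom = segment[0], segment[-1]
        while True:
            samples = between(segment, top, bottom) + [bottom]
            p, f = top, 1
            for i in samples:
                if i in known:
                    if f <= 2:              # p and i are adjacent: p is the answer
                        return p
                    top, bottom = p, i      # keep narrowing inside (p, i)
                    break
                p, f = i, f * 2

    # toy usage: revisions 9..0, everything below 4 already known locally
    seg = list(range(9, -1, -1))
    print(earliest_unknown(seg, known=set(range(4))))    # -> 4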
845 def findoutgoing(self, remote, base=None, heads=None):
845 def findoutgoing(self, remote, base=None, heads=None):
846 if base is None:
846 if base is None:
847 base = {}
847 base = {}
848 self.findincoming(remote, base, heads)
848 self.findincoming(remote, base, heads)
849
849
850 self.ui.debug("common changesets up to "
850 self.ui.debug("common changesets up to "
851 + " ".join(map(short, base.keys())) + "\n")
851 + " ".join(map(short, base.keys())) + "\n")
852
852
853 remain = dict.fromkeys(self.changelog.nodemap)
853 remain = dict.fromkeys(self.changelog.nodemap)
854
854
855 # prune everything remote has from the tree
855 # prune everything remote has from the tree
856 del remain[nullid]
856 del remain[nullid]
857 remove = base.keys()
857 remove = base.keys()
858 while remove:
858 while remove:
859 n = remove.pop(0)
859 n = remove.pop(0)
860 if n in remain:
860 if n in remain:
861 del remain[n]
861 del remain[n]
862 for p in self.changelog.parents(n):
862 for p in self.changelog.parents(n):
863 remove.append(p)
863 remove.append(p)
864
864
865 # find every node whose parents have been pruned
865 # find every node whose parents have been pruned
866 subset = []
866 subset = []
867 for n in remain:
867 for n in remain:
868 p1, p2 = self.changelog.parents(n)
868 p1, p2 = self.changelog.parents(n)
869 if p1 not in remain and p2 not in remain:
869 if p1 not in remain and p2 not in remain:
870 subset.append(n)
870 subset.append(n)
871
871
872 # this is the set of all roots we have to push
872 # this is the set of all roots we have to push
873 return subset
873 return subset
874
874
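findoutgoing() above answers the question from the other direction: start from what the remote already has (base), prune those nodes and all their ancestors out of the local graph, and report the roots of whatever survives, since those are the changesets a push must send first. A small stand-alone sketch of that pruning on a toy parent map; the dict-of-tuples graph and the names here are made up for illustration.

    NULL = None   # stands in for nullid

    def outgoing_roots(parents, remote_has):
        # `parents` maps node -> (p1, p2); `remote_has` is the set of nodes
        # the remote already knows about (the `base` dict above)
        remain = set(parents)
        remain.discard(NULL)
        # prune everything the remote has, plus all of its ancestors
        remove = list(remote_has)
        while remove:
            n = remove.pop()
            if n in remain:
                remain.remove(n)
                remove.extend(p for p in parents[n] if p is not NULL)
        # whatever is left is purely local; its roots are the nodes
        # whose parents were both pruned away
        return [n for n in remain
                if parents[n][0] not in remain and parents[n][1] not in remain]

    # toy history: a - b - c, with d branching off b; remote already has b
    history = {"a": (NULL, NULL), "b": ("a", NULL),
               "c": ("b", NULL), "d": ("b", NULL)}
    print(sorted(outgoing_roots(history, remote_has=set(["b"]))))   # -> ['c', 'd']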
875 def pull(self, remote):
875 def pull(self, remote):
876 lock = self.lock()
876 lock = self.lock()
877
877
878 # if we have an empty repo, fetch everything
878 # if we have an empty repo, fetch everything
879 if self.changelog.tip() == nullid:
879 if self.changelog.tip() == nullid:
880 self.ui.status("requesting all changes\n")
880 self.ui.status("requesting all changes\n")
881 fetch = [nullid]
881 fetch = [nullid]
882 else:
882 else:
883 fetch = self.findincoming(remote)
883 fetch = self.findincoming(remote)
884
884
885 if not fetch:
885 if not fetch:
886 self.ui.status("no changes found\n")
886 self.ui.status("no changes found\n")
887 return 1
887 return 1
888
888
889 cg = remote.changegroup(fetch)
889 cg = remote.changegroup(fetch)
890 return self.addchangegroup(cg)
890 return self.addchangegroup(cg)
891
891
892 def push(self, remote, force=False):
892 def push(self, remote, force=False):
893 lock = remote.lock()
893 lock = remote.lock()
894
894
895 base = {}
895 base = {}
896 heads = remote.heads()
896 heads = remote.heads()
897 inc = self.findincoming(remote, base, heads)
897 inc = self.findincoming(remote, base, heads)
898 if not force and inc:
898 if not force and inc:
899 self.ui.warn("abort: unsynced remote changes!\n")
899 self.ui.warn("abort: unsynced remote changes!\n")
900 self.ui.status("(did you forget to sync? use push -f to force)\n")
900 self.ui.status("(did you forget to sync? use push -f to force)\n")
901 return 1
901 return 1
902
902
903 update = self.findoutgoing(remote, base)
903 update = self.findoutgoing(remote, base)
904 if not update:
904 if not update:
905 self.ui.status("no changes found\n")
905 self.ui.status("no changes found\n")
906 return 1
906 return 1
907 elif not force:
907 elif not force:
908 if len(heads) < len(self.changelog.heads()):
908 if len(heads) < len(self.changelog.heads()):
909 self.ui.warn("abort: push creates new remote branches!\n")
909 self.ui.warn("abort: push creates new remote branches!\n")
910 self.ui.status("(did you forget to merge?" +
910 self.ui.status("(did you forget to merge?" +
911 " use push -f to force)\n")
911 " use push -f to force)\n")
912 return 1
912 return 1
913
913
914 cg = self.changegroup(update)
914 cg = self.changegroup(update)
915 return remote.addchangegroup(cg)
915 return remote.addchangegroup(cg)
916
916
917 def changegroup(self, basenodes):
917 def changegroup(self, basenodes):
918 genread = util.chunkbuffer
918 genread = util.chunkbuffer
919
919
920 def gengroup():
920 def gengroup():
921 nodes = self.newer(basenodes)
921 nodes = self.newer(basenodes)
922
922
923 # construct the link map
923 # construct the link map
924 linkmap = {}
924 linkmap = {}
925 for n in nodes:
925 for n in nodes:
926 linkmap[self.changelog.rev(n)] = n
926 linkmap[self.changelog.rev(n)] = n
927
927
928 # construct a list of all changed files
928 # construct a list of all changed files
929 changed = {}
929 changed = {}
930 for n in nodes:
930 for n in nodes:
931 c = self.changelog.read(n)
931 c = self.changelog.read(n)
932 for f in c[3]:
932 for f in c[3]:
933 changed[f] = 1
933 changed[f] = 1
934 changed = changed.keys()
934 changed = changed.keys()
935 changed.sort()
935 changed.sort()
936
936
937 # the changegroup is changesets + manifests + all file revs
937 # the changegroup is changesets + manifests + all file revs
938 revs = [ self.changelog.rev(n) for n in nodes ]
938 revs = [ self.changelog.rev(n) for n in nodes ]
939
939
940 for y in self.changelog.group(linkmap): yield y
940 for y in self.changelog.group(linkmap): yield y
941 for y in self.manifest.group(linkmap): yield y
941 for y in self.manifest.group(linkmap): yield y
942 for f in changed:
942 for f in changed:
943 yield struct.pack(">l", len(f) + 4) + f
943 yield struct.pack(">l", len(f) + 4) + f
944 g = self.file(f).group(linkmap)
944 g = self.file(f).group(linkmap)
945 for y in g:
945 for y in g:
946 yield y
946 yield y
947
947
948 yield struct.pack(">l", 0)
948 yield struct.pack(">l", 0)
949
949
950 return genread(gengroup())
950 return genread(gengroup())
951
951
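Both ends of the changegroup stream rely on the same framing: every chunk is preceded by a 4-byte big-endian length that counts the 4 header bytes themselves, and a bare zero header terminates a group, which is why getchunk() below treats l <= 4 as end-of-group. A minimal sketch of just that framing, using only the standard library; writechunks/readchunks are illustrative names, not Mercurial functions.

    import struct
    from io import BytesIO

    def writechunks(out, chunks):
        # length-prefix each chunk; the 4 header bytes are counted in the length
        for c in chunks:
            out.write(struct.pack(">l", len(c) + 4) + c)
        out.write(struct.pack(">l", 0))       # zero header ends the group

    def readchunks(src):
        # yield chunks until the zero terminator, mirroring getchunk()
        while True:
            d = src.read(4)
            if not d:
                return
            l = struct.unpack(">l", d)[0]
            if l <= 4:                        # terminator (or empty chunk)
                return
            yield src.read(l - 4)

    buf = BytesIO()
    writechunks(buf, [b"foo", b"some file data"])
    buf.seek(0)
    print(list(readchunks(buf)))              # -> [b'foo', b'some file data']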
952 def addchangegroup(self, source):
952 def addchangegroup(self, source):
953
953
954 def getchunk():
954 def getchunk():
955 d = source.read(4)
955 d = source.read(4)
956 if not d: return ""
956 if not d: return ""
957 l = struct.unpack(">l", d)[0]
957 l = struct.unpack(">l", d)[0]
958 if l <= 4: return ""
958 if l <= 4: return ""
959 d = source.read(l - 4)
959 d = source.read(l - 4)
960 if len(d) < l - 4:
960 if len(d) < l - 4:
961 raise repo.RepoError("premature EOF reading chunk" +
961 raise repo.RepoError("premature EOF reading chunk" +
962 " (got %d bytes, expected %d)"
962 " (got %d bytes, expected %d)"
963 % (len(d), l - 4))
963 % (len(d), l - 4))
964 return d
964 return d
965
965
966 def getgroup():
966 def getgroup():
967 while 1:
967 while 1:
968 c = getchunk()
968 c = getchunk()
969 if not c: break
969 if not c: break
970 yield c
970 yield c
971
971
972 def csmap(x):
972 def csmap(x):
973 self.ui.debug("add changeset %s\n" % short(x))
973 self.ui.debug("add changeset %s\n" % short(x))
974 return self.changelog.count()
974 return self.changelog.count()
975
975
976 def revmap(x):
976 def revmap(x):
977 return self.changelog.rev(x)
977 return self.changelog.rev(x)
978
978
979 if not source: return
979 if not source: return
980 changesets = files = revisions = 0
980 changesets = files = revisions = 0
981
981
982 tr = self.transaction()
982 tr = self.transaction()
983
983
984 oldheads = len(self.changelog.heads())
984 oldheads = len(self.changelog.heads())
985
985
986 # pull off the changeset group
986 # pull off the changeset group
987 self.ui.status("adding changesets\n")
987 self.ui.status("adding changesets\n")
988 co = self.changelog.tip()
988 co = self.changelog.tip()
989 cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
989 cn = self.changelog.addgroup(getgroup(), csmap, tr, 1) # unique
990 changesets = self.changelog.rev(cn) - self.changelog.rev(co)
990 cnr, cor = map(self.changelog.rev, (cn, co))
991 changesets = cnr - cor
991
992
992 # pull off the manifest group
993 # pull off the manifest group
993 self.ui.status("adding manifests\n")
994 self.ui.status("adding manifests\n")
994 mm = self.manifest.tip()
995 mm = self.manifest.tip()
995 mo = self.manifest.addgroup(getgroup(), revmap, tr)
996 mo = self.manifest.addgroup(getgroup(), revmap, tr)
996
997
997 # process the files
998 # process the files
998 self.ui.status("adding file changes\n")
999 self.ui.status("adding file changes\n")
999 while 1:
1000 while 1:
1000 f = getchunk()
1001 f = getchunk()
1001 if not f: break
1002 if not f: break
1002 self.ui.debug("adding %s revisions\n" % f)
1003 self.ui.debug("adding %s revisions\n" % f)
1003 fl = self.file(f)
1004 fl = self.file(f)
1004 o = fl.count()
1005 o = fl.count()
1005 n = fl.addgroup(getgroup(), revmap, tr)
1006 n = fl.addgroup(getgroup(), revmap, tr)
1006 revisions += fl.count() - o
1007 revisions += fl.count() - o
1007 files += 1
1008 files += 1
1008
1009
1009 newheads = len(self.changelog.heads())
1010 newheads = len(self.changelog.heads())
1010 heads = ""
1011 heads = ""
1011 if oldheads and newheads > oldheads:
1012 if oldheads and newheads > oldheads:
1012 heads = " (+%d heads)" % (newheads - oldheads)
1013 heads = " (+%d heads)" % (newheads - oldheads)
1013
1014
1014 self.ui.status(("added %d changesets" +
1015 self.ui.status(("added %d changesets" +
1015 " with %d changes to %d files%s\n")
1016 " with %d changes to %d files%s\n")
1016 % (changesets, revisions, files, heads))
1017 % (changesets, revisions, files, heads))
1017
1018
1018 tr.close()
1019 tr.close()
1019
1020
1020 if not self.hook("changegroup"):
1021 if not self.hook("changegroup", node=hex(self.changelog.node(cor+1))):
1022 self.ui.warn("abort: changegroup hook returned failure!\n")
1021 return 1
1023 return 1
1022
1024
1025 for i in range(cor + 1, cnr + 1):
1026 self.hook("commit", node=hex(self.changelog.node(i)))
1027
1023 return
1028 return
1024
1029
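The hook calls added in this revision follow a simple protocol: after the transaction closes, one "changegroup" hook is run with the first newly added changeset and may report failure for the whole group, and then a "commit" hook is run once for every changeset in the group. The toy dispatcher below only illustrates that control flow; the registry dict, run_hooks() and the node strings are invented for the example and are not Mercurial's hook machinery, which runs commands configured in hgrc.

    def run_hooks(hooks, new_nodes):
        # `hooks` maps a hook name to a callable returning True on success;
        # `new_nodes` lists the changesets just added, oldest first
        if not new_nodes:
            return
        changegroup = hooks.get("changegroup", lambda **kw: True)
        commit = hooks.get("commit", lambda **kw: True)
        # the whole group is announced once, keyed on its first changeset
        if not changegroup(node=new_nodes[0]):
            print("abort: changegroup hook returned failure!")
            return 1
        # then every changeset in the group is announced individually
        for n in new_nodes:
            commit(node=n)

    def notify(node):
        print("notify %s" % node)
        return True

    run_hooks({"changegroup": lambda node: True, "commit": notify},
              ["8c3d8a9e", "f1d2c3b4"])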
1025 def update(self, node, allow=False, force=False, choose=None,
1030 def update(self, node, allow=False, force=False, choose=None,
1026 moddirstate=True):
1031 moddirstate=True):
1027 pl = self.dirstate.parents()
1032 pl = self.dirstate.parents()
1028 if not force and pl[1] != nullid:
1033 if not force and pl[1] != nullid:
1029 self.ui.warn("aborting: outstanding uncommitted merges\n")
1034 self.ui.warn("aborting: outstanding uncommitted merges\n")
1030 return 1
1035 return 1
1031
1036
1032 p1, p2 = pl[0], node
1037 p1, p2 = pl[0], node
1033 pa = self.changelog.ancestor(p1, p2)
1038 pa = self.changelog.ancestor(p1, p2)
1034 m1n = self.changelog.read(p1)[0]
1039 m1n = self.changelog.read(p1)[0]
1035 m2n = self.changelog.read(p2)[0]
1040 m2n = self.changelog.read(p2)[0]
1036 man = self.manifest.ancestor(m1n, m2n)
1041 man = self.manifest.ancestor(m1n, m2n)
1037 m1 = self.manifest.read(m1n)
1042 m1 = self.manifest.read(m1n)
1038 mf1 = self.manifest.readflags(m1n)
1043 mf1 = self.manifest.readflags(m1n)
1039 m2 = self.manifest.read(m2n)
1044 m2 = self.manifest.read(m2n)
1040 mf2 = self.manifest.readflags(m2n)
1045 mf2 = self.manifest.readflags(m2n)
1041 ma = self.manifest.read(man)
1046 ma = self.manifest.read(man)
1042 mfa = self.manifest.readflags(man)
1047 mfa = self.manifest.readflags(man)
1043
1048
1044 (c, a, d, u) = self.changes()
1049 (c, a, d, u) = self.changes()
1045
1050
1046 # is this a jump, or a merge? i.e. is there a linear path
1051 # is this a jump, or a merge? i.e. is there a linear path
1047 # from p1 to p2?
1052 # from p1 to p2?
1048 linear_path = (pa == p1 or pa == p2)
1053 linear_path = (pa == p1 or pa == p2)
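# (illustration) on a linear history 0-1-2-3-4-5, updating from rev 2 to
# rev 5 gives ancestor(2, 5) == 2 == p1, so this is a plain jump; between
# two diverged heads the ancestor equals neither parent, so it is treated
# as a merge and the branch handling further down applies.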
1049
1054
1050 # resolve the manifest to determine which files
1055 # resolve the manifest to determine which files
1051 # we care about merging
1056 # we care about merging
1052 self.ui.note("resolving manifests\n")
1057 self.ui.note("resolving manifests\n")
1053 self.ui.debug(" force %s allow %s moddirstate %s linear %s\n" %
1058 self.ui.debug(" force %s allow %s moddirstate %s linear %s\n" %
1054 (force, allow, moddirstate, linear_path))
1059 (force, allow, moddirstate, linear_path))
1055 self.ui.debug(" ancestor %s local %s remote %s\n" %
1060 self.ui.debug(" ancestor %s local %s remote %s\n" %
1056 (short(man), short(m1n), short(m2n)))
1061 (short(man), short(m1n), short(m2n)))
1057
1062
1058 merge = {}
1063 merge = {}
1059 get = {}
1064 get = {}
1060 remove = []
1065 remove = []
1061
1066
1062 # construct a working dir manifest
1067 # construct a working dir manifest
1063 mw = m1.copy()
1068 mw = m1.copy()
1064 mfw = mf1.copy()
1069 mfw = mf1.copy()
1065 umap = dict.fromkeys(u)
1070 umap = dict.fromkeys(u)
1066
1071
1067 for f in a + c + u:
1072 for f in a + c + u:
1068 mw[f] = ""
1073 mw[f] = ""
1069 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1074 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1070
1075
1071 for f in d:
1076 for f in d:
1072 if f in mw: del mw[f]
1077 if f in mw: del mw[f]
1073
1078
1074 # If we're jumping between revisions (as opposed to merging),
1079 # If we're jumping between revisions (as opposed to merging),
1075 # and if neither the working directory nor the target rev has
1080 # and if neither the working directory nor the target rev has
1076 # the file, then we need to remove it from the dirstate, to
1081 # the file, then we need to remove it from the dirstate, to
1077 # prevent the dirstate from listing the file when it is no
1082 # prevent the dirstate from listing the file when it is no
1078 # longer in the manifest.
1083 # longer in the manifest.
1079 if moddirstate and linear_path and f not in m2:
1084 if moddirstate and linear_path and f not in m2:
1080 self.dirstate.forget((f,))
1085 self.dirstate.forget((f,))
1081
1086
1082 # Compare manifests
1087 # Compare manifests
1083 for f, n in mw.iteritems():
1088 for f, n in mw.iteritems():
1084 if choose and not choose(f): continue
1089 if choose and not choose(f): continue
1085 if f in m2:
1090 if f in m2:
1086 s = 0
1091 s = 0
1087
1092
1088 # is the wfile new since m1, and match m2?
1093 # is the wfile new since m1, and match m2?
1089 if f not in m1:
1094 if f not in m1:
1090 t1 = self.wread(f)
1095 t1 = self.wread(f)
1091 t2 = self.file(f).read(m2[f])
1096 t2 = self.file(f).read(m2[f])
1092 if cmp(t1, t2) == 0:
1097 if cmp(t1, t2) == 0:
1093 n = m2[f]
1098 n = m2[f]
1094 del t1, t2
1099 del t1, t2
1095
1100
1096 # are files different?
1101 # are files different?
1097 if n != m2[f]:
1102 if n != m2[f]:
1098 a = ma.get(f, nullid)
1103 a = ma.get(f, nullid)
1099 # are both different from the ancestor?
1104 # are both different from the ancestor?
1100 if n != a and m2[f] != a:
1105 if n != a and m2[f] != a:
1101 self.ui.debug(" %s versions differ, resolve\n" % f)
1106 self.ui.debug(" %s versions differ, resolve\n" % f)
1102 # merge executable bits
1107 # merge executable bits
1103 # "if we changed or they changed, change in merge"
1108 # "if we changed or they changed, change in merge"
1104 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1109 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1105 mode = ((a^b) | (a^c)) ^ a
1110 mode = ((a^b) | (a^c)) ^ a
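# (illustration of the flag merge above) with ancestor a=0, local b=1 and
# remote c=0: ((0^1) | (0^0)) ^ 0 = 1, so a bit changed on either side
# survives; if neither side changed it (a == b == c), both XORs are 0 and
# the ancestor's value is kept.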
1106 merge[f] = (m1.get(f, nullid), m2[f], mode)
1111 merge[f] = (m1.get(f, nullid), m2[f], mode)
1107 s = 1
1112 s = 1
1108 # are we clobbering?
1113 # are we clobbering?
1109 # is remote's version newer?
1114 # is remote's version newer?
1110 # or are we going back in time?
1115 # or are we going back in time?
1111 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1116 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1112 self.ui.debug(" remote %s is newer, get\n" % f)
1117 self.ui.debug(" remote %s is newer, get\n" % f)
1113 get[f] = m2[f]
1118 get[f] = m2[f]
1114 s = 1
1119 s = 1
1115 elif f in umap:
1120 elif f in umap:
1116 # this unknown file is the same as the checkout
1121 # this unknown file is the same as the checkout
1117 get[f] = m2[f]
1122 get[f] = m2[f]
1118
1123
1119 if not s and mfw[f] != mf2[f]:
1124 if not s and mfw[f] != mf2[f]:
1120 if force:
1125 if force:
1121 self.ui.debug(" updating permissions for %s\n" % f)
1126 self.ui.debug(" updating permissions for %s\n" % f)
1122 util.set_exec(self.wjoin(f), mf2[f])
1127 util.set_exec(self.wjoin(f), mf2[f])
1123 else:
1128 else:
1124 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1129 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1125 mode = ((a^b) | (a^c)) ^ a
1130 mode = ((a^b) | (a^c)) ^ a
1126 if mode != b:
1131 if mode != b:
1127 self.ui.debug(" updating permissions for %s\n" % f)
1132 self.ui.debug(" updating permissions for %s\n" % f)
1128 util.set_exec(self.wjoin(f), mode)
1133 util.set_exec(self.wjoin(f), mode)
1129 del m2[f]
1134 del m2[f]
1130 elif f in ma:
1135 elif f in ma:
1131 if n != ma[f]:
1136 if n != ma[f]:
1132 r = "d"
1137 r = "d"
1133 if not force and (linear_path or allow):
1138 if not force and (linear_path or allow):
1134 r = self.ui.prompt(
1139 r = self.ui.prompt(
1135 (" local changed %s which remote deleted\n" % f) +
1140 (" local changed %s which remote deleted\n" % f) +
1136 "(k)eep or (d)elete?", "[kd]", "k")
1141 "(k)eep or (d)elete?", "[kd]", "k")
1137 if r == "d":
1142 if r == "d":
1138 remove.append(f)
1143 remove.append(f)
1139 else:
1144 else:
1140 self.ui.debug("other deleted %s\n" % f)
1145 self.ui.debug("other deleted %s\n" % f)
1141 remove.append(f) # other deleted it
1146 remove.append(f) # other deleted it
1142 else:
1147 else:
1143 # file is created on branch or in working directory
1148 # file is created on branch or in working directory
1144 if force and f not in umap:
1149 if force and f not in umap:
1145 self.ui.debug("remote deleted %s, clobbering\n" % f)
1150 self.ui.debug("remote deleted %s, clobbering\n" % f)
1146 remove.append(f)
1151 remove.append(f)
1147 elif n == m1.get(f, nullid): # same as parent
1152 elif n == m1.get(f, nullid): # same as parent
1148 if p2 == pa: # going backwards?
1153 if p2 == pa: # going backwards?
1149 self.ui.debug("remote deleted %s\n" % f)
1154 self.ui.debug("remote deleted %s\n" % f)
1150 remove.append(f)
1155 remove.append(f)
1151 else:
1156 else:
1152 self.ui.debug("local modified %s, keeping\n" % f)
1157 self.ui.debug("local modified %s, keeping\n" % f)
1153 else:
1158 else:
1154 self.ui.debug("working dir created %s, keeping\n" % f)
1159 self.ui.debug("working dir created %s, keeping\n" % f)
1155
1160
1156 for f, n in m2.iteritems():
1161 for f, n in m2.iteritems():
1157 if choose and not choose(f): continue
1162 if choose and not choose(f): continue
1158 if f[0] == "/": continue
1163 if f[0] == "/": continue
1159 if f in ma and n != ma[f]:
1164 if f in ma and n != ma[f]:
1160 r = "k"
1165 r = "k"
1161 if not force and (linear_path or allow):
1166 if not force and (linear_path or allow):
1162 r = self.ui.prompt(
1167 r = self.ui.prompt(
1163 ("remote changed %s which local deleted\n" % f) +
1168 ("remote changed %s which local deleted\n" % f) +
1164 "(k)eep or (d)elete?", "[kd]", "k")
1169 "(k)eep or (d)elete?", "[kd]", "k")
1165 if r == "k": get[f] = n
1170 if r == "k": get[f] = n
1166 elif f not in ma:
1171 elif f not in ma:
1167 self.ui.debug("remote created %s\n" % f)
1172 self.ui.debug("remote created %s\n" % f)
1168 get[f] = n
1173 get[f] = n
1169 else:
1174 else:
1170 if force or p2 == pa: # going backwards?
1175 if force or p2 == pa: # going backwards?
1171 self.ui.debug("local deleted %s, recreating\n" % f)
1176 self.ui.debug("local deleted %s, recreating\n" % f)
1172 get[f] = n
1177 get[f] = n
1173 else:
1178 else:
1174 self.ui.debug("local deleted %s\n" % f)
1179 self.ui.debug("local deleted %s\n" % f)
1175
1180
1176 del mw, m1, m2, ma
1181 del mw, m1, m2, ma
1177
1182
1178 if force:
1183 if force:
1179 for f in merge:
1184 for f in merge:
1180 get[f] = merge[f][1]
1185 get[f] = merge[f][1]
1181 merge = {}
1186 merge = {}
1182
1187
1183 if linear_path or force:
1188 if linear_path or force:
1184 # we don't need to do any magic, just jump to the new rev
1189 # we don't need to do any magic, just jump to the new rev
1185 branch_merge = False
1190 branch_merge = False
1186 p1, p2 = p2, nullid
1191 p1, p2 = p2, nullid
1187 else:
1192 else:
1188 if not allow:
1193 if not allow:
1189 self.ui.status("this update spans a branch" +
1194 self.ui.status("this update spans a branch" +
1190 " affecting the following files:\n")
1195 " affecting the following files:\n")
1191 fl = merge.keys() + get.keys()
1196 fl = merge.keys() + get.keys()
1192 fl.sort()
1197 fl.sort()
1193 for f in fl:
1198 for f in fl:
1194 cf = ""
1199 cf = ""
1195 if f in merge: cf = " (resolve)"
1200 if f in merge: cf = " (resolve)"
1196 self.ui.status(" %s%s\n" % (f, cf))
1201 self.ui.status(" %s%s\n" % (f, cf))
1197 self.ui.warn("aborting update spanning branches!\n")
1202 self.ui.warn("aborting update spanning branches!\n")
1198 self.ui.status("(use update -m to merge across branches" +
1203 self.ui.status("(use update -m to merge across branches" +
1199 " or -C to lose changes)\n")
1204 " or -C to lose changes)\n")
1200 return 1
1205 return 1
1201 branch_merge = True
1206 branch_merge = True
1202
1207
1203 if moddirstate:
1208 if moddirstate:
1204 self.dirstate.setparents(p1, p2)
1209 self.dirstate.setparents(p1, p2)
1205
1210
1206 # get the files we don't need to change
1211 # get the files we don't need to change
1207 files = get.keys()
1212 files = get.keys()
1208 files.sort()
1213 files.sort()
1209 for f in files:
1214 for f in files:
1210 if f[0] == "/": continue
1215 if f[0] == "/": continue
1211 self.ui.note("getting %s\n" % f)
1216 self.ui.note("getting %s\n" % f)
1212 t = self.file(f).read(get[f])
1217 t = self.file(f).read(get[f])
1213 try:
1218 try:
1214 self.wwrite(f, t)
1219 self.wwrite(f, t)
1215 except IOError:
1220 except IOError:
1216 os.makedirs(os.path.dirname(self.wjoin(f)))
1221 os.makedirs(os.path.dirname(self.wjoin(f)))
1217 self.wwrite(f, t)
1222 self.wwrite(f, t)
1218 util.set_exec(self.wjoin(f), mf2[f])
1223 util.set_exec(self.wjoin(f), mf2[f])
1219 if moddirstate:
1224 if moddirstate:
1220 if branch_merge:
1225 if branch_merge:
1221 self.dirstate.update([f], 'n', st_mtime=-1)
1226 self.dirstate.update([f], 'n', st_mtime=-1)
1222 else:
1227 else:
1223 self.dirstate.update([f], 'n')
1228 self.dirstate.update([f], 'n')
1224
1229
1225 # merge the tricky bits
1230 # merge the tricky bits
1226 files = merge.keys()
1231 files = merge.keys()
1227 files.sort()
1232 files.sort()
1228 for f in files:
1233 for f in files:
1229 self.ui.status("merging %s\n" % f)
1234 self.ui.status("merging %s\n" % f)
1230 my, other, flag = merge[f]
1235 my, other, flag = merge[f]
1231 self.merge3(f, my, other)
1236 self.merge3(f, my, other)
1232 util.set_exec(self.wjoin(f), flag)
1237 util.set_exec(self.wjoin(f), flag)
1233 if moddirstate:
1238 if moddirstate:
1234 if branch_merge:
1239 if branch_merge:
1235 # We've done a branch merge, mark this file as merged
1240 # We've done a branch merge, mark this file as merged
1236 # so that we properly record the merger later
1241 # so that we properly record the merger later
1237 self.dirstate.update([f], 'm')
1242 self.dirstate.update([f], 'm')
1238 else:
1243 else:
1239 # We've update-merged a locally modified file, so
1244 # We've update-merged a locally modified file, so
1240 # we set the dirstate to emulate a normal checkout
1245 # we set the dirstate to emulate a normal checkout
1241 # of that file some time in the past. Thus our
1246 # of that file some time in the past. Thus our
1242 # merge will appear as a normal local file
1247 # merge will appear as a normal local file
1243 # modification.
1248 # modification.
1244 f_len = len(self.file(f).read(other))
1249 f_len = len(self.file(f).read(other))
1245 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1250 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1246
1251
1247 remove.sort()
1252 remove.sort()
1248 for f in remove:
1253 for f in remove:
1249 self.ui.note("removing %s\n" % f)
1254 self.ui.note("removing %s\n" % f)
1250 try:
1255 try:
1251 os.unlink(self.wjoin(f))
1256 os.unlink(self.wjoin(f))
1252 except OSError, inst:
1257 except OSError, inst:
1253 self.ui.warn("update failed to remove %s: %s!\n" % (f, inst))
1258 self.ui.warn("update failed to remove %s: %s!\n" % (f, inst))
1254 # try removing directories that might now be empty
1259 # try removing directories that might now be empty
1255 try: os.removedirs(os.path.dirname(self.wjoin(f)))
1260 try: os.removedirs(os.path.dirname(self.wjoin(f)))
1256 except: pass
1261 except: pass
1257 if moddirstate:
1262 if moddirstate:
1258 if branch_merge:
1263 if branch_merge:
1259 self.dirstate.update(remove, 'r')
1264 self.dirstate.update(remove, 'r')
1260 else:
1265 else:
1261 self.dirstate.forget(remove)
1266 self.dirstate.forget(remove)
1262
1267
1263 def merge3(self, fn, my, other):
1268 def merge3(self, fn, my, other):
1264 """perform a 3-way merge in the working directory"""
1269 """perform a 3-way merge in the working directory"""
1265
1270
1266 def temp(prefix, node):
1271 def temp(prefix, node):
1267 pre = "%s~%s." % (os.path.basename(fn), prefix)
1272 pre = "%s~%s." % (os.path.basename(fn), prefix)
1268 (fd, name) = tempfile.mkstemp("", pre)
1273 (fd, name) = tempfile.mkstemp("", pre)
1269 f = os.fdopen(fd, "wb")
1274 f = os.fdopen(fd, "wb")
1270 self.wwrite(fn, fl.read(node), f)
1275 self.wwrite(fn, fl.read(node), f)
1271 f.close()
1276 f.close()
1272 return name
1277 return name
1273
1278
1274 fl = self.file(fn)
1279 fl = self.file(fn)
1275 base = fl.ancestor(my, other)
1280 base = fl.ancestor(my, other)
1276 a = self.wjoin(fn)
1281 a = self.wjoin(fn)
1277 b = temp("base", base)
1282 b = temp("base", base)
1278 c = temp("other", other)
1283 c = temp("other", other)
1279
1284
1280 self.ui.note("resolving %s\n" % fn)
1285 self.ui.note("resolving %s\n" % fn)
1281 self.ui.debug("file %s: other %s ancestor %s\n" %
1286 self.ui.debug("file %s: other %s ancestor %s\n" %
1282 (fn, short(other), short(base)))
1287 (fn, short(other), short(base)))
1283
1288
1284 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
1289 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
1285 or "hgmerge")
1290 or "hgmerge")
1286 r = os.system("%s %s %s %s" % (cmd, a, b, c))
1291 r = os.system("%s %s %s %s" % (cmd, a, b, c))
1287 if r:
1292 if r:
1288 self.ui.warn("merging %s failed!\n" % fn)
1293 self.ui.warn("merging %s failed!\n" % fn)
1289
1294
1290 os.unlink(b)
1295 os.unlink(b)
1291 os.unlink(c)
1296 os.unlink(c)
1292
1297
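merge3() picks the external merge command with a three-step fallback: the HGMERGE environment variable wins, then the ui.merge value from hgrc, then the literal command "hgmerge". A tiny sketch of that lookup; the config dict and the kdiff3 value are placeholders, not anything read from a real repository.

    import os

    def merge_command(config, environ=os.environ):
        # first HGMERGE, then the [ui] merge setting, then the default
        return environ.get("HGMERGE") or config.get(("ui", "merge")) or "hgmerge"

    print(merge_command({("ui", "merge"): "kdiff3"}, environ={}))   # -> kdiff3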
1293 def verify(self):
1298 def verify(self):
1294 filelinkrevs = {}
1299 filelinkrevs = {}
1295 filenodes = {}
1300 filenodes = {}
1296 changesets = revisions = files = 0
1301 changesets = revisions = files = 0
1297 errors = 0
1302 errors = 0
1298
1303
1299 seen = {}
1304 seen = {}
1300 self.ui.status("checking changesets\n")
1305 self.ui.status("checking changesets\n")
1301 for i in range(self.changelog.count()):
1306 for i in range(self.changelog.count()):
1302 changesets += 1
1307 changesets += 1
1303 n = self.changelog.node(i)
1308 n = self.changelog.node(i)
1304 if n in seen:
1309 if n in seen:
1305 self.ui.warn("duplicate changeset at revision %d\n" % i)
1310 self.ui.warn("duplicate changeset at revision %d\n" % i)
1306 errors += 1
1311 errors += 1
1307 seen[n] = 1
1312 seen[n] = 1
1308
1313
1309 for p in self.changelog.parents(n):
1314 for p in self.changelog.parents(n):
1310 if p not in self.changelog.nodemap:
1315 if p not in self.changelog.nodemap:
1311 self.ui.warn("changeset %s has unknown parent %s\n" %
1316 self.ui.warn("changeset %s has unknown parent %s\n" %
1312 (short(n), short(p)))
1317 (short(n), short(p)))
1313 errors += 1
1318 errors += 1
1314 try:
1319 try:
1315 changes = self.changelog.read(n)
1320 changes = self.changelog.read(n)
1316 except Exception, inst:
1321 except Exception, inst:
1317 self.ui.warn("unpacking changeset %s: %s\n" % (short(n), inst))
1322 self.ui.warn("unpacking changeset %s: %s\n" % (short(n), inst))
1318 errors += 1
1323 errors += 1
1319
1324
1320 for f in changes[3]:
1325 for f in changes[3]:
1321 filelinkrevs.setdefault(f, []).append(i)
1326 filelinkrevs.setdefault(f, []).append(i)
1322
1327
1323 seen = {}
1328 seen = {}
1324 self.ui.status("checking manifests\n")
1329 self.ui.status("checking manifests\n")
1325 for i in range(self.manifest.count()):
1330 for i in range(self.manifest.count()):
1326 n = self.manifest.node(i)
1331 n = self.manifest.node(i)
1327 if n in seen:
1332 if n in seen:
1328 self.ui.warn("duplicate manifest at revision %d\n" % i)
1333 self.ui.warn("duplicate manifest at revision %d\n" % i)
1329 errors += 1
1334 errors += 1
1330 seen[n] = 1
1335 seen[n] = 1
1331
1336
1332 for p in self.manifest.parents(n):
1337 for p in self.manifest.parents(n):
1333 if p not in self.manifest.nodemap:
1338 if p not in self.manifest.nodemap:
1334 self.ui.warn("manifest %s has unknown parent %s\n" %
1339 self.ui.warn("manifest %s has unknown parent %s\n" %
1335 (short(n), short(p)))
1340 (short(n), short(p)))
1336 errors += 1
1341 errors += 1
1337
1342
1338 try:
1343 try:
1339 delta = mdiff.patchtext(self.manifest.delta(n))
1344 delta = mdiff.patchtext(self.manifest.delta(n))
1340 except KeyboardInterrupt:
1345 except KeyboardInterrupt:
1341 self.ui.warn("interrupted")
1346 self.ui.warn("interrupted")
1342 raise
1347 raise
1343 except Exception, inst:
1348 except Exception, inst:
1344 self.ui.warn("unpacking manifest %s: %s\n"
1349 self.ui.warn("unpacking manifest %s: %s\n"
1345 % (short(n), inst))
1350 % (short(n), inst))
1346 errors += 1
1351 errors += 1
1347
1352
1348 ff = [ l.split('\0') for l in delta.splitlines() ]
1353 ff = [ l.split('\0') for l in delta.splitlines() ]
1349 for f, fn in ff:
1354 for f, fn in ff:
1350 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
1355 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
1351
1356
1352 self.ui.status("crosschecking files in changesets and manifests\n")
1357 self.ui.status("crosschecking files in changesets and manifests\n")
1353 for f in filenodes:
1358 for f in filenodes:
1354 if f not in filelinkrevs:
1359 if f not in filelinkrevs:
1355 self.ui.warn("file %s in manifest but not in changesets\n" % f)
1360 self.ui.warn("file %s in manifest but not in changesets\n" % f)
1356 errors += 1
1361 errors += 1
1357
1362
1358 for f in filelinkrevs:
1363 for f in filelinkrevs:
1359 if f not in filenodes:
1364 if f not in filenodes:
1360 self.ui.warn("file %s in changeset but not in manifest\n" % f)
1365 self.ui.warn("file %s in changeset but not in manifest\n" % f)
1361 errors += 1
1366 errors += 1
1362
1367
1363 self.ui.status("checking files\n")
1368 self.ui.status("checking files\n")
1364 ff = filenodes.keys()
1369 ff = filenodes.keys()
1365 ff.sort()
1370 ff.sort()
1366 for f in ff:
1371 for f in ff:
1367 if f == "/dev/null": continue
1372 if f == "/dev/null": continue
1368 files += 1
1373 files += 1
1369 fl = self.file(f)
1374 fl = self.file(f)
1370 nodes = { nullid: 1 }
1375 nodes = { nullid: 1 }
1371 seen = {}
1376 seen = {}
1372 for i in range(fl.count()):
1377 for i in range(fl.count()):
1373 revisions += 1
1378 revisions += 1
1374 n = fl.node(i)
1379 n = fl.node(i)
1375
1380
1376 if n in seen:
1381 if n in seen:
1377 self.ui.warn("%s: duplicate revision %d\n" % (f, i))
1382 self.ui.warn("%s: duplicate revision %d\n" % (f, i))
1378 errors += 1
1383 errors += 1
1379
1384
1380 if n not in filenodes[f]:
1385 if n not in filenodes[f]:
1381 self.ui.warn("%s: %d:%s not in manifests\n"
1386 self.ui.warn("%s: %d:%s not in manifests\n"
1382 % (f, i, short(n)))
1387 % (f, i, short(n)))
1383 errors += 1
1388 errors += 1
1384 else:
1389 else:
1385 del filenodes[f][n]
1390 del filenodes[f][n]
1386
1391
1387 flr = fl.linkrev(n)
1392 flr = fl.linkrev(n)
1388 if flr not in filelinkrevs[f]:
1393 if flr not in filelinkrevs[f]:
1389 self.ui.warn("%s:%s points to unexpected changeset %d\n"
1394 self.ui.warn("%s:%s points to unexpected changeset %d\n"
1390 % (f, short(n), fl.linkrev(n)))
1395 % (f, short(n), fl.linkrev(n)))
1391 errors += 1
1396 errors += 1
1392 else:
1397 else:
1393 filelinkrevs[f].remove(flr)
1398 filelinkrevs[f].remove(flr)
1394
1399
1395 # verify contents
1400 # verify contents
1396 try:
1401 try:
1397 t = fl.read(n)
1402 t = fl.read(n)
1398 except Exception, inst:
1403 except Exception, inst:
1399 self.ui.warn("unpacking file %s %s: %s\n"
1404 self.ui.warn("unpacking file %s %s: %s\n"
1400 % (f, short(n), inst))
1405 % (f, short(n), inst))
1401 errors += 1
1406 errors += 1
1402
1407
1403 # verify parents
1408 # verify parents
1404 (p1, p2) = fl.parents(n)
1409 (p1, p2) = fl.parents(n)
1405 if p1 not in nodes:
1410 if p1 not in nodes:
1406 self.ui.warn("file %s:%s unknown parent 1 %s" %
1411 self.ui.warn("file %s:%s unknown parent 1 %s" %
1407 (f, short(n), short(p1)))
1412 (f, short(n), short(p1)))
1408 errors += 1
1413 errors += 1
1409 if p2 not in nodes:
1414 if p2 not in nodes:
1410 self.ui.warn("file %s:%s unknown parent 2 %s" %
1415 self.ui.warn("file %s:%s unknown parent 2 %s" %
1411 (f, short(n), short(p1)))
1416 (f, short(n), short(p1)))
1412 errors += 1
1417 errors += 1
1413 nodes[n] = 1
1418 nodes[n] = 1
1414
1419
1415 # cross-check
1420 # cross-check
1416 for node in filenodes[f]:
1421 for node in filenodes[f]:
1417 self.ui.warn("node %s in manifests not in %s\n"
1422 self.ui.warn("node %s in manifests not in %s\n"
1418 % (hex(node), f))
1423 % (hex(node), f))
1419 errors += 1
1424 errors += 1
1420
1425
1421 self.ui.status("%d files, %d changesets, %d total revisions\n" %
1426 self.ui.status("%d files, %d changesets, %d total revisions\n" %
1422 (files, changesets, revisions))
1427 (files, changesets, revisions))
1423
1428
1424 if errors:
1429 if errors:
1425 self.ui.warn("%d integrity errors encountered!\n" % errors)
1430 self.ui.warn("%d integrity errors encountered!\n" % errors)
1426 return 1
1431 return 1