@@ -1,588 +1,588 @@
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed. On Windows, the registry keys contain PATH-like strings;
every part must reference a Mercurial.ini file or be a directory where
*.rc files will be read.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) HKEY_LOCAL_MACHINE\SOFTWARE\Mercurial::
or::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, exactly one of these files is used, depending
  on whether the HOME environment variable is defined.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.
  On Unix, most of this file will be ignored if it doesn't belong
  to a trusted user or to a trusted group. See the documentation
  for the trusted section below for more details.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

Values may optionally contain format strings that refer to other
values in the same section, or to values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.
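
For instance, assuming Python ConfigParser-style "%(name)s" syntax for
the format strings mentioned above (the section and URLs here are only
illustrative), a value can be built from another value defined in the
same section:

  [paths]
  ; "base" is just a helper value reused below
  base = http://hg.example.com
  default = %(base)s/main
  stable = %(base)s/stable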

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects and may corrupt the contents of your files.
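
  For instance, a "tempfile:" filter could look like this; the
  "dosfix.exe" helper is a made-up name, standing in for any program
  that reads INFILE and writes the converted result to OUTFILE:

    [encode]
    # dosfix.exe is a placeholder, not a tool shipped with Mercurial
    **.dat = tempfile: dosfix.exe INFILE OUTFILE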

  The most common usage is for LF <-> CRLF translation on Windows.
  For this, use the "smart" converters which check for binary files:

    [extensions]
    hgext.win32text =
    [encode]
    ** = cleverencode:
    [decode]
    ** = cleverdecode:

  or if you only want to translate certain files:

    [extensions]
    hgext.win32text =
    [encode]
    **.txt = dumbencode:
    [decode]
    **.txt = dumbdecode:

defaults::
  Use the [defaults] section to define command defaults, i.e. the
  default options/arguments to pass to the specified commands.

  The following example makes 'hg log' run in verbose mode, and
  'hg status' show only the modified files, by default.

    [defaults]
    log = -v
    status = -m

  The actual commands, instead of their aliases, must be used when
  defining command defaults. The command defaults will also be
  applied to the aliases of the commands defined.

diff::
  Settings used when displaying diffs. They are all Boolean and
  default to False.
  git;;
    Use git extended diff format.
  nodates;;
    Don't include dates in diff headers.
  showfunc;;
    Show which function each change is in.
  ignorews;;
    Ignore white space when comparing lines.
  ignorewsamount;;
    Ignore changes in the amount of white space.
  ignoreblanklines;;
    Ignore changes whose lines are all blank.
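
  For example, to enable the git diff format and function context by
  default (the particular options chosen here are just illustrative):

    [diff]
    git = True
    showfunc = True
    nodates = True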

email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  to;;
    Optional. Comma-separated list of recipients' email addresses.
  cc;;
    Optional. Comma-separated list of carbon copy recipients'
    email addresses.
  bcc;;
    Optional. Comma-separated list of blind carbon copy
    recipients' email addresses. Cannot be set interactively.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, the value is used as the name of a
    program to run that acts like sendmail (takes a "-f" option for
    the sender, the list of recipients on the command line, and the
    message on stdin). Normally, setting this to "sendmail" or
    "/usr/sbin/sendmail" is enough to use sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

format::

  usestore;;
    Enable or disable the "store" repository format which improves
    compatibility with systems that fold case or otherwise mangle
    filenames. Enabled by default. Disabling this option will allow
    you to store longer filenames in some situations at the expense of
    compatibility.
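
  For example, to disable the store format (it is enabled by default):

    [format]
    usestore = False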

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give added
  useful information. For each hook below, the environment variables
  it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE. URL from
    which changes came is in $HG_URL.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE. The URL that was the source of the changes is in $HG_URL.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  post-<command>;;
    Run after successful invocations of the associated command. The
    contents of the command line are passed as $HG_ARGS and the result
    code in $HG_RESULT. Hook failure is ignored.
  pre-<command>;;
    Run before executing the associated command. The contents of the
    command line are passed as $HG_ARGS. If the hook returns failure,
    the command doesn't execute and Mercurial returns the failure code.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail. URL from which
    changes will come is in $HG_URL.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before collecting changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs before local pull,
    push (outbound) or bundle commands, but is not an effective
    safeguard there, since you could simply copy the files instead.
    Source of operation is in $HG_SOURCE. If "serve", operation is
    happening on behalf of remote ssh or http repository. If "push",
    "pull" or "bundle", operation is happening on behalf of repository
    on same system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail. URL that was source of
    changes is in $HG_URL.
  pretxncommit;;
    Run after a changeset has been created but the transaction not yet
    committed. Changeset is visible to hook program. This lets you
    validate commit message and changes. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the transaction to
    be rolled back. ID of changeset is in $HG_NODE. Parent changeset
    IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
    failed (e.g. because conflicts not resolved), $HG_ERROR=1.

  Note: it is generally better to use standard hooks rather than the
  generic pre- and post- command hooks as they are guaranteed to be
  called in the appropriate contexts for influencing transactions.
  Also, hooks like "commit" will be called in all contexts that
  generate a commit (e.g. tag) and not just the commit command.

  Note 2: Environment variables with empty values may not be passed to
  hooks on platforms like Windows. For instance, $HG_PARENT2 will
  not be available under Windows for non-merge changesets while being
  set to an empty value under Unix-like systems.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  If a Python hook returns a "true" value or raises an exception, this
  is treated as failure of the hook.
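
  As an illustration, a minimal Python hook might look like the sketch
  below. The module name "checks", the function name and the length
  limit are made up for this example, and it assumes the
  repo.changectx() API for looking up the new changeset:

    [hooks]
    pretxncommit.checkmessage = python:checks.checkmessage

  with checks.py somewhere on Python's search path containing:

    def checkmessage(ui, repo, hooktype, node=None, **kwargs):
        # reject commits whose summary line is longer than 80 characters
        summary = repo.changectx(node).description().splitlines()[0]
        if len(summary) > 80:
            ui.warn('commit summary line is longer than 80 characters\n')
            return True    # a true return value marks the hook as failed
        return False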

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.
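
  For example (the proxy host and credentials are placeholders):

    [http_proxy]
    host = myproxy:8000
    no = localhost, intranet.example.com
    user = proxyuser
    passwd = secret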

smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Host name of mail server, e.g. "mail.example.com".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. The hostname that the sender can use to identify itself
    to the MTA.
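
  For example (the mail server and credentials are placeholders):

    [smtp]
    host = mail.example.com
    port = 25
    tls = True
    username = hguser
    password = secret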

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is set to the repository from which the current repository
    was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.
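
  For example (the locations below are placeholders); a symbolic name
  can then be used on the command line, as in 'hg pull upstream':

    [paths]
    default = http://hg.example.com/main
    default-push = ssh://hg@example.com/main
    upstream = http://hg.example.com/upstream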

server::
  Controls generic server settings.
  uncompressed;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.
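
  For example, to allow uncompressed clones on a LAN-facing server:

    [server]
    uncompressed = True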

trusted::
  For security reasons, Mercurial will not use the settings in
  the .hg/hgrc file from a repository if it doesn't belong to a
  trusted user or to a trusted group. The main exception is the
  web interface, which automatically uses some safe settings, since
  it's common to serve repositories from different users.

  This section specifies what users and groups are trusted. The
  current user is always trusted. To trust everybody, list a user
  or a group with name "*".

  users;;
    Comma-separated list of trusted users.
  groups;;
    Comma-separated list of trusted groups.
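
  For example (the user and group names are placeholders):

    [trusted]
    users = alice, bob
    groups = hgdevs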

ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  fallbackencoding;;
    Encoding to try if it's not possible to decode the changelog using
    UTF-8. Default is ISO-8859-1.
  ignore;;
    A file to read per-user ignore patterns from. This file should be in
    the same format as a repository-wide .hgignore file. This option
    supports hook syntax, so if you want to specify multiple ignore
    files, you can do so by setting something like
    "ignore.other = ~/.hgignore2". For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Allow prompting the user. True or False. Default is True.
  logtemplate;;
    Template string for commands that print changesets.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  patch;;
    Command to use to apply patches. Looks for 'gpatch' or 'patch' in
    PATH if unset.
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is 'hg'.
  report_untrusted;;
    Warn if a .hg/hgrc file is ignored due to not being owned by a
    trusted user or group. True or False. Default is True.
  slash;;
    Display paths using a slash ("/") as the path separator. This only
    makes a difference on systems where the default path separator is not
    the slash character (e.g. Windows uses the backslash character ("\")).
    Default is False.
  ssh;;
    Command to use for SSH connections. Default is 'ssh'.
  strict;;
    Require exact command names, instead of allowing unambiguous
    abbreviations. True or False. Default is False.
  style;;
    Name of style to use for command output.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
    If the username in hgrc is empty, it has to be specified manually or
    in a different hgrc file (e.g. $HOME/.hgrc, if the admin set
    "username =" in the system hgrc).
  verbose;;
    Increase the amount of output printed. True or False. Default is False.
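
  A typical per-user [ui] block might look like this (the name,
  address and editor shown are placeholders):

    [ui]
    username = Fred Widget <fred@example.com>
    editor = vim
    merge = hgmerge
    verbose = True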


web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allow_archive;;
    List of archive formats (bz2, gz, zip) allowed for downloading.
    Default is empty.
  allowbz2;;
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
    Default is false.
  allowgz;;
    (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
    Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allow_push;;
    Whether to allow pushing to the repository. If empty or not set,
    push is not allowed. If the special value "*", any remote user
    can push, including unauthenticated users. Otherwise, the remote
    user must have been authenticated, and the authenticated user name
    must be present in this list (separated by whitespace or ",").
    The contents of the allow_push list are examined after the
    deny_push list.
  allowzip;;
    (DEPRECATED) Whether to allow .zip downloading of repo revisions.
    Default is false. This feature creates temporary files.
  baseurl;;
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct URLs.
    Example: "http://hgserver/repos/"
  contact;;
    Name or email address of the person in charge of the repository.
-   Default is "unknown".
+   Defaults to ui.username or $EMAIL or "unknown" if unset or empty.
  deny_push;;
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value "*", all remote users
    are denied push. Otherwise, unauthenticated users are all denied,
    and any authenticated user name present in this list (separated by
    whitespace or ",") is also denied. The contents of the deny_push
    list are examined before the allow_push list.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  encoding;;
    Character encoding name.
    Example: "UTF-8"
  errorlog;;
    Where to output the error log. Default is stderr.
  hidden;;
    Whether to hide the repository in the hgwebdir index. Default is false.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  push_ssl;;
    Whether to require that inbound pushes be transported over SSL to
    prevent password sniffing. Default is true.
  staticurl;;
    Base URL to use for static files. If unset, static files (e.g.
    the hgicon.png favicon) will be served by the CGI script itself.
    Use this setting to serve them directly with the HTTP server.
    Example: "http://hgserver/static/"
  stripes;;
    How many lines a "zebra stripe" should span in multiline output.
    Default is 1; set to 0 to disable.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.
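
  A small [web] example for a repository served over HTTP (the
  address, contact, description and user name are illustrative):

    [web]
    address = 0.0.0.0
    port = 8000
    contact = Joseph User <joe.user@example.com>
    description = A short description of this repository
    allow_push = alice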


AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1), hgignore(5)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005-2007 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).
@@ -1,99 +1,108 @@
1 | # hgweb/common.py - Utility functions needed by hgweb_mod and hgwebdir_mod |
|
1 | # hgweb/common.py - Utility functions needed by hgweb_mod and hgwebdir_mod | |
2 | # |
|
2 | # | |
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> |
|
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> | |
4 | # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com> |
|
4 | # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com> | |
5 | # |
|
5 | # | |
6 | # This software may be used and distributed according to the terms |
|
6 | # This software may be used and distributed according to the terms | |
7 | # of the GNU General Public License, incorporated herein by reference. |
|
7 | # of the GNU General Public License, incorporated herein by reference. | |
8 |
|
8 | |||
9 | import errno, mimetypes, os |
|
9 | import errno, mimetypes, os | |
10 |
|
10 | |||
11 | class ErrorResponse(Exception): |
|
11 | class ErrorResponse(Exception): | |
12 | def __init__(self, code, message=None): |
|
12 | def __init__(self, code, message=None): | |
13 | Exception.__init__(self) |
|
13 | Exception.__init__(self) | |
14 | self.code = code |
|
14 | self.code = code | |
15 | if message: |
|
15 | if message: | |
16 | self.message = message |
|
16 | self.message = message | |
17 | else: |
|
17 | else: | |
18 | self.message = _statusmessage(code) |
|
18 | self.message = _statusmessage(code) | |
19 |
|
19 | |||
20 | def _statusmessage(code): |
|
20 | def _statusmessage(code): | |
21 | from BaseHTTPServer import BaseHTTPRequestHandler |
|
21 | from BaseHTTPServer import BaseHTTPRequestHandler | |
22 | responses = BaseHTTPRequestHandler.responses |
|
22 | responses = BaseHTTPRequestHandler.responses | |
23 | return responses.get(code, ('Error', 'Unknown error'))[0] |
|
23 | return responses.get(code, ('Error', 'Unknown error'))[0] | |
24 |
|
24 | |||
25 | def statusmessage(code): |
|
25 | def statusmessage(code): | |
26 | return '%d %s' % (code, _statusmessage(code)) |
|
26 | return '%d %s' % (code, _statusmessage(code)) | |
27 |
|
27 | |||
28 | def get_mtime(repo_path): |
|
28 | def get_mtime(repo_path): | |
29 | store_path = os.path.join(repo_path, ".hg") |
|
29 | store_path = os.path.join(repo_path, ".hg") | |
30 | if not os.path.isdir(os.path.join(store_path, "data")): |
|
30 | if not os.path.isdir(os.path.join(store_path, "data")): | |
31 | store_path = os.path.join(store_path, "store") |
|
31 | store_path = os.path.join(store_path, "store") | |
32 | cl_path = os.path.join(store_path, "00changelog.i") |
|
32 | cl_path = os.path.join(store_path, "00changelog.i") | |
33 | if os.path.exists(cl_path): |
|
33 | if os.path.exists(cl_path): | |
34 | return os.stat(cl_path).st_mtime |
|
34 | return os.stat(cl_path).st_mtime | |
35 | else: |
|
35 | else: | |
36 | return os.stat(store_path).st_mtime |
|
36 | return os.stat(store_path).st_mtime | |
37 |
|
37 | |||
38 | def staticfile(directory, fname, req): |
|
38 | def staticfile(directory, fname, req): | |
39 | """return a file inside directory with guessed content-type header |
|
39 | """return a file inside directory with guessed content-type header | |
40 |
|
40 | |||
41 | fname always uses '/' as directory separator and isn't allowed to |
|
41 | fname always uses '/' as directory separator and isn't allowed to | |
42 | contain unusual path components. |
|
42 | contain unusual path components. | |
43 | Content-type is guessed using the mimetypes module. |
|
43 | Content-type is guessed using the mimetypes module. | |
44 | Return an empty string if fname is illegal or file not found. |
|
44 | Return an empty string if fname is illegal or file not found. | |
45 |
|
45 | |||
46 | """ |
|
46 | """ | |
47 | parts = fname.split('/') |
|
47 | parts = fname.split('/') | |
48 | path = directory |
|
48 | path = directory | |
49 | for part in parts: |
|
49 | for part in parts: | |
50 | if (part in ('', os.curdir, os.pardir) or |
|
50 | if (part in ('', os.curdir, os.pardir) or | |
51 | os.sep in part or os.altsep is not None and os.altsep in part): |
|
51 | os.sep in part or os.altsep is not None and os.altsep in part): | |
52 | return "" |
|
52 | return "" | |
53 | path = os.path.join(path, part) |
|
53 | path = os.path.join(path, part) | |
54 | try: |
|
54 | try: | |
55 | os.stat(path) |
|
55 | os.stat(path) | |
56 | ct = mimetypes.guess_type(path)[0] or "text/plain" |
|
56 | ct = mimetypes.guess_type(path)[0] or "text/plain" | |
57 | req.header([('Content-type', ct), |
|
57 | req.header([('Content-type', ct), | |
58 | ('Content-length', str(os.path.getsize(path)))]) |
|
58 | ('Content-length', str(os.path.getsize(path)))]) | |
59 | return file(path, 'rb').read() |
|
59 | return file(path, 'rb').read() | |
60 | except TypeError: |
|
60 | except TypeError: | |
61 | raise ErrorResponse(500, 'illegal file name') |
|
61 | raise ErrorResponse(500, 'illegal file name') | |
62 | except OSError, err: |
|
62 | except OSError, err: | |
63 | if err.errno == errno.ENOENT: |
|
63 | if err.errno == errno.ENOENT: | |
64 | raise ErrorResponse(404) |
|
64 | raise ErrorResponse(404) | |
65 | else: |
|
65 | else: | |
66 | raise ErrorResponse(500, err.strerror) |
|
66 | raise ErrorResponse(500, err.strerror) | |
67 |
|
67 | |||
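The component filter at the top of staticfile() is what keeps a request from escaping the template directory; here is a self-contained restatement of that check (the file names are made up):

    import os

    def _is_safe(fname):
        # mirrors the loop in staticfile() above
        for part in fname.split('/'):
            if (part in ('', os.curdir, os.pardir) or
                os.sep in part or (os.altsep is not None and os.altsep in part)):
                return False
        return True

    assert _is_safe('style.css')       # would be served with a guessed content-type
    assert not _is_safe('../hgrc')     # rejected: staticfile() returns ""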
68 | def style_map(templatepath, style): |
|
68 | def style_map(templatepath, style): | |
69 | """Return path to mapfile for a given style. |
|
69 | """Return path to mapfile for a given style. | |
70 |
|
70 | |||
71 | Searches mapfile in the following locations: |
|
71 | Searches mapfile in the following locations: | |
72 | 1. templatepath/style/map |
|
72 | 1. templatepath/style/map | |
73 | 2. templatepath/map-style |
|
73 | 2. templatepath/map-style | |
74 | 3. templatepath/map |
|
74 | 3. templatepath/map | |
75 | """ |
|
75 | """ | |
76 | locations = style and [os.path.join(style, "map"), "map-"+style] or [] |
|
76 | locations = style and [os.path.join(style, "map"), "map-"+style] or [] | |
77 | locations.append("map") |
|
77 | locations.append("map") | |
78 | for location in locations: |
|
78 | for location in locations: | |
79 | mapfile = os.path.join(templatepath, location) |
|
79 | mapfile = os.path.join(templatepath, location) | |
80 | if os.path.isfile(mapfile): |
|
80 | if os.path.isfile(mapfile): | |
81 | return mapfile |
|
81 | return mapfile | |
82 | raise RuntimeError("No hgweb templates found in %r" % templatepath) |
|
82 | raise RuntimeError("No hgweb templates found in %r" % templatepath) | |
83 |
|
83 | |||
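Hypothetical usage of style_map(): for a style named "gitweb" the candidate list built above works out as follows (the template path mentioned is an assumed install location, not mandated by this code):

    import os

    style = "gitweb"                    # hypothetical style name
    locations = style and [os.path.join(style, "map"), "map-" + style] or []
    locations.append("map")
    # -> ['gitweb/map', 'map-gitweb', 'map'] on a POSIX system; style_map()
    #    returns the first of these that exists under templatepath, for
    #    example /usr/share/mercurial/templates.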
84 | def paritygen(stripecount, offset=0): |
|
84 | def paritygen(stripecount, offset=0): | |
85 | """count parity of horizontal stripes for easier reading""" |
|
85 | """count parity of horizontal stripes for easier reading""" | |
86 | if stripecount and offset: |
|
86 | if stripecount and offset: | |
87 | # account for offset, e.g. due to building the list in reverse |
|
87 | # account for offset, e.g. due to building the list in reverse | |
88 | count = (stripecount + offset) % stripecount |
|
88 | count = (stripecount + offset) % stripecount | |
89 | parity = (stripecount + offset) / stripecount & 1 |
|
89 | parity = (stripecount + offset) / stripecount & 1 | |
90 | else: |
|
90 | else: | |
91 | count = 0 |
|
91 | count = 0 | |
92 | parity = 0 |
|
92 | parity = 0 | |
93 | while True: |
|
93 | while True: | |
94 | yield parity |
|
94 | yield parity | |
95 | count += 1 |
|
95 | count += 1 | |
96 | if stripecount and count >= stripecount: |
|
96 | if stripecount and count >= stripecount: | |
97 | parity = 1 - parity |
|
97 | parity = 1 - parity | |
98 | count = 0 |
|
98 | count = 0 | |
99 |
|
99 | |||
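A quick check of the generator above, assuming the module is importable as mercurial.hgweb.common (which matches the file header of this hunk): with two-row stripes the parity flips every second row.

    from mercurial.hgweb.common import paritygen   # import path assumed

    gen = paritygen(2)
    first_six = [gen.next() for _ in range(6)]     # Python 2 generator protocol
    assert first_six == [0, 0, 1, 1, 0, 0]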
|
100 | def get_contact(config): | |||
|
101 | """Return repo contact information or empty string. | |||
|
102 | ||||
|
103 | web.contact is the primary source, but if that is not set, try | |||
|
104 | ui.username or $EMAIL as a fallback to display something useful. | |||
|
105 | """ | |||
|
106 | return (config("web", "contact") or | |||
|
107 | config("ui", "username") or | |||
|
108 | os.environ.get("EMAIL") or "") |
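A sketch of the fallback order the new helper implements, driven by a stub config function with made-up values (the import assumes a Mercurial build that contains this patch):

    from mercurial.hgweb.common import get_contact   # import path assumed

    def config(section, name, default=None):
        values = {("ui", "username"): "Jane Doe <jane@example.com>"}
        return values.get((section, name), default)

    # web.contact is unset but ui.username is, so the username is returned;
    # with both unset the $EMAIL environment variable would be tried, and
    # failing that the empty string.
    assert get_contact(config) == "Jane Doe <jane@example.com>"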
@@ -1,911 +1,909 b'' | |||||
1 | # hgweb/hgweb_mod.py - Web interface for a repository. |
|
1 | # hgweb/hgweb_mod.py - Web interface for a repository. | |
2 | # |
|
2 | # | |
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> |
|
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> | |
4 | # Copyright 2005-2007 Matt Mackall <mpm@selenic.com> |
|
4 | # Copyright 2005-2007 Matt Mackall <mpm@selenic.com> | |
5 | # |
|
5 | # | |
6 | # This software may be used and distributed according to the terms |
|
6 | # This software may be used and distributed according to the terms | |
7 | # of the GNU General Public License, incorporated herein by reference. |
|
7 | # of the GNU General Public License, incorporated herein by reference. | |
8 |
|
8 | |||
9 | import os, mimetypes, re, mimetools, cStringIO |
|
9 | import os, mimetypes, re, mimetools, cStringIO | |
10 | from mercurial.node import * |
|
10 | from mercurial.node import * | |
11 | from mercurial import mdiff, ui, hg, util, archival, patch |
|
11 | from mercurial import mdiff, ui, hg, util, archival, patch | |
12 | from mercurial import revlog, templater |
|
12 | from mercurial import revlog, templater | |
13 | from common import ErrorResponse, get_mtime, style_map, paritygen |
|
13 | from common import ErrorResponse, get_mtime, style_map, paritygen, get_contact | |
14 | from request import wsgirequest |
|
14 | from request import wsgirequest | |
15 | import webcommands, protocol |
|
15 | import webcommands, protocol | |
16 |
|
16 | |||
17 | shortcuts = { |
|
17 | shortcuts = { | |
18 | 'cl': [('cmd', ['changelog']), ('rev', None)], |
|
18 | 'cl': [('cmd', ['changelog']), ('rev', None)], | |
19 | 'sl': [('cmd', ['shortlog']), ('rev', None)], |
|
19 | 'sl': [('cmd', ['shortlog']), ('rev', None)], | |
20 | 'cs': [('cmd', ['changeset']), ('node', None)], |
|
20 | 'cs': [('cmd', ['changeset']), ('node', None)], | |
21 | 'f': [('cmd', ['file']), ('filenode', None)], |
|
21 | 'f': [('cmd', ['file']), ('filenode', None)], | |
22 | 'fl': [('cmd', ['filelog']), ('filenode', None)], |
|
22 | 'fl': [('cmd', ['filelog']), ('filenode', None)], | |
23 | 'fd': [('cmd', ['filediff']), ('node', None)], |
|
23 | 'fd': [('cmd', ['filediff']), ('node', None)], | |
24 | 'fa': [('cmd', ['annotate']), ('filenode', None)], |
|
24 | 'fa': [('cmd', ['annotate']), ('filenode', None)], | |
25 | 'mf': [('cmd', ['manifest']), ('manifest', None)], |
|
25 | 'mf': [('cmd', ['manifest']), ('manifest', None)], | |
26 | 'ca': [('cmd', ['archive']), ('node', None)], |
|
26 | 'ca': [('cmd', ['archive']), ('node', None)], | |
27 | 'tags': [('cmd', ['tags'])], |
|
27 | 'tags': [('cmd', ['tags'])], | |
28 | 'tip': [('cmd', ['changeset']), ('node', ['tip'])], |
|
28 | 'tip': [('cmd', ['changeset']), ('node', ['tip'])], | |
29 | 'static': [('cmd', ['static']), ('file', None)] |
|
29 | 'static': [('cmd', ['static']), ('file', None)] | |
30 | } |
|
30 | } | |
31 |
|
31 | |||
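How this table is used by run_wsgi() further down: a request such as ?cs=a1b2c3 is expanded into ?cmd=changeset&node=a1b2c3 before dispatch. A self-contained sketch with a made-up node id:

    cs = [('cmd', ['changeset']), ('node', None)]   # same as shortcuts['cs'] above
    form = {'cs': ['a1b2c3']}                       # hypothetical parsed query values

    for name, value in cs:
        if value is None:               # None means "reuse the shortcut's own value"
            value = form['cs']
        form[name] = value
    del form['cs']

    assert form == {'cmd': ['changeset'], 'node': ['a1b2c3']}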
32 | def _up(p): |
|
32 | def _up(p): | |
33 | if p[0] != "/": |
|
33 | if p[0] != "/": | |
34 | p = "/" + p |
|
34 | p = "/" + p | |
35 | if p[-1] == "/": |
|
35 | if p[-1] == "/": | |
36 | p = p[:-1] |
|
36 | p = p[:-1] | |
37 | up = os.path.dirname(p) |
|
37 | up = os.path.dirname(p) | |
38 | if up == "/": |
|
38 | if up == "/": | |
39 | return "/" |
|
39 | return "/" | |
40 | return up + "/" |
|
40 | return up + "/" | |
41 |
|
41 | |||
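What the helper above returns for a couple of inputs (the import path is assumed from the file header of this hunk):

    from mercurial.hgweb.hgweb_mod import _up

    assert _up("foo/bar") == "/foo/"    # a leading "/" is added before taking the parent
    assert _up("/foo/") == "/"          # a trailing "/" is stripped first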
42 | def revnavgen(pos, pagelen, limit, nodefunc): |
|
42 | def revnavgen(pos, pagelen, limit, nodefunc): | |
43 | def seq(factor, limit=None): |
|
43 | def seq(factor, limit=None): | |
44 | if limit: |
|
44 | if limit: | |
45 | yield limit |
|
45 | yield limit | |
46 | if limit >= 20 and limit <= 40: |
|
46 | if limit >= 20 and limit <= 40: | |
47 | yield 50 |
|
47 | yield 50 | |
48 | else: |
|
48 | else: | |
49 | yield 1 * factor |
|
49 | yield 1 * factor | |
50 | yield 3 * factor |
|
50 | yield 3 * factor | |
51 | for f in seq(factor * 10): |
|
51 | for f in seq(factor * 10): | |
52 | yield f |
|
52 | yield f | |
53 |
|
53 | |||
54 | def nav(**map): |
|
54 | def nav(**map): | |
55 | l = [] |
|
55 | l = [] | |
56 | last = 0 |
|
56 | last = 0 | |
57 | for f in seq(1, pagelen): |
|
57 | for f in seq(1, pagelen): | |
58 | if f < pagelen or f <= last: |
|
58 | if f < pagelen or f <= last: | |
59 | continue |
|
59 | continue | |
60 | if f > limit: |
|
60 | if f > limit: | |
61 | break |
|
61 | break | |
62 | last = f |
|
62 | last = f | |
63 | if pos + f < limit: |
|
63 | if pos + f < limit: | |
64 | l.append(("+%d" % f, hex(nodefunc(pos + f).node()))) |
|
64 | l.append(("+%d" % f, hex(nodefunc(pos + f).node()))) | |
65 | if pos - f >= 0: |
|
65 | if pos - f >= 0: | |
66 | l.insert(0, ("-%d" % f, hex(nodefunc(pos - f).node()))) |
|
66 | l.insert(0, ("-%d" % f, hex(nodefunc(pos - f).node()))) | |
67 |
|
67 | |||
68 | try: |
|
68 | try: | |
69 | yield {"label": "(0)", "node": hex(nodefunc('0').node())} |
|
69 | yield {"label": "(0)", "node": hex(nodefunc('0').node())} | |
70 |
|
70 | |||
71 | for label, node in l: |
|
71 | for label, node in l: | |
72 | yield {"label": label, "node": node} |
|
72 | yield {"label": label, "node": node} | |
73 |
|
73 | |||
74 | yield {"label": "tip", "node": "tip"} |
|
74 | yield {"label": "tip", "node": "tip"} | |
75 | except hg.RepoError: |
|
75 | except hg.RepoError: | |
76 | pass |
|
76 | pass | |
77 |
|
77 | |||
78 | return nav |
|
78 | return nav | |
79 |
|
79 | |||
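The nested seq()/nav() pair above is what produces the "-100 -30 -10 (0) ... +10 +30 +100" navigation links. Below is a self-contained paraphrase of the step sizes it settles on for a page length of 10; it is not Mercurial's API, just the same arithmetic written out:

    def navsteps(pagelen, limit):
        def seq(factor, first=None):
            if first:
                yield first
                if 20 <= first <= 40:
                    yield 50
            else:
                yield 1 * factor
                yield 3 * factor
            for g in seq(factor * 10):
                yield g
        steps, last = [], 0
        for f in seq(1, pagelen):
            if f < pagelen or f <= last:
                continue                  # skip duplicates and sub-page steps
            if f > limit:
                break                     # never step past the revision count
            last = f
            steps.append(f)
        return steps

    assert navsteps(10, 2000) == [10, 30, 100, 300, 1000]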
80 | class hgweb(object): |
|
80 | class hgweb(object): | |
81 | def __init__(self, repo, name=None): |
|
81 | def __init__(self, repo, name=None): | |
82 | if isinstance(repo, str): |
|
82 | if isinstance(repo, str): | |
83 | parentui = ui.ui(report_untrusted=False, interactive=False) |
|
83 | parentui = ui.ui(report_untrusted=False, interactive=False) | |
84 | self.repo = hg.repository(parentui, repo) |
|
84 | self.repo = hg.repository(parentui, repo) | |
85 | else: |
|
85 | else: | |
86 | self.repo = repo |
|
86 | self.repo = repo | |
87 |
|
87 | |||
88 | self.mtime = -1 |
|
88 | self.mtime = -1 | |
89 | self.reponame = name |
|
89 | self.reponame = name | |
90 | self.archives = 'zip', 'gz', 'bz2' |
|
90 | self.archives = 'zip', 'gz', 'bz2' | |
91 | self.stripecount = 1 |
|
91 | self.stripecount = 1 | |
92 | # a repo owner may set web.templates in .hg/hgrc to get any file |
|
92 | # a repo owner may set web.templates in .hg/hgrc to get any file | |
93 | # readable by the user running the CGI script |
|
93 | # readable by the user running the CGI script | |
94 | self.templatepath = self.config("web", "templates", |
|
94 | self.templatepath = self.config("web", "templates", | |
95 | templater.templatepath(), |
|
95 | templater.templatepath(), | |
96 | untrusted=False) |
|
96 | untrusted=False) | |
97 |
|
97 | |||
98 | # The CGI scripts are often run by a user different from the repo owner. |
|
98 | # The CGI scripts are often run by a user different from the repo owner. | |
99 | # Trust the settings from the .hg/hgrc files by default. |
|
99 | # Trust the settings from the .hg/hgrc files by default. | |
100 | def config(self, section, name, default=None, untrusted=True): |
|
100 | def config(self, section, name, default=None, untrusted=True): | |
101 | return self.repo.ui.config(section, name, default, |
|
101 | return self.repo.ui.config(section, name, default, | |
102 | untrusted=untrusted) |
|
102 | untrusted=untrusted) | |
103 |
|
103 | |||
104 | def configbool(self, section, name, default=False, untrusted=True): |
|
104 | def configbool(self, section, name, default=False, untrusted=True): | |
105 | return self.repo.ui.configbool(section, name, default, |
|
105 | return self.repo.ui.configbool(section, name, default, | |
106 | untrusted=untrusted) |
|
106 | untrusted=untrusted) | |
107 |
|
107 | |||
108 | def configlist(self, section, name, default=None, untrusted=True): |
|
108 | def configlist(self, section, name, default=None, untrusted=True): | |
109 | return self.repo.ui.configlist(section, name, default, |
|
109 | return self.repo.ui.configlist(section, name, default, | |
110 | untrusted=untrusted) |
|
110 | untrusted=untrusted) | |
111 |
|
111 | |||
112 | def refresh(self): |
|
112 | def refresh(self): | |
113 | mtime = get_mtime(self.repo.root) |
|
113 | mtime = get_mtime(self.repo.root) | |
114 | if mtime != self.mtime: |
|
114 | if mtime != self.mtime: | |
115 | self.mtime = mtime |
|
115 | self.mtime = mtime | |
116 | self.repo = hg.repository(self.repo.ui, self.repo.root) |
|
116 | self.repo = hg.repository(self.repo.ui, self.repo.root) | |
117 | self.maxchanges = int(self.config("web", "maxchanges", 10)) |
|
117 | self.maxchanges = int(self.config("web", "maxchanges", 10)) | |
118 | self.stripecount = int(self.config("web", "stripes", 1)) |
|
118 | self.stripecount = int(self.config("web", "stripes", 1)) | |
119 | self.maxshortchanges = int(self.config("web", "maxshortchanges", 60)) |
|
119 | self.maxshortchanges = int(self.config("web", "maxshortchanges", 60)) | |
120 | self.maxfiles = int(self.config("web", "maxfiles", 10)) |
|
120 | self.maxfiles = int(self.config("web", "maxfiles", 10)) | |
121 | self.allowpull = self.configbool("web", "allowpull", True) |
|
121 | self.allowpull = self.configbool("web", "allowpull", True) | |
122 | self.encoding = self.config("web", "encoding", util._encoding) |
|
122 | self.encoding = self.config("web", "encoding", util._encoding) | |
123 |
|
123 | |||
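The options read in refresh() map directly onto [web] entries in the repository's hgrc; for example, with the hypothetical file sketched in the comments below, the coercion works out as follows:

    # Hypothetical .hg/hgrc of the served repository:
    #
    #   [web]
    #   maxchanges = 20
    #   stripes = 2
    #
    # config() hands back raw strings (or the built-in default), so refresh()
    # coerces them:
    maxchanges = int("20")        # int(self.config("web", "maxchanges", 10))
    stripecount = int("2")        # int(self.config("web", "stripes", 1))
    assert (maxchanges, stripecount) == (20, 2)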
124 | def run(self): |
|
124 | def run(self): | |
125 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): |
|
125 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): | |
126 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") |
|
126 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") | |
127 | import mercurial.hgweb.wsgicgi as wsgicgi |
|
127 | import mercurial.hgweb.wsgicgi as wsgicgi | |
128 | wsgicgi.launch(self) |
|
128 | wsgicgi.launch(self) | |
129 |
|
129 | |||
130 | def __call__(self, env, respond): |
|
130 | def __call__(self, env, respond): | |
131 | req = wsgirequest(env, respond) |
|
131 | req = wsgirequest(env, respond) | |
132 | self.run_wsgi(req) |
|
132 | self.run_wsgi(req) | |
133 | return req |
|
133 | return req | |
134 |
|
134 | |||
135 | def run_wsgi(self, req): |
|
135 | def run_wsgi(self, req): | |
136 |
|
136 | |||
137 | self.refresh() |
|
137 | self.refresh() | |
138 |
|
138 | |||
139 | # expand form shortcuts |
|
139 | # expand form shortcuts | |
140 |
|
140 | |||
141 | for k in shortcuts.iterkeys(): |
|
141 | for k in shortcuts.iterkeys(): | |
142 | if k in req.form: |
|
142 | if k in req.form: | |
143 | for name, value in shortcuts[k]: |
|
143 | for name, value in shortcuts[k]: | |
144 | if value is None: |
|
144 | if value is None: | |
145 | value = req.form[k] |
|
145 | value = req.form[k] | |
146 | req.form[name] = value |
|
146 | req.form[name] = value | |
147 | del req.form[k] |
|
147 | del req.form[k] | |
148 |
|
148 | |||
149 | # work with CGI variables to create coherent structure |
|
149 | # work with CGI variables to create coherent structure | |
150 | # use SCRIPT_NAME, PATH_INFO and QUERY_STRING as well as our REPO_NAME |
|
150 | # use SCRIPT_NAME, PATH_INFO and QUERY_STRING as well as our REPO_NAME | |
151 |
|
151 | |||
152 | req.url = req.env['SCRIPT_NAME'] |
|
152 | req.url = req.env['SCRIPT_NAME'] | |
153 | if not req.url.endswith('/'): |
|
153 | if not req.url.endswith('/'): | |
154 | req.url += '/' |
|
154 | req.url += '/' | |
155 | if req.env.has_key('REPO_NAME'): |
|
155 | if req.env.has_key('REPO_NAME'): | |
156 | req.url += req.env['REPO_NAME'] + '/' |
|
156 | req.url += req.env['REPO_NAME'] + '/' | |
157 |
|
157 | |||
158 | if req.env.get('PATH_INFO'): |
|
158 | if req.env.get('PATH_INFO'): | |
159 | parts = req.env.get('PATH_INFO').strip('/').split('/') |
|
159 | parts = req.env.get('PATH_INFO').strip('/').split('/') | |
160 | repo_parts = req.env.get('REPO_NAME', '').split('/') |
|
160 | repo_parts = req.env.get('REPO_NAME', '').split('/') | |
161 | if parts[:len(repo_parts)] == repo_parts: |
|
161 | if parts[:len(repo_parts)] == repo_parts: | |
162 | parts = parts[len(repo_parts):] |
|
162 | parts = parts[len(repo_parts):] | |
163 | query = '/'.join(parts) |
|
163 | query = '/'.join(parts) | |
164 | else: |
|
164 | else: | |
165 | query = req.env['QUERY_STRING'].split('&', 1)[0] |
|
165 | query = req.env['QUERY_STRING'].split('&', 1)[0] | |
166 | query = query.split(';', 1)[0] |
|
166 | query = query.split(';', 1)[0] | |
167 |
|
167 | |||
168 | # translate user-visible url structure to internal structure |
|
168 | # translate user-visible url structure to internal structure | |
169 |
|
169 | |||
170 | args = query.split('/', 2) |
|
170 | args = query.split('/', 2) | |
171 | if 'cmd' not in req.form and args and args[0]: |
|
171 | if 'cmd' not in req.form and args and args[0]: | |
172 |
|
172 | |||
173 | cmd = args.pop(0) |
|
173 | cmd = args.pop(0) | |
174 | style = cmd.rfind('-') |
|
174 | style = cmd.rfind('-') | |
175 | if style != -1: |
|
175 | if style != -1: | |
176 | req.form['style'] = [cmd[:style]] |
|
176 | req.form['style'] = [cmd[:style]] | |
177 | cmd = cmd[style+1:] |
|
177 | cmd = cmd[style+1:] | |
178 |
|
178 | |||
179 | # avoid accepting e.g. style parameter as command |
|
179 | # avoid accepting e.g. style parameter as command | |
180 | if hasattr(webcommands, cmd) or hasattr(protocol, cmd): |
|
180 | if hasattr(webcommands, cmd) or hasattr(protocol, cmd): | |
181 | req.form['cmd'] = [cmd] |
|
181 | req.form['cmd'] = [cmd] | |
182 |
|
182 | |||
183 | if args and args[0]: |
|
183 | if args and args[0]: | |
184 | node = args.pop(0) |
|
184 | node = args.pop(0) | |
185 | req.form['node'] = [node] |
|
185 | req.form['node'] = [node] | |
186 | if args: |
|
186 | if args: | |
187 | req.form['file'] = args |
|
187 | req.form['file'] = args | |
188 |
|
188 | |||
189 | if cmd == 'static': |
|
189 | if cmd == 'static': | |
190 | req.form['file'] = req.form['node'] |
|
190 | req.form['file'] = req.form['node'] | |
191 | elif cmd == 'archive': |
|
191 | elif cmd == 'archive': | |
192 | fn = req.form['node'][0] |
|
192 | fn = req.form['node'][0] | |
193 | for type_, spec in self.archive_specs.iteritems(): |
|
193 | for type_, spec in self.archive_specs.iteritems(): | |
194 | ext = spec[2] |
|
194 | ext = spec[2] | |
195 | if fn.endswith(ext): |
|
195 | if fn.endswith(ext): | |
196 | req.form['node'] = [fn[:-len(ext)]] |
|
196 | req.form['node'] = [fn[:-len(ext)]] | |
197 | req.form['type'] = [type_] |
|
197 | req.form['type'] = [type_] | |
198 |
|
198 | |||
199 | # actually process the request |
|
199 | # actually process the request | |
200 |
|
200 | |||
201 | try: |
|
201 | try: | |
202 |
|
202 | |||
203 | cmd = req.form.get('cmd', [''])[0] |
|
203 | cmd = req.form.get('cmd', [''])[0] | |
204 | if hasattr(protocol, cmd): |
|
204 | if hasattr(protocol, cmd): | |
205 | method = getattr(protocol, cmd) |
|
205 | method = getattr(protocol, cmd) | |
206 | method(self, req) |
|
206 | method(self, req) | |
207 | else: |
|
207 | else: | |
208 | tmpl = self.templater(req) |
|
208 | tmpl = self.templater(req) | |
209 | if cmd == '': |
|
209 | if cmd == '': | |
210 | req.form['cmd'] = [tmpl.cache['default']] |
|
210 | req.form['cmd'] = [tmpl.cache['default']] | |
211 | cmd = req.form['cmd'][0] |
|
211 | cmd = req.form['cmd'][0] | |
212 | method = getattr(webcommands, cmd) |
|
212 | method = getattr(webcommands, cmd) | |
213 | method(self, req, tmpl) |
|
213 | method(self, req, tmpl) | |
214 | del tmpl |
|
214 | del tmpl | |
215 |
|
215 | |||
216 | except revlog.LookupError, err: |
|
216 | except revlog.LookupError, err: | |
217 | req.respond(404, tmpl( |
|
217 | req.respond(404, tmpl( | |
218 | 'error', error='revision not found: %s' % err.name)) |
|
218 | 'error', error='revision not found: %s' % err.name)) | |
219 | except (hg.RepoError, revlog.RevlogError), inst: |
|
219 | except (hg.RepoError, revlog.RevlogError), inst: | |
220 | req.respond('500 Internal Server Error', |
|
220 | req.respond('500 Internal Server Error', | |
221 | tmpl('error', error=str(inst))) |
|
221 | tmpl('error', error=str(inst))) | |
222 | except ErrorResponse, inst: |
|
222 | except ErrorResponse, inst: | |
223 | req.respond(inst.code, tmpl('error', error=inst.message)) |
|
223 | req.respond(inst.code, tmpl('error', error=inst.message)) | |
224 | except AttributeError: |
|
224 | except AttributeError: | |
225 | req.respond(400, tmpl('error', error='No such method: ' + cmd)) |
|
225 | req.respond(400, tmpl('error', error='No such method: ' + cmd)) | |
226 |
|
226 | |||
227 | def templater(self, req): |
|
227 | def templater(self, req): | |
228 |
|
228 | |||
229 | # determine scheme, port and server name |
|
229 | # determine scheme, port and server name | |
230 | # this is needed to create absolute urls |
|
230 | # this is needed to create absolute urls | |
231 |
|
231 | |||
232 | proto = req.env.get('wsgi.url_scheme') |
|
232 | proto = req.env.get('wsgi.url_scheme') | |
233 | if proto == 'https': |
|
233 | if proto == 'https': | |
234 | proto = 'https' |
|
234 | proto = 'https' | |
235 | default_port = "443" |
|
235 | default_port = "443" | |
236 | else: |
|
236 | else: | |
237 | proto = 'http' |
|
237 | proto = 'http' | |
238 | default_port = "80" |
|
238 | default_port = "80" | |
239 |
|
239 | |||
240 | port = req.env["SERVER_PORT"] |
|
240 | port = req.env["SERVER_PORT"] | |
241 | port = port != default_port and (":" + port) or "" |
|
241 | port = port != default_port and (":" + port) or "" | |
242 | urlbase = '%s://%s%s' % (proto, req.env['SERVER_NAME'], port) |
|
242 | urlbase = '%s://%s%s' % (proto, req.env['SERVER_NAME'], port) | |
243 | staticurl = self.config("web", "staticurl") or req.url + 'static/' |
|
243 | staticurl = self.config("web", "staticurl") or req.url + 'static/' | |
244 | if not staticurl.endswith('/'): |
|
244 | if not staticurl.endswith('/'): | |
245 | staticurl += '/' |
|
245 | staticurl += '/' | |
246 |
|
246 | |||
247 | # some functions for the templater |
|
247 | # some functions for the templater | |
248 |
|
248 | |||
249 | def header(**map): |
|
249 | def header(**map): | |
250 | header_file = cStringIO.StringIO( |
|
250 | header_file = cStringIO.StringIO( | |
251 | ''.join(tmpl("header", encoding=self.encoding, **map))) |
|
251 | ''.join(tmpl("header", encoding=self.encoding, **map))) | |
252 | msg = mimetools.Message(header_file, 0) |
|
252 | msg = mimetools.Message(header_file, 0) | |
253 | req.header(msg.items()) |
|
253 | req.header(msg.items()) | |
254 | yield header_file.read() |
|
254 | yield header_file.read() | |
255 |
|
255 | |||
256 | def rawfileheader(**map): |
|
256 | def rawfileheader(**map): | |
257 | req.header([('Content-type', map['mimetype']), |
|
257 | req.header([('Content-type', map['mimetype']), | |
258 | ('Content-disposition', 'filename=%s' % map['file']), |
|
258 | ('Content-disposition', 'filename=%s' % map['file']), | |
259 | ('Content-length', str(len(map['raw'])))]) |
|
259 | ('Content-length', str(len(map['raw'])))]) | |
260 | yield '' |
|
260 | yield '' | |
261 |
|
261 | |||
262 | def footer(**map): |
|
262 | def footer(**map): | |
263 | yield tmpl("footer", **map) |
|
263 | yield tmpl("footer", **map) | |
264 |
|
264 | |||
265 | def motd(**map): |
|
265 | def motd(**map): | |
266 | yield self.config("web", "motd", "") |
|
266 | yield self.config("web", "motd", "") | |
267 |
|
267 | |||
268 | def sessionvars(**map): |
|
268 | def sessionvars(**map): | |
269 | fields = [] |
|
269 | fields = [] | |
270 | if req.form.has_key('style'): |
|
270 | if req.form.has_key('style'): | |
271 | style = req.form['style'][0] |
|
271 | style = req.form['style'][0] | |
272 | if style != self.config('web', 'style', ''): |
|
272 | if style != self.config('web', 'style', ''): | |
273 | fields.append(('style', style)) |
|
273 | fields.append(('style', style)) | |
274 |
|
274 | |||
275 | separator = req.url[-1] == '?' and ';' or '?' |
|
275 | separator = req.url[-1] == '?' and ';' or '?' | |
276 | for name, value in fields: |
|
276 | for name, value in fields: | |
277 | yield dict(name=name, value=value, separator=separator) |
|
277 | yield dict(name=name, value=value, separator=separator) | |
278 | separator = ';' |
|
278 | separator = ';' | |
279 |
|
279 | |||
280 | # figure out which style to use |
|
280 | # figure out which style to use | |
281 |
|
281 | |||
282 | style = self.config("web", "style", "") |
|
282 | style = self.config("web", "style", "") | |
283 | if req.form.has_key('style'): |
|
283 | if req.form.has_key('style'): | |
284 | style = req.form['style'][0] |
|
284 | style = req.form['style'][0] | |
285 | mapfile = style_map(self.templatepath, style) |
|
285 | mapfile = style_map(self.templatepath, style) | |
286 |
|
286 | |||
287 | if not self.reponame: |
|
287 | if not self.reponame: | |
288 | self.reponame = (self.config("web", "name") |
|
288 | self.reponame = (self.config("web", "name") | |
289 | or req.env.get('REPO_NAME') |
|
289 | or req.env.get('REPO_NAME') | |
290 | or req.url.strip('/') or self.repo.root) |
|
290 | or req.url.strip('/') or self.repo.root) | |
291 |
|
291 | |||
292 | # create the templater |
|
292 | # create the templater | |
293 |
|
293 | |||
294 | tmpl = templater.templater(mapfile, templater.common_filters, |
|
294 | tmpl = templater.templater(mapfile, templater.common_filters, | |
295 | defaults={"url": req.url, |
|
295 | defaults={"url": req.url, | |
296 | "staticurl": staticurl, |
|
296 | "staticurl": staticurl, | |
297 | "urlbase": urlbase, |
|
297 | "urlbase": urlbase, | |
298 | "repo": self.reponame, |
|
298 | "repo": self.reponame, | |
299 | "header": header, |
|
299 | "header": header, | |
300 | "footer": footer, |
|
300 | "footer": footer, | |
301 | "motd": motd, |
|
301 | "motd": motd, | |
302 | "rawfileheader": rawfileheader, |
|
302 | "rawfileheader": rawfileheader, | |
303 | "sessionvars": sessionvars |
|
303 | "sessionvars": sessionvars | |
304 | }) |
|
304 | }) | |
305 | return tmpl |
|
305 | return tmpl | |
306 |
|
306 | |||
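A self-contained restatement of the urlbase computation at the top of templater(), with made-up server values; it only shows the default-port suppression, not the rest of the template setup:

    def urlbase(scheme, server_name, server_port):
        default_port = scheme == 'https' and "443" or "80"
        port = server_port != default_port and (":" + server_port) or ""
        return '%s://%s%s' % (scheme, server_name, port)

    assert urlbase('http', 'hg.example.com', '80') == 'http://hg.example.com'
    assert urlbase('http', 'hg.example.com', '8000') == 'http://hg.example.com:8000'
    assert urlbase('https', 'hg.example.com', '443') == 'https://hg.example.com'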
307 | def archivelist(self, nodeid): |
|
307 | def archivelist(self, nodeid): | |
308 | allowed = self.configlist("web", "allow_archive") |
|
308 | allowed = self.configlist("web", "allow_archive") | |
309 | for i, spec in self.archive_specs.iteritems(): |
|
309 | for i, spec in self.archive_specs.iteritems(): | |
310 | if i in allowed or self.configbool("web", "allow" + i): |
|
310 | if i in allowed or self.configbool("web", "allow" + i): | |
311 | yield {"type" : i, "extension" : spec[2], "node" : nodeid} |
|
311 | yield {"type" : i, "extension" : spec[2], "node" : nodeid} | |
312 |
|
312 | |||
313 | def listfilediffs(self, tmpl, files, changeset): |
|
313 | def listfilediffs(self, tmpl, files, changeset): | |
314 | for f in files[:self.maxfiles]: |
|
314 | for f in files[:self.maxfiles]: | |
315 | yield tmpl("filedifflink", node=hex(changeset), file=f) |
|
315 | yield tmpl("filedifflink", node=hex(changeset), file=f) | |
316 | if len(files) > self.maxfiles: |
|
316 | if len(files) > self.maxfiles: | |
317 | yield tmpl("fileellipses") |
|
317 | yield tmpl("fileellipses") | |
318 |
|
318 | |||
319 | def siblings(self, siblings=[], hiderev=None, **args): |
|
319 | def siblings(self, siblings=[], hiderev=None, **args): | |
320 | siblings = [s for s in siblings if s.node() != nullid] |
|
320 | siblings = [s for s in siblings if s.node() != nullid] | |
321 | if len(siblings) == 1 and siblings[0].rev() == hiderev: |
|
321 | if len(siblings) == 1 and siblings[0].rev() == hiderev: | |
322 | return |
|
322 | return | |
323 | for s in siblings: |
|
323 | for s in siblings: | |
324 | d = {'node': hex(s.node()), 'rev': s.rev()} |
|
324 | d = {'node': hex(s.node()), 'rev': s.rev()} | |
325 | if hasattr(s, 'path'): |
|
325 | if hasattr(s, 'path'): | |
326 | d['file'] = s.path() |
|
326 | d['file'] = s.path() | |
327 | d.update(args) |
|
327 | d.update(args) | |
328 | yield d |
|
328 | yield d | |
329 |
|
329 | |||
330 | def renamelink(self, fl, node): |
|
330 | def renamelink(self, fl, node): | |
331 | r = fl.renamed(node) |
|
331 | r = fl.renamed(node) | |
332 | if r: |
|
332 | if r: | |
333 | return [dict(file=r[0], node=hex(r[1]))] |
|
333 | return [dict(file=r[0], node=hex(r[1]))] | |
334 | return [] |
|
334 | return [] | |
335 |
|
335 | |||
336 | def nodetagsdict(self, node): |
|
336 | def nodetagsdict(self, node): | |
337 | return [{"name": i} for i in self.repo.nodetags(node)] |
|
337 | return [{"name": i} for i in self.repo.nodetags(node)] | |
338 |
|
338 | |||
339 | def nodebranchdict(self, ctx): |
|
339 | def nodebranchdict(self, ctx): | |
340 | branches = [] |
|
340 | branches = [] | |
341 | branch = ctx.branch() |
|
341 | branch = ctx.branch() | |
342 | # If this is an empty repo, ctx.node() == nullid, |
|
342 | # If this is an empty repo, ctx.node() == nullid, | |
343 | # ctx.branch() == 'default', but branchtags() is |
|
343 | # ctx.branch() == 'default', but branchtags() is | |
344 | # an empty dict. Using dict.get avoids a traceback. |
|
344 | # an empty dict. Using dict.get avoids a traceback. | |
345 | if self.repo.branchtags().get(branch) == ctx.node(): |
|
345 | if self.repo.branchtags().get(branch) == ctx.node(): | |
346 | branches.append({"name": branch}) |
|
346 | branches.append({"name": branch}) | |
347 | return branches |
|
347 | return branches | |
348 |
|
348 | |||
349 | def showtag(self, tmpl, t1, node=nullid, **args): |
|
349 | def showtag(self, tmpl, t1, node=nullid, **args): | |
350 | for t in self.repo.nodetags(node): |
|
350 | for t in self.repo.nodetags(node): | |
351 | yield tmpl(t1, tag=t, **args) |
|
351 | yield tmpl(t1, tag=t, **args) | |
352 |
|
352 | |||
353 | def diff(self, tmpl, node1, node2, files): |
|
353 | def diff(self, tmpl, node1, node2, files): | |
354 | def filterfiles(filters, files): |
|
354 | def filterfiles(filters, files): | |
355 | l = [x for x in files if x in filters] |
|
355 | l = [x for x in files if x in filters] | |
356 |
|
356 | |||
357 | for t in filters: |
|
357 | for t in filters: | |
358 | if t and t[-1] != os.sep: |
|
358 | if t and t[-1] != os.sep: | |
359 | t += os.sep |
|
359 | t += os.sep | |
360 | l += [x for x in files if x.startswith(t)] |
|
360 | l += [x for x in files if x.startswith(t)] | |
361 | return l |
|
361 | return l | |
362 |
|
362 | |||
363 | parity = paritygen(self.stripecount) |
|
363 | parity = paritygen(self.stripecount) | |
364 | def diffblock(diff, f, fn): |
|
364 | def diffblock(diff, f, fn): | |
365 | yield tmpl("diffblock", |
|
365 | yield tmpl("diffblock", | |
366 | lines=prettyprintlines(diff), |
|
366 | lines=prettyprintlines(diff), | |
367 | parity=parity.next(), |
|
367 | parity=parity.next(), | |
368 | file=f, |
|
368 | file=f, | |
369 | filenode=hex(fn or nullid)) |
|
369 | filenode=hex(fn or nullid)) | |
370 |
|
370 | |||
371 | def prettyprintlines(diff): |
|
371 | def prettyprintlines(diff): | |
372 | for l in diff.splitlines(1): |
|
372 | for l in diff.splitlines(1): | |
373 | if l.startswith('+'): |
|
373 | if l.startswith('+'): | |
374 | yield tmpl("difflineplus", line=l) |
|
374 | yield tmpl("difflineplus", line=l) | |
375 | elif l.startswith('-'): |
|
375 | elif l.startswith('-'): | |
376 | yield tmpl("difflineminus", line=l) |
|
376 | yield tmpl("difflineminus", line=l) | |
377 | elif l.startswith('@'): |
|
377 | elif l.startswith('@'): | |
378 | yield tmpl("difflineat", line=l) |
|
378 | yield tmpl("difflineat", line=l) | |
379 | else: |
|
379 | else: | |
380 | yield tmpl("diffline", line=l) |
|
380 | yield tmpl("diffline", line=l) | |
381 |
|
381 | |||
382 | r = self.repo |
|
382 | r = self.repo | |
383 | c1 = r.changectx(node1) |
|
383 | c1 = r.changectx(node1) | |
384 | c2 = r.changectx(node2) |
|
384 | c2 = r.changectx(node2) | |
385 | date1 = util.datestr(c1.date()) |
|
385 | date1 = util.datestr(c1.date()) | |
386 | date2 = util.datestr(c2.date()) |
|
386 | date2 = util.datestr(c2.date()) | |
387 |
|
387 | |||
388 | modified, added, removed, deleted, unknown = r.status(node1, node2)[:5] |
|
388 | modified, added, removed, deleted, unknown = r.status(node1, node2)[:5] | |
389 | if files: |
|
389 | if files: | |
390 | modified, added, removed = map(lambda x: filterfiles(files, x), |
|
390 | modified, added, removed = map(lambda x: filterfiles(files, x), | |
391 | (modified, added, removed)) |
|
391 | (modified, added, removed)) | |
392 |
|
392 | |||
393 | diffopts = patch.diffopts(self.repo.ui, untrusted=True) |
|
393 | diffopts = patch.diffopts(self.repo.ui, untrusted=True) | |
394 | for f in modified: |
|
394 | for f in modified: | |
395 | to = c1.filectx(f).data() |
|
395 | to = c1.filectx(f).data() | |
396 | tn = c2.filectx(f).data() |
|
396 | tn = c2.filectx(f).data() | |
397 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, |
|
397 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, | |
398 | opts=diffopts), f, tn) |
|
398 | opts=diffopts), f, tn) | |
399 | for f in added: |
|
399 | for f in added: | |
400 | to = None |
|
400 | to = None | |
401 | tn = c2.filectx(f).data() |
|
401 | tn = c2.filectx(f).data() | |
402 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, |
|
402 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, | |
403 | opts=diffopts), f, tn) |
|
403 | opts=diffopts), f, tn) | |
404 | for f in removed: |
|
404 | for f in removed: | |
405 | to = c1.filectx(f).data() |
|
405 | to = c1.filectx(f).data() | |
406 | tn = None |
|
406 | tn = None | |
407 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, |
|
407 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, f, | |
408 | opts=diffopts), f, tn) |
|
408 | opts=diffopts), f, tn) | |
409 |
|
409 | |||
410 | def changelog(self, tmpl, ctx, shortlog=False): |
|
410 | def changelog(self, tmpl, ctx, shortlog=False): | |
411 | def changelist(limit=0,**map): |
|
411 | def changelist(limit=0,**map): | |
412 | cl = self.repo.changelog |
|
412 | cl = self.repo.changelog | |
413 | l = [] # build a list in forward order for efficiency |
|
413 | l = [] # build a list in forward order for efficiency | |
414 | for i in xrange(start, end): |
|
414 | for i in xrange(start, end): | |
415 | ctx = self.repo.changectx(i) |
|
415 | ctx = self.repo.changectx(i) | |
416 | n = ctx.node() |
|
416 | n = ctx.node() | |
417 |
|
417 | |||
418 | l.insert(0, {"parity": parity.next(), |
|
418 | l.insert(0, {"parity": parity.next(), | |
419 | "author": ctx.user(), |
|
419 | "author": ctx.user(), | |
420 | "parent": self.siblings(ctx.parents(), i - 1), |
|
420 | "parent": self.siblings(ctx.parents(), i - 1), | |
421 | "child": self.siblings(ctx.children(), i + 1), |
|
421 | "child": self.siblings(ctx.children(), i + 1), | |
422 | "changelogtag": self.showtag("changelogtag",n), |
|
422 | "changelogtag": self.showtag("changelogtag",n), | |
423 | "desc": ctx.description(), |
|
423 | "desc": ctx.description(), | |
424 | "date": ctx.date(), |
|
424 | "date": ctx.date(), | |
425 | "files": self.listfilediffs(tmpl, ctx.files(), n), |
|
425 | "files": self.listfilediffs(tmpl, ctx.files(), n), | |
426 | "rev": i, |
|
426 | "rev": i, | |
427 | "node": hex(n), |
|
427 | "node": hex(n), | |
428 | "tags": self.nodetagsdict(n), |
|
428 | "tags": self.nodetagsdict(n), | |
429 | "branches": self.nodebranchdict(ctx)}) |
|
429 | "branches": self.nodebranchdict(ctx)}) | |
430 |
|
430 | |||
431 | if limit > 0: |
|
431 | if limit > 0: | |
432 | l = l[:limit] |
|
432 | l = l[:limit] | |
433 |
|
433 | |||
434 | for e in l: |
|
434 | for e in l: | |
435 | yield e |
|
435 | yield e | |
436 |
|
436 | |||
437 | maxchanges = shortlog and self.maxshortchanges or self.maxchanges |
|
437 | maxchanges = shortlog and self.maxshortchanges or self.maxchanges | |
438 | cl = self.repo.changelog |
|
438 | cl = self.repo.changelog | |
439 | count = cl.count() |
|
439 | count = cl.count() | |
440 | pos = ctx.rev() |
|
440 | pos = ctx.rev() | |
441 | start = max(0, pos - maxchanges + 1) |
|
441 | start = max(0, pos - maxchanges + 1) | |
442 | end = min(count, start + maxchanges) |
|
442 | end = min(count, start + maxchanges) | |
443 | pos = end - 1 |
|
443 | pos = end - 1 | |
444 | parity = paritygen(self.stripecount, offset=start-end) |
|
444 | parity = paritygen(self.stripecount, offset=start-end) | |
445 |
|
445 | |||
446 | changenav = revnavgen(pos, maxchanges, count, self.repo.changectx) |
|
446 | changenav = revnavgen(pos, maxchanges, count, self.repo.changectx) | |
447 |
|
447 | |||
448 | yield tmpl(shortlog and 'shortlog' or 'changelog', |
|
448 | yield tmpl(shortlog and 'shortlog' or 'changelog', | |
449 | changenav=changenav, |
|
449 | changenav=changenav, | |
450 | node=hex(cl.tip()), |
|
450 | node=hex(cl.tip()), | |
451 | rev=pos, changesets=count, |
|
451 | rev=pos, changesets=count, | |
452 | entries=lambda **x: changelist(limit=0,**x), |
|
452 | entries=lambda **x: changelist(limit=0,**x), | |
453 | latestentry=lambda **x: changelist(limit=1,**x), |
|
453 | latestentry=lambda **x: changelist(limit=1,**x), | |
454 | archives=self.archivelist("tip")) |
|
454 | archives=self.archivelist("tip")) | |
455 |
|
455 | |||
456 | def search(self, tmpl, query): |
|
456 | def search(self, tmpl, query): | |
457 |
|
457 | |||
458 | def changelist(**map): |
|
458 | def changelist(**map): | |
459 | cl = self.repo.changelog |
|
459 | cl = self.repo.changelog | |
460 | count = 0 |
|
460 | count = 0 | |
461 | qw = query.lower().split() |
|
461 | qw = query.lower().split() | |
462 |
|
462 | |||
463 | def revgen(): |
|
463 | def revgen(): | |
464 | for i in xrange(cl.count() - 1, 0, -100): |
|
464 | for i in xrange(cl.count() - 1, 0, -100): | |
465 | l = [] |
|
465 | l = [] | |
466 | for j in xrange(max(0, i - 100), i): |
|
466 | for j in xrange(max(0, i - 100), i): | |
467 | ctx = self.repo.changectx(j) |
|
467 | ctx = self.repo.changectx(j) | |
468 | l.append(ctx) |
|
468 | l.append(ctx) | |
469 | l.reverse() |
|
469 | l.reverse() | |
470 | for e in l: |
|
470 | for e in l: | |
471 | yield e |
|
471 | yield e | |
472 |
|
472 | |||
473 | for ctx in revgen(): |
|
473 | for ctx in revgen(): | |
474 | miss = 0 |
|
474 | miss = 0 | |
475 | for q in qw: |
|
475 | for q in qw: | |
476 | if not (q in ctx.user().lower() or |
|
476 | if not (q in ctx.user().lower() or | |
477 | q in ctx.description().lower() or |
|
477 | q in ctx.description().lower() or | |
478 | q in " ".join(ctx.files()).lower()): |
|
478 | q in " ".join(ctx.files()).lower()): | |
479 | miss = 1 |
|
479 | miss = 1 | |
480 | break |
|
480 | break | |
481 | if miss: |
|
481 | if miss: | |
482 | continue |
|
482 | continue | |
483 |
|
483 | |||
484 | count += 1 |
|
484 | count += 1 | |
485 | n = ctx.node() |
|
485 | n = ctx.node() | |
486 |
|
486 | |||
487 | yield tmpl('searchentry', |
|
487 | yield tmpl('searchentry', | |
488 | parity=parity.next(), |
|
488 | parity=parity.next(), | |
489 | author=ctx.user(), |
|
489 | author=ctx.user(), | |
490 | parent=self.siblings(ctx.parents()), |
|
490 | parent=self.siblings(ctx.parents()), | |
491 | child=self.siblings(ctx.children()), |
|
491 | child=self.siblings(ctx.children()), | |
492 | changelogtag=self.showtag("changelogtag",n), |
|
492 | changelogtag=self.showtag("changelogtag",n), | |
493 | desc=ctx.description(), |
|
493 | desc=ctx.description(), | |
494 | date=ctx.date(), |
|
494 | date=ctx.date(), | |
495 | files=self.listfilediffs(tmpl, ctx.files(), n), |
|
495 | files=self.listfilediffs(tmpl, ctx.files(), n), | |
496 | rev=ctx.rev(), |
|
496 | rev=ctx.rev(), | |
497 | node=hex(n), |
|
497 | node=hex(n), | |
498 | tags=self.nodetagsdict(n), |
|
498 | tags=self.nodetagsdict(n), | |
499 | branches=self.nodebranchdict(ctx)) |
|
499 | branches=self.nodebranchdict(ctx)) | |
500 |
|
500 | |||
501 | if count >= self.maxchanges: |
|
501 | if count >= self.maxchanges: | |
502 | break |
|
502 | break | |
503 |
|
503 | |||
504 | cl = self.repo.changelog |
|
504 | cl = self.repo.changelog | |
505 | parity = paritygen(self.stripecount) |
|
505 | parity = paritygen(self.stripecount) | |
506 |
|
506 | |||
507 | yield tmpl('search', |
|
507 | yield tmpl('search', | |
508 | query=query, |
|
508 | query=query, | |
509 | node=hex(cl.tip()), |
|
509 | node=hex(cl.tip()), | |
510 | entries=changelist, |
|
510 | entries=changelist, | |
511 | archives=self.archivelist("tip")) |
|
511 | archives=self.archivelist("tip")) | |
512 |
|
512 | |||
513 | def changeset(self, tmpl, ctx): |
|
513 | def changeset(self, tmpl, ctx): | |
514 | n = ctx.node() |
|
514 | n = ctx.node() | |
515 | parents = ctx.parents() |
|
515 | parents = ctx.parents() | |
516 | p1 = parents[0].node() |
|
516 | p1 = parents[0].node() | |
517 |
|
517 | |||
518 | files = [] |
|
518 | files = [] | |
519 | parity = paritygen(self.stripecount) |
|
519 | parity = paritygen(self.stripecount) | |
520 | for f in ctx.files(): |
|
520 | for f in ctx.files(): | |
521 | files.append(tmpl("filenodelink", |
|
521 | files.append(tmpl("filenodelink", | |
522 | node=hex(n), file=f, |
|
522 | node=hex(n), file=f, | |
523 | parity=parity.next())) |
|
523 | parity=parity.next())) | |
524 |
|
524 | |||
525 | def diff(**map): |
|
525 | def diff(**map): | |
526 | yield self.diff(tmpl, p1, n, None) |
|
526 | yield self.diff(tmpl, p1, n, None) | |
527 |
|
527 | |||
528 | yield tmpl('changeset', |
|
528 | yield tmpl('changeset', | |
529 | diff=diff, |
|
529 | diff=diff, | |
530 | rev=ctx.rev(), |
|
530 | rev=ctx.rev(), | |
531 | node=hex(n), |
|
531 | node=hex(n), | |
532 | parent=self.siblings(parents), |
|
532 | parent=self.siblings(parents), | |
533 | child=self.siblings(ctx.children()), |
|
533 | child=self.siblings(ctx.children()), | |
534 | changesettag=self.showtag("changesettag",n), |
|
534 | changesettag=self.showtag("changesettag",n), | |
535 | author=ctx.user(), |
|
535 | author=ctx.user(), | |
536 | desc=ctx.description(), |
|
536 | desc=ctx.description(), | |
537 | date=ctx.date(), |
|
537 | date=ctx.date(), | |
538 | files=files, |
|
538 | files=files, | |
539 | archives=self.archivelist(hex(n)), |
|
539 | archives=self.archivelist(hex(n)), | |
540 | tags=self.nodetagsdict(n), |
|
540 | tags=self.nodetagsdict(n), | |
541 | branches=self.nodebranchdict(ctx)) |
|
541 | branches=self.nodebranchdict(ctx)) | |
542 |
|
542 | |||
543 | def filelog(self, tmpl, fctx): |
|
543 | def filelog(self, tmpl, fctx): | |
544 | f = fctx.path() |
|
544 | f = fctx.path() | |
545 | fl = fctx.filelog() |
|
545 | fl = fctx.filelog() | |
546 | count = fl.count() |
|
546 | count = fl.count() | |
547 | pagelen = self.maxshortchanges |
|
547 | pagelen = self.maxshortchanges | |
548 | pos = fctx.filerev() |
|
548 | pos = fctx.filerev() | |
549 | start = max(0, pos - pagelen + 1) |
|
549 | start = max(0, pos - pagelen + 1) | |
550 | end = min(count, start + pagelen) |
|
550 | end = min(count, start + pagelen) | |
551 | pos = end - 1 |
|
551 | pos = end - 1 | |
552 | parity = paritygen(self.stripecount, offset=start-end) |
|
552 | parity = paritygen(self.stripecount, offset=start-end) | |
553 |
|
553 | |||
554 | def entries(limit=0, **map): |
|
554 | def entries(limit=0, **map): | |
555 | l = [] |
|
555 | l = [] | |
556 |
|
556 | |||
557 | for i in xrange(start, end): |
|
557 | for i in xrange(start, end): | |
558 | ctx = fctx.filectx(i) |
|
558 | ctx = fctx.filectx(i) | |
559 | n = fl.node(i) |
|
559 | n = fl.node(i) | |
560 |
|
560 | |||
561 | l.insert(0, {"parity": parity.next(), |
|
561 | l.insert(0, {"parity": parity.next(), | |
562 | "filerev": i, |
|
562 | "filerev": i, | |
563 | "file": f, |
|
563 | "file": f, | |
564 | "node": hex(ctx.node()), |
|
564 | "node": hex(ctx.node()), | |
565 | "author": ctx.user(), |
|
565 | "author": ctx.user(), | |
566 | "date": ctx.date(), |
|
566 | "date": ctx.date(), | |
567 | "rename": self.renamelink(fl, n), |
|
567 | "rename": self.renamelink(fl, n), | |
568 | "parent": self.siblings(fctx.parents()), |
|
568 | "parent": self.siblings(fctx.parents()), | |
569 | "child": self.siblings(fctx.children()), |
|
569 | "child": self.siblings(fctx.children()), | |
570 | "desc": ctx.description()}) |
|
570 | "desc": ctx.description()}) | |
571 |
|
571 | |||
572 | if limit > 0: |
|
572 | if limit > 0: | |
573 | l = l[:limit] |
|
573 | l = l[:limit] | |
574 |
|
574 | |||
575 | for e in l: |
|
575 | for e in l: | |
576 | yield e |
|
576 | yield e | |
577 |
|
577 | |||
578 | nodefunc = lambda x: fctx.filectx(fileid=x) |
|
578 | nodefunc = lambda x: fctx.filectx(fileid=x) | |
579 | nav = revnavgen(pos, pagelen, count, nodefunc) |
|
579 | nav = revnavgen(pos, pagelen, count, nodefunc) | |
580 | yield tmpl("filelog", file=f, node=hex(fctx.node()), nav=nav, |
|
580 | yield tmpl("filelog", file=f, node=hex(fctx.node()), nav=nav, | |
581 | entries=lambda **x: entries(limit=0, **x), |
|
581 | entries=lambda **x: entries(limit=0, **x), | |
582 | latestentry=lambda **x: entries(limit=1, **x)) |
|
582 | latestentry=lambda **x: entries(limit=1, **x)) | |
583 |
|
583 | |||
584 | def filerevision(self, tmpl, fctx): |
|
584 | def filerevision(self, tmpl, fctx): | |
585 | f = fctx.path() |
|
585 | f = fctx.path() | |
586 | text = fctx.data() |
|
586 | text = fctx.data() | |
587 | fl = fctx.filelog() |
|
587 | fl = fctx.filelog() | |
588 | n = fctx.filenode() |
|
588 | n = fctx.filenode() | |
589 | parity = paritygen(self.stripecount) |
|
589 | parity = paritygen(self.stripecount) | |
590 |
|
590 | |||
591 | mt = mimetypes.guess_type(f)[0] |
|
591 | mt = mimetypes.guess_type(f)[0] | |
592 | rawtext = text |
|
592 | rawtext = text | |
593 | if util.binary(text): |
|
593 | if util.binary(text): | |
594 | mt = mt or 'application/octet-stream' |
|
594 | mt = mt or 'application/octet-stream' | |
595 | text = "(binary:%s)" % mt |
|
595 | text = "(binary:%s)" % mt | |
596 | mt = mt or 'text/plain' |
|
596 | mt = mt or 'text/plain' | |
597 |
|
597 | |||
598 | def lines(): |
|
598 | def lines(): | |
599 | for l, t in enumerate(text.splitlines(1)): |
|
599 | for l, t in enumerate(text.splitlines(1)): | |
600 | yield {"line": t, |
|
600 | yield {"line": t, | |
601 | "linenumber": "% 6d" % (l + 1), |
|
601 | "linenumber": "% 6d" % (l + 1), | |
602 | "parity": parity.next()} |
|
602 | "parity": parity.next()} | |
603 |
|
603 | |||
604 | yield tmpl("filerevision", |
|
604 | yield tmpl("filerevision", | |
605 | file=f, |
|
605 | file=f, | |
606 | path=_up(f), |
|
606 | path=_up(f), | |
607 | text=lines(), |
|
607 | text=lines(), | |
608 | raw=rawtext, |
|
608 | raw=rawtext, | |
609 | mimetype=mt, |
|
609 | mimetype=mt, | |
610 | rev=fctx.rev(), |
|
610 | rev=fctx.rev(), | |
611 | node=hex(fctx.node()), |
|
611 | node=hex(fctx.node()), | |
612 | author=fctx.user(), |
|
612 | author=fctx.user(), | |
613 | date=fctx.date(), |
|
613 | date=fctx.date(), | |
614 | desc=fctx.description(), |
|
614 | desc=fctx.description(), | |
615 | parent=self.siblings(fctx.parents()), |
|
615 | parent=self.siblings(fctx.parents()), | |
616 | child=self.siblings(fctx.children()), |
|
616 | child=self.siblings(fctx.children()), | |
617 | rename=self.renamelink(fl, n), |
|
617 | rename=self.renamelink(fl, n), | |
618 | permissions=fctx.manifest().flags(f)) |
|
618 | permissions=fctx.manifest().flags(f)) | |
619 |
|
619 | |||
620 | def fileannotate(self, tmpl, fctx): |
|
620 | def fileannotate(self, tmpl, fctx): | |
621 | f = fctx.path() |
|
621 | f = fctx.path() | |
622 | n = fctx.filenode() |
|
622 | n = fctx.filenode() | |
623 | fl = fctx.filelog() |
|
623 | fl = fctx.filelog() | |
624 | parity = paritygen(self.stripecount) |
|
624 | parity = paritygen(self.stripecount) | |
625 |
|
625 | |||
626 | def annotate(**map): |
|
626 | def annotate(**map): | |
627 | last = None |
|
627 | last = None | |
628 | for f, l in fctx.annotate(follow=True): |
|
628 | for f, l in fctx.annotate(follow=True): | |
629 | fnode = f.filenode() |
|
629 | fnode = f.filenode() | |
630 | name = self.repo.ui.shortuser(f.user()) |
|
630 | name = self.repo.ui.shortuser(f.user()) | |
631 |
|
631 | |||
632 | if last != fnode: |
|
632 | if last != fnode: | |
633 | last = fnode |
|
633 | last = fnode | |
634 |
|
634 | |||
635 | yield {"parity": parity.next(), |
|
635 | yield {"parity": parity.next(), | |
636 | "node": hex(f.node()), |
|
636 | "node": hex(f.node()), | |
637 | "rev": f.rev(), |
|
637 | "rev": f.rev(), | |
638 | "author": name, |
|
638 | "author": name, | |
639 | "file": f.path(), |
|
639 | "file": f.path(), | |
640 | "line": l} |
|
640 | "line": l} | |
641 |
|
641 | |||
642 | yield tmpl("fileannotate", |
|
642 | yield tmpl("fileannotate", | |
643 | file=f, |
|
643 | file=f, | |
644 | annotate=annotate, |
|
644 | annotate=annotate, | |
645 | path=_up(f), |
|
645 | path=_up(f), | |
646 | rev=fctx.rev(), |
|
646 | rev=fctx.rev(), | |
647 | node=hex(fctx.node()), |
|
647 | node=hex(fctx.node()), | |
648 | author=fctx.user(), |
|
648 | author=fctx.user(), | |
649 | date=fctx.date(), |
|
649 | date=fctx.date(), | |
650 | desc=fctx.description(), |
|
650 | desc=fctx.description(), | |
651 | rename=self.renamelink(fl, n), |
|
651 | rename=self.renamelink(fl, n), | |
652 | parent=self.siblings(fctx.parents()), |
|
652 | parent=self.siblings(fctx.parents()), | |
653 | child=self.siblings(fctx.children()), |
|
653 | child=self.siblings(fctx.children()), | |
654 | permissions=fctx.manifest().flags(f)) |
|
654 | permissions=fctx.manifest().flags(f)) | |
655 |
|
655 | |||
656 | def manifest(self, tmpl, ctx, path): |
|
656 | def manifest(self, tmpl, ctx, path): | |
657 | mf = ctx.manifest() |
|
657 | mf = ctx.manifest() | |
658 | node = ctx.node() |
|
658 | node = ctx.node() | |
659 |
|
659 | |||
660 | files = {} |
|
660 | files = {} | |
661 | parity = paritygen(self.stripecount) |
|
661 | parity = paritygen(self.stripecount) | |
662 |
|
662 | |||
663 | if path and path[-1] != "/": |
|
663 | if path and path[-1] != "/": | |
664 | path += "/" |
|
664 | path += "/" | |
665 | l = len(path) |
|
665 | l = len(path) | |
666 | abspath = "/" + path |
|
666 | abspath = "/" + path | |
667 |
|
667 | |||
668 | for f, n in mf.items(): |
|
668 | for f, n in mf.items(): | |
669 | if f[:l] != path: |
|
669 | if f[:l] != path: | |
670 | continue |
|
670 | continue | |
671 | remain = f[l:] |
|
671 | remain = f[l:] | |
672 | if "/" in remain: |
|
672 | if "/" in remain: | |
673 | short = remain[:remain.index("/") + 1] # bleah |
|
673 | short = remain[:remain.index("/") + 1] # bleah | |
674 | files[short] = (f, None) |
|
674 | files[short] = (f, None) | |
675 | else: |
|
675 | else: | |
676 | short = os.path.basename(remain) |
|
676 | short = os.path.basename(remain) | |
677 | files[short] = (f, n) |
|
677 | files[short] = (f, n) | |
678 |
|
678 | |||
679 | if not files: |
|
679 | if not files: | |
680 | raise ErrorResponse(404, 'Path not found: ' + path) |
|
680 | raise ErrorResponse(404, 'Path not found: ' + path) | |
681 |
|
681 | |||
682 | def filelist(**map): |
|
682 | def filelist(**map): | |
683 | fl = files.keys() |
|
683 | fl = files.keys() | |
684 | fl.sort() |
|
684 | fl.sort() | |
685 | for f in fl: |
|
685 | for f in fl: | |
686 | full, fnode = files[f] |
|
686 | full, fnode = files[f] | |
687 | if not fnode: |
|
687 | if not fnode: | |
688 | continue |
|
688 | continue | |
689 |
|
689 | |||
690 | fctx = ctx.filectx(full) |
|
690 | fctx = ctx.filectx(full) | |
691 | yield {"file": full, |
|
691 | yield {"file": full, | |
692 | "parity": parity.next(), |
|
692 | "parity": parity.next(), | |
693 | "basename": f, |
|
693 | "basename": f, | |
694 | "date": fctx.changectx().date(), |
|
694 | "date": fctx.changectx().date(), | |
695 | "size": fctx.size(), |
|
695 | "size": fctx.size(), | |
696 | "permissions": mf.flags(full)} |
|
696 | "permissions": mf.flags(full)} | |
697 |
|
697 | |||
698 | def dirlist(**map): |
|
698 | def dirlist(**map): | |
699 | fl = files.keys() |
|
699 | fl = files.keys() | |
700 | fl.sort() |
|
700 | fl.sort() | |
701 | for f in fl: |
|
701 | for f in fl: | |
702 | full, fnode = files[f] |
|
702 | full, fnode = files[f] | |
703 | if fnode: |
|
703 | if fnode: | |
704 | continue |
|
704 | continue | |
705 |
|
705 | |||
706 | yield {"parity": parity.next(), |
|
706 | yield {"parity": parity.next(), | |
707 | "path": "%s%s" % (abspath, f), |
|
707 | "path": "%s%s" % (abspath, f), | |
708 | "basename": f[:-1]} |
|
708 | "basename": f[:-1]} | |
709 |
|
709 | |||
710 | yield tmpl("manifest", |
|
710 | yield tmpl("manifest", | |
711 | rev=ctx.rev(), |
|
711 | rev=ctx.rev(), | |
712 | node=hex(node), |
|
712 | node=hex(node), | |
713 | path=abspath, |
|
713 | path=abspath, | |
714 | up=_up(abspath), |
|
714 | up=_up(abspath), | |
715 | upparity=parity.next(), |
|
715 | upparity=parity.next(), | |
716 | fentries=filelist, |
|
716 | fentries=filelist, | |
717 | dentries=dirlist, |
|
717 | dentries=dirlist, | |
718 | archives=self.archivelist(hex(node)), |
|
718 | archives=self.archivelist(hex(node)), | |
719 | tags=self.nodetagsdict(node), |
|
719 | tags=self.nodetagsdict(node), | |
720 | branches=self.nodebranchdict(ctx)) |
|
720 | branches=self.nodebranchdict(ctx)) | |
721 |
|
721 | |||
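How manifest() partitions entries under the requested path: any name with a further "/" becomes a directory entry (stored with a None node), everything else a file entry. A sketch with made-up manifest names:

    path = "mercurial/hgweb/"                      # hypothetical browse path
    names = ["mercurial/hgweb/common.py",
             "mercurial/hgweb/hgweb_mod.py",
             "mercurial/hgweb/templates/map",
             "README"]
    files = {}
    l = len(path)
    for f in names:
        if f[:l] != path:
            continue                               # outside the requested directory
        remain = f[l:]
        if "/" in remain:
            files[remain[:remain.index("/") + 1]] = (f, None)    # sub-directory
        else:
            files[remain] = (f, "<node>")                        # file entry
    assert sorted(files) == ['common.py', 'hgweb_mod.py', 'templates/']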
722 | def tags(self, tmpl): |
|
722 | def tags(self, tmpl): | |
723 | i = self.repo.tagslist() |
|
723 | i = self.repo.tagslist() | |
724 | i.reverse() |
|
724 | i.reverse() | |
725 | parity = paritygen(self.stripecount) |
|
725 | parity = paritygen(self.stripecount) | |
726 |
|
726 | |||
727 | def entries(notip=False,limit=0, **map): |
|
727 | def entries(notip=False,limit=0, **map): | |
728 | count = 0 |
|
728 | count = 0 | |
729 | for k, n in i: |
|
729 | for k, n in i: | |
730 | if notip and k == "tip": |
|
730 | if notip and k == "tip": | |
731 | continue |
|
731 | continue | |
732 | if limit > 0 and count >= limit: |
|
732 | if limit > 0 and count >= limit: | |
733 | continue |
|
733 | continue | |
734 | count = count + 1 |
|
734 | count = count + 1 | |
735 | yield {"parity": parity.next(), |
|
735 | yield {"parity": parity.next(), | |
736 | "tag": k, |
|
736 | "tag": k, | |
737 | "date": self.repo.changectx(n).date(), |
|
737 | "date": self.repo.changectx(n).date(), | |
738 | "node": hex(n)} |
|
738 | "node": hex(n)} | |
739 |
|
739 | |||
740 | yield tmpl("tags", |
|
740 | yield tmpl("tags", | |
741 | node=hex(self.repo.changelog.tip()), |
|
741 | node=hex(self.repo.changelog.tip()), | |
742 | entries=lambda **x: entries(False,0, **x), |
|
742 | entries=lambda **x: entries(False,0, **x), | |
743 | entriesnotip=lambda **x: entries(True,0, **x), |
|
743 | entriesnotip=lambda **x: entries(True,0, **x), | |
744 | latestentry=lambda **x: entries(True,1, **x)) |
|
744 | latestentry=lambda **x: entries(True,1, **x)) | |
745 |
|
745 | |||
746 | def summary(self, tmpl): |
|
746 | def summary(self, tmpl): | |
747 | i = self.repo.tagslist() |
|
747 | i = self.repo.tagslist() | |
748 | i.reverse() |
|
748 | i.reverse() | |
749 |
|
749 | |||
750 | def tagentries(**map): |
|
750 | def tagentries(**map): | |
751 | parity = paritygen(self.stripecount) |
|
751 | parity = paritygen(self.stripecount) | |
752 | count = 0 |
|
752 | count = 0 | |
753 | for k, n in i: |
|
753 | for k, n in i: | |
754 | if k == "tip": # skip tip |
|
754 | if k == "tip": # skip tip | |
755 | continue; |
|
755 | continue; | |
756 |
|
756 | |||
757 | count += 1 |
|
757 | count += 1 | |
758 | if count > 10: # limit to 10 tags |
|
758 | if count > 10: # limit to 10 tags | |
759 | break; |
|
759 | break; | |
760 |
|
760 | |||
761 | yield tmpl("tagentry", |
|
761 | yield tmpl("tagentry", | |
762 | parity=parity.next(), |
|
762 | parity=parity.next(), | |
763 | tag=k, |
|
763 | tag=k, | |
764 | node=hex(n), |
|
764 | node=hex(n), | |
765 | date=self.repo.changectx(n).date()) |
|
765 | date=self.repo.changectx(n).date()) | |
766 |
|
766 | |||
767 |
|
767 | |||
768 | def branches(**map): |
|
768 | def branches(**map): | |
769 | parity = paritygen(self.stripecount) |
|
769 | parity = paritygen(self.stripecount) | |
770 |
|
770 | |||
771 | b = self.repo.branchtags() |
|
771 | b = self.repo.branchtags() | |
772 | l = [(-self.repo.changelog.rev(n), n, t) for t, n in b.items()] |
|
772 | l = [(-self.repo.changelog.rev(n), n, t) for t, n in b.items()] | |
773 | l.sort() |
|
773 | l.sort() | |
774 |
|
774 | |||
775 | for r,n,t in l: |
|
775 | for r,n,t in l: | |
776 | ctx = self.repo.changectx(n) |
|
776 | ctx = self.repo.changectx(n) | |
777 |
|
777 | |||
778 | yield {'parity': parity.next(), |
|
778 | yield {'parity': parity.next(), | |
779 | 'branch': t, |
|
779 | 'branch': t, | |
780 | 'node': hex(n), |
|
780 | 'node': hex(n), | |
781 | 'date': ctx.date()} |
|
781 | 'date': ctx.date()} | |
782 |
|
782 | |||
783 | def changelist(**map): |
|
783 | def changelist(**map): | |
784 | parity = paritygen(self.stripecount, offset=start-end) |
|
784 | parity = paritygen(self.stripecount, offset=start-end) | |
785 | l = [] # build a list in forward order for efficiency |
|
785 | l = [] # build a list in forward order for efficiency | |
786 | for i in xrange(start, end): |
|
786 | for i in xrange(start, end): | |
787 | ctx = self.repo.changectx(i) |
|
787 | ctx = self.repo.changectx(i) | |
788 | n = ctx.node() |
|
788 | n = ctx.node() | |
789 | hn = hex(n) |
|
789 | hn = hex(n) | |
790 |
|
790 | |||
791 | l.insert(0, tmpl( |
|
791 | l.insert(0, tmpl( | |
792 | 'shortlogentry', |
|
792 | 'shortlogentry', | |
793 | parity=parity.next(), |
|
793 | parity=parity.next(), | |
794 | author=ctx.user(), |
|
794 | author=ctx.user(), | |
795 | desc=ctx.description(), |
|
795 | desc=ctx.description(), | |
796 | date=ctx.date(), |
|
796 | date=ctx.date(), | |
797 | rev=i, |
|
797 | rev=i, | |
798 | node=hn, |
|
798 | node=hn, | |
799 | tags=self.nodetagsdict(n), |
|
799 | tags=self.nodetagsdict(n), | |
800 | branches=self.nodebranchdict(ctx))) |
|
800 | branches=self.nodebranchdict(ctx))) | |
801 |
|
801 | |||
802 | yield l |
|
802 | yield l | |
803 |
|
803 | |||
804 | cl = self.repo.changelog |
|
804 | cl = self.repo.changelog | |
805 | count = cl.count() |
|
805 | count = cl.count() | |
806 | start = max(0, count - self.maxchanges) |
|
806 | start = max(0, count - self.maxchanges) | |
807 | end = min(count, start + self.maxchanges) |
|
807 | end = min(count, start + self.maxchanges) | |
808 |
|
808 | |||
809 | yield tmpl("summary", |
|
809 | yield tmpl("summary", | |
810 | desc=self.config("web", "description", "unknown"), |
|
810 | desc=self.config("web", "description", "unknown"), | |
811 | owner=(self.config |
|
811 | owner=get_contact(self.config) or "unknown", | |
812 | self.config("web", "contact") or # deprecated |
|
|||
813 | self.config("web", "author", "unknown")), # also |
|
|||
814 | lastchange=cl.read(cl.tip())[2], |
|
812 | lastchange=cl.read(cl.tip())[2], | |
815 | tags=tagentries, |
|
813 | tags=tagentries, | |
816 | branches=branches, |
|
814 | branches=branches, | |
817 | shortlog=changelist, |
|
815 | shortlog=changelist, | |
818 | node=hex(cl.tip()), |
|
816 | node=hex(cl.tip()), | |
819 | archives=self.archivelist("tip")) |
|
817 | archives=self.archivelist("tip")) | |
820 |
|
818 | |||
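
The hunk above replaces the inline owner lookup with get_contact(), newly imported from common (the hgwebdir_mod.py hunk below makes the same switch). The removed lines in both files show the fallback order the helper is expected to preserve: [ui] username first, then the deprecated [web] contact, then [web] author. The sketch below is consistent with both call sites; the actual helper added to hgweb/common.py is not shown in this diff and may differ.

    def get_contact(config):
        """Return repository contact information, or '' if none is set (sketch).

        `config` is any callable with a ui.config-like signature: hgweb passes
        its bound self.config, hgwebdir's makeindex() passes a local get() helper.
        """
        return (config("ui", "username") or    # preferred
                config("web", "contact") or    # deprecated
                config("web", "author", ""))   # also

Both callers append `or "unknown"` themselves, which is why the sketch returns an empty string rather than a default label.
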
821 | def filediff(self, tmpl, fctx): |
|
819 | def filediff(self, tmpl, fctx): | |
822 | n = fctx.node() |
|
820 | n = fctx.node() | |
823 | path = fctx.path() |
|
821 | path = fctx.path() | |
824 | parents = fctx.parents() |
|
822 | parents = fctx.parents() | |
825 | p1 = parents and parents[0].node() or nullid |
|
823 | p1 = parents and parents[0].node() or nullid | |
826 |
|
824 | |||
827 | def diff(**map): |
|
825 | def diff(**map): | |
828 | yield self.diff(tmpl, p1, n, [path]) |
|
826 | yield self.diff(tmpl, p1, n, [path]) | |
829 |
|
827 | |||
830 | yield tmpl("filediff", |
|
828 | yield tmpl("filediff", | |
831 | file=path, |
|
829 | file=path, | |
832 | node=hex(n), |
|
830 | node=hex(n), | |
833 | rev=fctx.rev(), |
|
831 | rev=fctx.rev(), | |
834 | parent=self.siblings(parents), |
|
832 | parent=self.siblings(parents), | |
835 | child=self.siblings(fctx.children()), |
|
833 | child=self.siblings(fctx.children()), | |
836 | diff=diff) |
|
834 | diff=diff) | |
837 |
|
835 | |||
838 | archive_specs = { |
|
836 | archive_specs = { | |
839 | 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None), |
|
837 | 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None), | |
840 | 'gz': ('application/x-tar', 'tgz', '.tar.gz', None), |
|
838 | 'gz': ('application/x-tar', 'tgz', '.tar.gz', None), | |
841 | 'zip': ('application/zip', 'zip', '.zip', None), |
|
839 | 'zip': ('application/zip', 'zip', '.zip', None), | |
842 | } |
|
840 | } | |
843 |
|
841 | |||
844 | def archive(self, tmpl, req, key, type_): |
|
842 | def archive(self, tmpl, req, key, type_): | |
845 | reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame)) |
|
843 | reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame)) | |
846 | cnode = self.repo.lookup(key) |
|
844 | cnode = self.repo.lookup(key) | |
847 | arch_version = key |
|
845 | arch_version = key | |
848 | if cnode == key or key == 'tip': |
|
846 | if cnode == key or key == 'tip': | |
849 | arch_version = short(cnode) |
|
847 | arch_version = short(cnode) | |
850 | name = "%s-%s" % (reponame, arch_version) |
|
848 | name = "%s-%s" % (reponame, arch_version) | |
851 | mimetype, artype, extension, encoding = self.archive_specs[type_] |
|
849 | mimetype, artype, extension, encoding = self.archive_specs[type_] | |
852 | headers = [('Content-type', mimetype), |
|
850 | headers = [('Content-type', mimetype), | |
853 | ('Content-disposition', 'attachment; filename=%s%s' % |
|
851 | ('Content-disposition', 'attachment; filename=%s%s' % | |
854 | (name, extension))] |
|
852 | (name, extension))] | |
855 | if encoding: |
|
853 | if encoding: | |
856 | headers.append(('Content-encoding', encoding)) |
|
854 | headers.append(('Content-encoding', encoding)) | |
857 | req.header(headers) |
|
855 | req.header(headers) | |
858 | archival.archive(self.repo, req.out, cnode, artype, prefix=name) |
|
856 | archival.archive(self.repo, req.out, cnode, artype, prefix=name) | |
859 |
|
857 | |||
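
archive_specs maps the requested type_ to a (mimetype, archive type, file extension, content encoding) tuple, and archive() turns that tuple into response headers before streaming the archive to req.out. A worked illustration follows; the repository name and short node are made-up values.

    # archive_specs['gz'] is ('application/x-tar', 'tgz', '.tar.gz', None)
    mimetype, artype, extension, encoding = ('application/x-tar', 'tgz', '.tar.gz', None)
    name = 'my-repo-abc123def456'   # "<reponame>-<arch_version>", hypothetical
    headers = [('Content-type', mimetype),
               ('Content-disposition', 'attachment; filename=%s%s' % (name, extension))]
    # encoding is None for all three archive types above, so no
    # Content-encoding header is added before archival.archive() runs.
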
860 | # add tags to things |
|
858 | # add tags to things | |
861 | # tags -> list of changesets corresponding to tags |
|
859 | # tags -> list of changesets corresponding to tags | |
862 | # find tag, changeset, file |
|
860 | # find tag, changeset, file | |
863 |
|
861 | |||
864 | def cleanpath(self, path): |
|
862 | def cleanpath(self, path): | |
865 | path = path.lstrip('/') |
|
863 | path = path.lstrip('/') | |
866 | return util.canonpath(self.repo.root, '', path) |
|
864 | return util.canonpath(self.repo.root, '', path) | |
867 |
|
865 | |||
868 | def changectx(self, req): |
|
866 | def changectx(self, req): | |
869 | if req.form.has_key('node'): |
|
867 | if req.form.has_key('node'): | |
870 | changeid = req.form['node'][0] |
|
868 | changeid = req.form['node'][0] | |
871 | elif req.form.has_key('manifest'): |
|
869 | elif req.form.has_key('manifest'): | |
872 | changeid = req.form['manifest'][0] |
|
870 | changeid = req.form['manifest'][0] | |
873 | else: |
|
871 | else: | |
874 | changeid = self.repo.changelog.count() - 1 |
|
872 | changeid = self.repo.changelog.count() - 1 | |
875 |
|
873 | |||
876 | try: |
|
874 | try: | |
877 | ctx = self.repo.changectx(changeid) |
|
875 | ctx = self.repo.changectx(changeid) | |
878 | except hg.RepoError: |
|
876 | except hg.RepoError: | |
879 | man = self.repo.manifest |
|
877 | man = self.repo.manifest | |
880 | mn = man.lookup(changeid) |
|
878 | mn = man.lookup(changeid) | |
881 | ctx = self.repo.changectx(man.linkrev(mn)) |
|
879 | ctx = self.repo.changectx(man.linkrev(mn)) | |
882 |
|
880 | |||
883 | return ctx |
|
881 | return ctx | |
884 |
|
882 | |||
885 | def filectx(self, req): |
|
883 | def filectx(self, req): | |
886 | path = self.cleanpath(req.form['file'][0]) |
|
884 | path = self.cleanpath(req.form['file'][0]) | |
887 | if req.form.has_key('node'): |
|
885 | if req.form.has_key('node'): | |
888 | changeid = req.form['node'][0] |
|
886 | changeid = req.form['node'][0] | |
889 | else: |
|
887 | else: | |
890 | changeid = req.form['filenode'][0] |
|
888 | changeid = req.form['filenode'][0] | |
891 | try: |
|
889 | try: | |
892 | ctx = self.repo.changectx(changeid) |
|
890 | ctx = self.repo.changectx(changeid) | |
893 | fctx = ctx.filectx(path) |
|
891 | fctx = ctx.filectx(path) | |
894 | except hg.RepoError: |
|
892 | except hg.RepoError: | |
895 | fctx = self.repo.filectx(path, fileid=changeid) |
|
893 | fctx = self.repo.filectx(path, fileid=changeid) | |
896 |
|
894 | |||
897 | return fctx |
|
895 | return fctx | |
898 |
|
896 | |||
899 | def check_perm(self, req, op, default): |
|
897 | def check_perm(self, req, op, default): | |
900 | '''check permission for operation based on user auth. |
|
898 | '''check permission for operation based on user auth. | |
901 | return true if op allowed, else false. |
|
899 | return true if op allowed, else false. | |
902 | default is policy to use if no config given.''' |
|
900 | default is policy to use if no config given.''' | |
903 |
|
901 | |||
904 | user = req.env.get('REMOTE_USER') |
|
902 | user = req.env.get('REMOTE_USER') | |
905 |
|
903 | |||
906 | deny = self.configlist('web', 'deny_' + op) |
|
904 | deny = self.configlist('web', 'deny_' + op) | |
907 | if deny and (not user or deny == ['*'] or user in deny): |
|
905 | if deny and (not user or deny == ['*'] or user in deny): | |
908 | return False |
|
906 | return False | |
909 |
|
907 | |||
910 | allow = self.configlist('web', 'allow_' + op) |
|
908 | allow = self.configlist('web', 'allow_' + op) | |
911 | return (allow and (allow == ['*'] or user in allow)) or default |
|
909 | return (allow and (allow == ['*'] or user in allow)) or default |
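
check_perm() consults the deny list first: if it matches, or the request carries no REMOTE_USER while a deny list exists, access is refused regardless of the allow list. Otherwise access is granted when the allow list matches, and in every remaining case (no allow list, or no match) the caller-supplied default applies. A small standalone restatement of that truth table (the function and user names are ours, for illustration only):

    def check_perm_sketch(user, deny, allow, default):
        # Mirrors the deny-first / allow-second logic of check_perm() above.
        if deny and (not user or deny == ['*'] or user in deny):
            return False
        return (allow and (allow == ['*'] or user in allow)) or default

    assert check_perm_sketch('alice', deny=[], allow=['alice'], default=False)
    assert not check_perm_sketch('bob', deny=[], allow=['alice'], default=False)
    assert not check_perm_sketch('alice', deny=['*'], allow=['alice'], default=True)
    assert check_perm_sketch(None, deny=[], allow=[], default=True)  # falls back to default
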
@@ -1,277 +1,276 b'' | |||||
1 | # hgweb/hgwebdir_mod.py - Web interface for a directory of repositories. |
|
1 | # hgweb/hgwebdir_mod.py - Web interface for a directory of repositories. | |
2 | # |
|
2 | # | |
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> |
|
3 | # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net> | |
4 | # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com> |
|
4 | # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com> | |
5 | # |
|
5 | # | |
6 | # This software may be used and distributed according to the terms |
|
6 | # This software may be used and distributed according to the terms | |
7 | # of the GNU General Public License, incorporated herein by reference. |
|
7 | # of the GNU General Public License, incorporated herein by reference. | |
8 |
|
8 | |||
9 | import os, mimetools, cStringIO |
|
9 | import os, mimetools, cStringIO | |
10 | from mercurial.i18n import gettext as _ |
|
10 | from mercurial.i18n import gettext as _ | |
11 | from mercurial import ui, hg, util, templater |
|
11 | from mercurial import ui, hg, util, templater | |
12 | from common import ErrorResponse, get_mtime, staticfile, style_map, paritygen |
|
12 | from common import ErrorResponse, get_mtime, staticfile, style_map, paritygen, \ | |
|
13 | get_contact | |||
13 | from hgweb_mod import hgweb |
|
14 | from hgweb_mod import hgweb | |
14 | from request import wsgirequest |
|
15 | from request import wsgirequest | |
15 |
|
16 | |||
16 | # This is a stopgap |
|
17 | # This is a stopgap | |
17 | class hgwebdir(object): |
|
18 | class hgwebdir(object): | |
18 | def __init__(self, config, parentui=None): |
|
19 | def __init__(self, config, parentui=None): | |
19 | def cleannames(items): |
|
20 | def cleannames(items): | |
20 | return [(util.pconvert(name).strip('/'), path) |
|
21 | return [(util.pconvert(name).strip('/'), path) | |
21 | for name, path in items] |
|
22 | for name, path in items] | |
22 |
|
23 | |||
23 | self.parentui = parentui or ui.ui(report_untrusted=False, |
|
24 | self.parentui = parentui or ui.ui(report_untrusted=False, | |
24 | interactive = False) |
|
25 | interactive = False) | |
25 | self.motd = None |
|
26 | self.motd = None | |
26 | self.style = None |
|
27 | self.style = None | |
27 | self.stripecount = None |
|
28 | self.stripecount = None | |
28 | self.repos_sorted = ('name', False) |
|
29 | self.repos_sorted = ('name', False) | |
29 | if isinstance(config, (list, tuple)): |
|
30 | if isinstance(config, (list, tuple)): | |
30 | self.repos = cleannames(config) |
|
31 | self.repos = cleannames(config) | |
31 | self.repos_sorted = ('', False) |
|
32 | self.repos_sorted = ('', False) | |
32 | elif isinstance(config, dict): |
|
33 | elif isinstance(config, dict): | |
33 | self.repos = cleannames(config.items()) |
|
34 | self.repos = cleannames(config.items()) | |
34 | self.repos.sort() |
|
35 | self.repos.sort() | |
35 | else: |
|
36 | else: | |
36 | if isinstance(config, util.configparser): |
|
37 | if isinstance(config, util.configparser): | |
37 | cp = config |
|
38 | cp = config | |
38 | else: |
|
39 | else: | |
39 | cp = util.configparser() |
|
40 | cp = util.configparser() | |
40 | cp.read(config) |
|
41 | cp.read(config) | |
41 | self.repos = [] |
|
42 | self.repos = [] | |
42 | if cp.has_section('web'): |
|
43 | if cp.has_section('web'): | |
43 | if cp.has_option('web', 'motd'): |
|
44 | if cp.has_option('web', 'motd'): | |
44 | self.motd = cp.get('web', 'motd') |
|
45 | self.motd = cp.get('web', 'motd') | |
45 | if cp.has_option('web', 'style'): |
|
46 | if cp.has_option('web', 'style'): | |
46 | self.style = cp.get('web', 'style') |
|
47 | self.style = cp.get('web', 'style') | |
47 | if cp.has_option('web', 'stripes'): |
|
48 | if cp.has_option('web', 'stripes'): | |
48 | self.stripecount = int(cp.get('web', 'stripes')) |
|
49 | self.stripecount = int(cp.get('web', 'stripes')) | |
49 | if cp.has_section('paths'): |
|
50 | if cp.has_section('paths'): | |
50 | self.repos.extend(cleannames(cp.items('paths'))) |
|
51 | self.repos.extend(cleannames(cp.items('paths'))) | |
51 | if cp.has_section('collections'): |
|
52 | if cp.has_section('collections'): | |
52 | for prefix, root in cp.items('collections'): |
|
53 | for prefix, root in cp.items('collections'): | |
53 | for path in util.walkrepos(root): |
|
54 | for path in util.walkrepos(root): | |
54 | repo = os.path.normpath(path) |
|
55 | repo = os.path.normpath(path) | |
55 | name = repo |
|
56 | name = repo | |
56 | if name.startswith(prefix): |
|
57 | if name.startswith(prefix): | |
57 | name = name[len(prefix):] |
|
58 | name = name[len(prefix):] | |
58 | self.repos.append((name.lstrip(os.sep), repo)) |
|
59 | self.repos.append((name.lstrip(os.sep), repo)) | |
59 | self.repos.sort() |
|
60 | self.repos.sort() | |
60 |
|
61 | |||
61 | def run(self): |
|
62 | def run(self): | |
62 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): |
|
63 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): | |
63 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") |
|
64 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") | |
64 | import mercurial.hgweb.wsgicgi as wsgicgi |
|
65 | import mercurial.hgweb.wsgicgi as wsgicgi | |
65 | wsgicgi.launch(self) |
|
66 | wsgicgi.launch(self) | |
66 |
|
67 | |||
67 | def __call__(self, env, respond): |
|
68 | def __call__(self, env, respond): | |
68 | req = wsgirequest(env, respond) |
|
69 | req = wsgirequest(env, respond) | |
69 | self.run_wsgi(req) |
|
70 | self.run_wsgi(req) | |
70 | return req |
|
71 | return req | |
71 |
|
72 | |||
72 | def run_wsgi(self, req): |
|
73 | def run_wsgi(self, req): | |
73 |
|
74 | |||
74 | try: |
|
75 | try: | |
75 | try: |
|
76 | try: | |
76 |
|
77 | |||
77 | virtual = req.env.get("PATH_INFO", "").strip('/') |
|
78 | virtual = req.env.get("PATH_INFO", "").strip('/') | |
78 |
|
79 | |||
79 | # a static file |
|
80 | # a static file | |
80 | if virtual.startswith('static/') or 'static' in req.form: |
|
81 | if virtual.startswith('static/') or 'static' in req.form: | |
81 | static = os.path.join(templater.templatepath(), 'static') |
|
82 | static = os.path.join(templater.templatepath(), 'static') | |
82 | if virtual.startswith('static/'): |
|
83 | if virtual.startswith('static/'): | |
83 | fname = virtual[7:] |
|
84 | fname = virtual[7:] | |
84 | else: |
|
85 | else: | |
85 | fname = req.form['static'][0] |
|
86 | fname = req.form['static'][0] | |
86 | req.write(staticfile(static, fname, req)) |
|
87 | req.write(staticfile(static, fname, req)) | |
87 | return |
|
88 | return | |
88 |
|
89 | |||
89 | # top-level index |
|
90 | # top-level index | |
90 | elif not virtual: |
|
91 | elif not virtual: | |
91 | tmpl = self.templater(req) |
|
92 | tmpl = self.templater(req) | |
92 | self.makeindex(req, tmpl) |
|
93 | self.makeindex(req, tmpl) | |
93 | return |
|
94 | return | |
94 |
|
95 | |||
95 | # nested indexes and hgwebs |
|
96 | # nested indexes and hgwebs | |
96 | repos = dict(self.repos) |
|
97 | repos = dict(self.repos) | |
97 | while virtual: |
|
98 | while virtual: | |
98 | real = repos.get(virtual) |
|
99 | real = repos.get(virtual) | |
99 | if real: |
|
100 | if real: | |
100 | req.env['REPO_NAME'] = virtual |
|
101 | req.env['REPO_NAME'] = virtual | |
101 | try: |
|
102 | try: | |
102 | repo = hg.repository(self.parentui, real) |
|
103 | repo = hg.repository(self.parentui, real) | |
103 | hgweb(repo).run_wsgi(req) |
|
104 | hgweb(repo).run_wsgi(req) | |
104 | return |
|
105 | return | |
105 | except IOError, inst: |
|
106 | except IOError, inst: | |
106 | raise ErrorResponse(500, inst.strerror) |
|
107 | raise ErrorResponse(500, inst.strerror) | |
107 | except hg.RepoError, inst: |
|
108 | except hg.RepoError, inst: | |
108 | raise ErrorResponse(500, str(inst)) |
|
109 | raise ErrorResponse(500, str(inst)) | |
109 |
|
110 | |||
110 | # browse subdirectories |
|
111 | # browse subdirectories | |
111 | subdir = virtual + '/' |
|
112 | subdir = virtual + '/' | |
112 | if [r for r in repos if r.startswith(subdir)]: |
|
113 | if [r for r in repos if r.startswith(subdir)]: | |
113 | tmpl = self.templater(req) |
|
114 | tmpl = self.templater(req) | |
114 | self.makeindex(req, tmpl, subdir) |
|
115 | self.makeindex(req, tmpl, subdir) | |
115 | return |
|
116 | return | |
116 |
|
117 | |||
117 | up = virtual.rfind('/') |
|
118 | up = virtual.rfind('/') | |
118 | if up < 0: |
|
119 | if up < 0: | |
119 | break |
|
120 | break | |
120 | virtual = virtual[:up] |
|
121 | virtual = virtual[:up] | |
121 |
|
122 | |||
122 | # prefixes not found |
|
123 | # prefixes not found | |
123 | tmpl = self.templater(req) |
|
124 | tmpl = self.templater(req) | |
124 | req.respond(404, tmpl("notfound", repo=virtual)) |
|
125 | req.respond(404, tmpl("notfound", repo=virtual)) | |
125 |
|
126 | |||
126 | except ErrorResponse, err: |
|
127 | except ErrorResponse, err: | |
127 | tmpl = self.templater(req) |
|
128 | tmpl = self.templater(req) | |
128 | req.respond(err.code, tmpl('error', error=err.message or '')) |
|
129 | req.respond(err.code, tmpl('error', error=err.message or '')) | |
129 | finally: |
|
130 | finally: | |
130 | tmpl = None |
|
131 | tmpl = None | |
131 |
|
132 | |||
132 | def makeindex(self, req, tmpl, subdir=""): |
|
133 | def makeindex(self, req, tmpl, subdir=""): | |
133 |
|
134 | |||
134 | def archivelist(ui, nodeid, url): |
|
135 | def archivelist(ui, nodeid, url): | |
135 | allowed = ui.configlist("web", "allow_archive", untrusted=True) |
|
136 | allowed = ui.configlist("web", "allow_archive", untrusted=True) | |
136 | for i in [('zip', '.zip'), ('gz', '.tar.gz'), ('bz2', '.tar.bz2')]: |
|
137 | for i in [('zip', '.zip'), ('gz', '.tar.gz'), ('bz2', '.tar.bz2')]: | |
137 | if i[0] in allowed or ui.configbool("web", "allow" + i[0], |
|
138 | if i[0] in allowed or ui.configbool("web", "allow" + i[0], | |
138 | untrusted=True): |
|
139 | untrusted=True): | |
139 | yield {"type" : i[0], "extension": i[1], |
|
140 | yield {"type" : i[0], "extension": i[1], | |
140 | "node": nodeid, "url": url} |
|
141 | "node": nodeid, "url": url} | |
141 |
|
142 | |||
142 | def entries(sortcolumn="", descending=False, subdir="", **map): |
|
143 | def entries(sortcolumn="", descending=False, subdir="", **map): | |
143 | def sessionvars(**map): |
|
144 | def sessionvars(**map): | |
144 | fields = [] |
|
145 | fields = [] | |
145 | if req.form.has_key('style'): |
|
146 | if req.form.has_key('style'): | |
146 | style = req.form['style'][0] |
|
147 | style = req.form['style'][0] | |
147 | if style != get('web', 'style', ''): |
|
148 | if style != get('web', 'style', ''): | |
148 | fields.append(('style', style)) |
|
149 | fields.append(('style', style)) | |
149 |
|
150 | |||
150 | separator = url[-1] == '?' and ';' or '?' |
|
151 | separator = url[-1] == '?' and ';' or '?' | |
151 | for name, value in fields: |
|
152 | for name, value in fields: | |
152 | yield dict(name=name, value=value, separator=separator) |
|
153 | yield dict(name=name, value=value, separator=separator) | |
153 | separator = ';' |
|
154 | separator = ';' | |
154 |
|
155 | |||
155 | rows = [] |
|
156 | rows = [] | |
156 | parity = paritygen(self.stripecount) |
|
157 | parity = paritygen(self.stripecount) | |
157 | for name, path in self.repos: |
|
158 | for name, path in self.repos: | |
158 | if not name.startswith(subdir): |
|
159 | if not name.startswith(subdir): | |
159 | continue |
|
160 | continue | |
160 | name = name[len(subdir):] |
|
161 | name = name[len(subdir):] | |
161 |
|
162 | |||
162 | u = ui.ui(parentui=self.parentui) |
|
163 | u = ui.ui(parentui=self.parentui) | |
163 | try: |
|
164 | try: | |
164 | u.readconfig(os.path.join(path, '.hg', 'hgrc')) |
|
165 | u.readconfig(os.path.join(path, '.hg', 'hgrc')) | |
165 | except Exception, e: |
|
166 | except Exception, e: | |
166 | u.warn(_('error reading %s/.hg/hgrc: %s\n' % (path, e))) |
|
167 | u.warn(_('error reading %s/.hg/hgrc: %s\n' % (path, e))) | |
167 | continue |
|
168 | continue | |
168 | def get(section, name, default=None): |
|
169 | def get(section, name, default=None): | |
169 | return u.config(section, name, default, untrusted=True) |
|
170 | return u.config(section, name, default, untrusted=True) | |
170 |
|
171 | |||
171 | if u.configbool("web", "hidden", untrusted=True): |
|
172 | if u.configbool("web", "hidden", untrusted=True): | |
172 | continue |
|
173 | continue | |
173 |
|
174 | |||
174 | parts = [req.env['PATH_INFO'], name] |
|
175 | parts = [req.env['PATH_INFO'], name] | |
175 | if req.env['SCRIPT_NAME']: |
|
176 | if req.env['SCRIPT_NAME']: | |
176 | parts.insert(0, req.env['SCRIPT_NAME']) |
|
177 | parts.insert(0, req.env['SCRIPT_NAME']) | |
177 | url = ('/'.join(parts).replace("//", "/")) + '/' |
|
178 | url = ('/'.join(parts).replace("//", "/")) + '/' | |
178 |
|
179 | |||
179 | # update time with local timezone |
|
180 | # update time with local timezone | |
180 | try: |
|
181 | try: | |
181 | d = (get_mtime(path), util.makedate()[1]) |
|
182 | d = (get_mtime(path), util.makedate()[1]) | |
182 | except OSError: |
|
183 | except OSError: | |
183 | continue |
|
184 | continue | |
184 |
|
185 | |||
185 | contact = (get("ui", "username") or # preferred |
|
186 | contact = get_contact(get) | |
186 | get("web", "contact") or # deprecated |
|
|||
187 | get("web", "author", "")) # also |
|
|||
188 | description = get("web", "description", "") |
|
187 | description = get("web", "description", "") | |
189 | name = get("web", "name", name) |
|
188 | name = get("web", "name", name) | |
190 | row = dict(contact=contact or "unknown", |
|
189 | row = dict(contact=contact or "unknown", | |
191 | contact_sort=contact.upper() or "unknown", |
|
190 | contact_sort=contact.upper() or "unknown", | |
192 | name=name, |
|
191 | name=name, | |
193 | name_sort=name, |
|
192 | name_sort=name, | |
194 | url=url, |
|
193 | url=url, | |
195 | description=description or "unknown", |
|
194 | description=description or "unknown", | |
196 | description_sort=description.upper() or "unknown", |
|
195 | description_sort=description.upper() or "unknown", | |
197 | lastchange=d, |
|
196 | lastchange=d, | |
198 | lastchange_sort=d[1]-d[0], |
|
197 | lastchange_sort=d[1]-d[0], | |
199 | sessionvars=sessionvars, |
|
198 | sessionvars=sessionvars, | |
200 | archives=archivelist(u, "tip", url)) |
|
199 | archives=archivelist(u, "tip", url)) | |
201 | if (not sortcolumn |
|
200 | if (not sortcolumn | |
202 | or (sortcolumn, descending) == self.repos_sorted): |
|
201 | or (sortcolumn, descending) == self.repos_sorted): | |
203 | # fast path for unsorted output |
|
202 | # fast path for unsorted output | |
204 | row['parity'] = parity.next() |
|
203 | row['parity'] = parity.next() | |
205 | yield row |
|
204 | yield row | |
206 | else: |
|
205 | else: | |
207 | rows.append((row["%s_sort" % sortcolumn], row)) |
|
206 | rows.append((row["%s_sort" % sortcolumn], row)) | |
208 | if rows: |
|
207 | if rows: | |
209 | rows.sort() |
|
208 | rows.sort() | |
210 | if descending: |
|
209 | if descending: | |
211 | rows.reverse() |
|
210 | rows.reverse() | |
212 | for key, row in rows: |
|
211 | for key, row in rows: | |
213 | row['parity'] = parity.next() |
|
212 | row['parity'] = parity.next() | |
214 | yield row |
|
213 | yield row | |
215 |
|
214 | |||
216 | sortable = ["name", "description", "contact", "lastchange"] |
|
215 | sortable = ["name", "description", "contact", "lastchange"] | |
217 | sortcolumn, descending = self.repos_sorted |
|
216 | sortcolumn, descending = self.repos_sorted | |
218 | if req.form.has_key('sort'): |
|
217 | if req.form.has_key('sort'): | |
219 | sortcolumn = req.form['sort'][0] |
|
218 | sortcolumn = req.form['sort'][0] | |
220 | descending = sortcolumn.startswith('-') |
|
219 | descending = sortcolumn.startswith('-') | |
221 | if descending: |
|
220 | if descending: | |
222 | sortcolumn = sortcolumn[1:] |
|
221 | sortcolumn = sortcolumn[1:] | |
223 | if sortcolumn not in sortable: |
|
222 | if sortcolumn not in sortable: | |
224 | sortcolumn = "" |
|
223 | sortcolumn = "" | |
225 |
|
224 | |||
226 | sort = [("sort_%s" % column, |
|
225 | sort = [("sort_%s" % column, | |
227 | "%s%s" % ((not descending and column == sortcolumn) |
|
226 | "%s%s" % ((not descending and column == sortcolumn) | |
228 | and "-" or "", column)) |
|
227 | and "-" or "", column)) | |
229 | for column in sortable] |
|
228 | for column in sortable] | |
230 | req.write(tmpl("index", entries=entries, subdir=subdir, |
|
229 | req.write(tmpl("index", entries=entries, subdir=subdir, | |
231 | sortcolumn=sortcolumn, descending=descending, |
|
230 | sortcolumn=sortcolumn, descending=descending, | |
232 | **dict(sort))) |
|
231 | **dict(sort))) | |
233 |
|
232 | |||
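
makeindex() accepts a ?sort= query parameter whose value may carry a leading '-' for descending order and must name one of the sortable columns; anything else leaves sortcolumn empty so the default order from self.repos_sorted is kept. A small standalone restatement of that parsing (the function name parse_sort is ours):

    def parse_sort(arg, sortable=("name", "description", "contact", "lastchange")):
        """Mirror makeindex()'s handling of the ?sort= parameter (sketch)."""
        descending = arg.startswith('-')
        if descending:
            arg = arg[1:]
        if arg not in sortable:
            arg = ""          # unknown column: keep the default repository order
        return arg, descending

    assert parse_sort("lastchange") == ("lastchange", False)
    assert parse_sort("-contact") == ("contact", True)
    assert parse_sort("bogus") == ("", False)
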
234 | def templater(self, req): |
|
233 | def templater(self, req): | |
235 |
|
234 | |||
236 | def header(**map): |
|
235 | def header(**map): | |
237 | header_file = cStringIO.StringIO( |
|
236 | header_file = cStringIO.StringIO( | |
238 | ''.join(tmpl("header", encoding=util._encoding, **map))) |
|
237 | ''.join(tmpl("header", encoding=util._encoding, **map))) | |
239 | msg = mimetools.Message(header_file, 0) |
|
238 | msg = mimetools.Message(header_file, 0) | |
240 | req.header(msg.items()) |
|
239 | req.header(msg.items()) | |
241 | yield header_file.read() |
|
240 | yield header_file.read() | |
242 |
|
241 | |||
243 | def footer(**map): |
|
242 | def footer(**map): | |
244 | yield tmpl("footer", **map) |
|
243 | yield tmpl("footer", **map) | |
245 |
|
244 | |||
246 | def motd(**map): |
|
245 | def motd(**map): | |
247 | if self.motd is not None: |
|
246 | if self.motd is not None: | |
248 | yield self.motd |
|
247 | yield self.motd | |
249 | else: |
|
248 | else: | |
250 | yield config('web', 'motd', '') |
|
249 | yield config('web', 'motd', '') | |
251 |
|
250 | |||
252 | def config(section, name, default=None, untrusted=True): |
|
251 | def config(section, name, default=None, untrusted=True): | |
253 | return self.parentui.config(section, name, default, untrusted) |
|
252 | return self.parentui.config(section, name, default, untrusted) | |
254 |
|
253 | |||
255 | url = req.env.get('SCRIPT_NAME', '') |
|
254 | url = req.env.get('SCRIPT_NAME', '') | |
256 | if not url.endswith('/'): |
|
255 | if not url.endswith('/'): | |
257 | url += '/' |
|
256 | url += '/' | |
258 |
|
257 | |||
259 | staticurl = config('web', 'staticurl') or url + 'static/' |
|
258 | staticurl = config('web', 'staticurl') or url + 'static/' | |
260 | if not staticurl.endswith('/'): |
|
259 | if not staticurl.endswith('/'): | |
261 | staticurl += '/' |
|
260 | staticurl += '/' | |
262 |
|
261 | |||
263 | style = self.style |
|
262 | style = self.style | |
264 | if style is None: |
|
263 | if style is None: | |
265 | style = config('web', 'style', '') |
|
264 | style = config('web', 'style', '') | |
266 | if req.form.has_key('style'): |
|
265 | if req.form.has_key('style'): | |
267 | style = req.form['style'][0] |
|
266 | style = req.form['style'][0] | |
268 | if self.stripecount is None: |
|
267 | if self.stripecount is None: | |
269 | self.stripecount = int(config('web', 'stripes', 1)) |
|
268 | self.stripecount = int(config('web', 'stripes', 1)) | |
270 | mapfile = style_map(templater.templatepath(), style) |
|
269 | mapfile = style_map(templater.templatepath(), style) | |
271 | tmpl = templater.templater(mapfile, templater.common_filters, |
|
270 | tmpl = templater.templater(mapfile, templater.common_filters, | |
272 | defaults={"header": header, |
|
271 | defaults={"header": header, | |
273 | "footer": footer, |
|
272 | "footer": footer, | |
274 | "motd": motd, |
|
273 | "motd": motd, | |
275 | "url": url, |
|
274 | "url": url, | |
276 | "staticurl": staticurl}) |
|
275 | "staticurl": staticurl}) | |
277 | return tmpl |
|
276 | return tmpl |
@@ -1,582 +1,583 b'' | |||||
1 | #!/usr/bin/env python |
|
1 | #!/usr/bin/env python | |
2 | # |
|
2 | # | |
3 | # run-tests.py - Run a set of tests on Mercurial |
|
3 | # run-tests.py - Run a set of tests on Mercurial | |
4 | # |
|
4 | # | |
5 | # Copyright 2006 Matt Mackall <mpm@selenic.com> |
|
5 | # Copyright 2006 Matt Mackall <mpm@selenic.com> | |
6 | # |
|
6 | # | |
7 | # This software may be used and distributed according to the terms |
|
7 | # This software may be used and distributed according to the terms | |
8 | # of the GNU General Public License, incorporated herein by reference. |
|
8 | # of the GNU General Public License, incorporated herein by reference. | |
9 |
|
9 | |||
10 | import difflib |
|
10 | import difflib | |
11 | import errno |
|
11 | import errno | |
12 | import optparse |
|
12 | import optparse | |
13 | import os |
|
13 | import os | |
14 | import popen2 |
|
14 | import popen2 | |
15 | import re |
|
15 | import re | |
16 | import shutil |
|
16 | import shutil | |
17 | import signal |
|
17 | import signal | |
18 | import sys |
|
18 | import sys | |
19 | import tempfile |
|
19 | import tempfile | |
20 | import time |
|
20 | import time | |
21 |
|
21 | |||
22 | # reserved exit code to skip test (used by hghave) |
|
22 | # reserved exit code to skip test (used by hghave) | |
23 | SKIPPED_STATUS = 80 |
|
23 | SKIPPED_STATUS = 80 | |
24 | SKIPPED_PREFIX = 'skipped: ' |
|
24 | SKIPPED_PREFIX = 'skipped: ' | |
25 |
|
25 | |||
26 | required_tools = ["python", "diff", "grep", "unzip", "gunzip", "bunzip2", "sed"] |
|
26 | required_tools = ["python", "diff", "grep", "unzip", "gunzip", "bunzip2", "sed"] | |
27 |
|
27 | |||
28 | parser = optparse.OptionParser("%prog [options] [tests]") |
|
28 | parser = optparse.OptionParser("%prog [options] [tests]") | |
29 | parser.add_option("-C", "--annotate", action="store_true", |
|
29 | parser.add_option("-C", "--annotate", action="store_true", | |
30 | help="output files annotated with coverage") |
|
30 | help="output files annotated with coverage") | |
31 | parser.add_option("--child", type="int", |
|
31 | parser.add_option("--child", type="int", | |
32 | help="run as child process, summary to given fd") |
|
32 | help="run as child process, summary to given fd") | |
33 | parser.add_option("-c", "--cover", action="store_true", |
|
33 | parser.add_option("-c", "--cover", action="store_true", | |
34 | help="print a test coverage report") |
|
34 | help="print a test coverage report") | |
35 | parser.add_option("-f", "--first", action="store_true", |
|
35 | parser.add_option("-f", "--first", action="store_true", | |
36 | help="exit on the first test failure") |
|
36 | help="exit on the first test failure") | |
37 | parser.add_option("-i", "--interactive", action="store_true", |
|
37 | parser.add_option("-i", "--interactive", action="store_true", | |
38 | help="prompt to accept changed output") |
|
38 | help="prompt to accept changed output") | |
39 | parser.add_option("-j", "--jobs", type="int", |
|
39 | parser.add_option("-j", "--jobs", type="int", | |
40 | help="number of jobs to run in parallel") |
|
40 | help="number of jobs to run in parallel") | |
41 | parser.add_option("-R", "--restart", action="store_true", |
|
41 | parser.add_option("-R", "--restart", action="store_true", | |
42 | help="restart at last error") |
|
42 | help="restart at last error") | |
43 | parser.add_option("-p", "--port", type="int", |
|
43 | parser.add_option("-p", "--port", type="int", | |
44 | help="port on which servers should listen") |
|
44 | help="port on which servers should listen") | |
45 | parser.add_option("-r", "--retest", action="store_true", |
|
45 | parser.add_option("-r", "--retest", action="store_true", | |
46 | help="retest failed tests") |
|
46 | help="retest failed tests") | |
47 | parser.add_option("-s", "--cover_stdlib", action="store_true", |
|
47 | parser.add_option("-s", "--cover_stdlib", action="store_true", | |
48 | help="print a test coverage report inc. standard libraries") |
|
48 | help="print a test coverage report inc. standard libraries") | |
49 | parser.add_option("-t", "--timeout", type="int", |
|
49 | parser.add_option("-t", "--timeout", type="int", | |
50 | help="kill errant tests after TIMEOUT seconds") |
|
50 | help="kill errant tests after TIMEOUT seconds") | |
51 | parser.add_option("--tmpdir", type="string", |
|
51 | parser.add_option("--tmpdir", type="string", | |
52 | help="run tests in the given temporary directory") |
|
52 | help="run tests in the given temporary directory") | |
53 | parser.add_option("-v", "--verbose", action="store_true", |
|
53 | parser.add_option("-v", "--verbose", action="store_true", | |
54 | help="output verbose messages") |
|
54 | help="output verbose messages") | |
55 | parser.add_option("--with-hg", type="string", |
|
55 | parser.add_option("--with-hg", type="string", | |
56 | help="test existing install at given location") |
|
56 | help="test existing install at given location") | |
57 |
|
57 | |||
58 | parser.set_defaults(jobs=1, port=20059, timeout=180) |
|
58 | parser.set_defaults(jobs=1, port=20059, timeout=180) | |
59 | (options, args) = parser.parse_args() |
|
59 | (options, args) = parser.parse_args() | |
60 | verbose = options.verbose |
|
60 | verbose = options.verbose | |
61 | coverage = options.cover or options.cover_stdlib or options.annotate |
|
61 | coverage = options.cover or options.cover_stdlib or options.annotate | |
62 | python = sys.executable |
|
62 | python = sys.executable | |
63 |
|
63 | |||
64 | if options.jobs < 1: |
|
64 | if options.jobs < 1: | |
65 | print >> sys.stderr, 'ERROR: -j/--jobs must be positive' |
|
65 | print >> sys.stderr, 'ERROR: -j/--jobs must be positive' | |
66 | sys.exit(1) |
|
66 | sys.exit(1) | |
67 | if options.interactive and options.jobs > 1: |
|
67 | if options.interactive and options.jobs > 1: | |
68 | print >> sys.stderr, 'ERROR: cannot mix -interactive and --jobs > 1' |
|
68 | print >> sys.stderr, 'ERROR: cannot mix -interactive and --jobs > 1' | |
69 | sys.exit(1) |
|
69 | sys.exit(1) | |
70 |
|
70 | |||
71 | def vlog(*msg): |
|
71 | def vlog(*msg): | |
72 | if verbose: |
|
72 | if verbose: | |
73 | for m in msg: |
|
73 | for m in msg: | |
74 | print m, |
|
74 | print m, | |
75 |
|
75 | |||
76 |
|
76 | |||
77 | def splitnewlines(text): |
|
77 | def splitnewlines(text): | |
78 | '''like str.splitlines, but only split on newlines. |
|
78 | '''like str.splitlines, but only split on newlines. | |
79 | keep line endings.''' |
|
79 | keep line endings.''' | |
80 | i = 0 |
|
80 | i = 0 | |
81 | lines = [] |
|
81 | lines = [] | |
82 | while True: |
|
82 | while True: | |
83 | n = text.find('\n', i) |
|
83 | n = text.find('\n', i) | |
84 | if n == -1: |
|
84 | if n == -1: | |
85 | last = text[i:] |
|
85 | last = text[i:] | |
86 | if last: |
|
86 | if last: | |
87 | lines.append(last) |
|
87 | lines.append(last) | |
88 | return lines |
|
88 | return lines | |
89 | lines.append(text[i:n+1]) |
|
89 | lines.append(text[i:n+1]) | |
90 | i = n + 1 |
|
90 | i = n + 1 | |
91 |
|
91 | |||
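
A quick illustration of the difference spelled out in the docstring: splitnewlines() keeps the trailing newline on every line, while str.splitlines() strips it, and the runner relies on that when it later compares captured output against the .out reference files.

    assert splitnewlines("a\nb\nc") == ["a\n", "b\n", "c"]
    assert "a\nb\nc".splitlines() == ["a", "b", "c"]
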
92 | def extract_missing_features(lines): |
|
92 | def extract_missing_features(lines): | |
93 | '''Extract missing/unknown features log lines as a list''' |
|
93 | '''Extract missing/unknown features log lines as a list''' | |
94 | missing = [] |
|
94 | missing = [] | |
95 | for line in lines: |
|
95 | for line in lines: | |
96 | if not line.startswith(SKIPPED_PREFIX): |
|
96 | if not line.startswith(SKIPPED_PREFIX): | |
97 | continue |
|
97 | continue | |
98 | line = line.splitlines()[0] |
|
98 | line = line.splitlines()[0] | |
99 | missing.append(line[len(SKIPPED_PREFIX):]) |
|
99 | missing.append(line[len(SKIPPED_PREFIX):]) | |
100 |
|
100 | |||
101 | return missing |
|
101 | return missing | |
102 |
|
102 | |||
103 | def show_diff(expected, output): |
|
103 | def show_diff(expected, output): | |
104 | for line in difflib.unified_diff(expected, output, |
|
104 | for line in difflib.unified_diff(expected, output, | |
105 | "Expected output", "Test output"): |
|
105 | "Expected output", "Test output"): | |
106 | sys.stdout.write(line) |
|
106 | sys.stdout.write(line) | |
107 |
|
107 | |||
108 | def find_program(program): |
|
108 | def find_program(program): | |
109 | """Search PATH for a executable program""" |
|
109 | """Search PATH for a executable program""" | |
110 | for p in os.environ.get('PATH', os.defpath).split(os.pathsep): |
|
110 | for p in os.environ.get('PATH', os.defpath).split(os.pathsep): | |
111 | name = os.path.join(p, program) |
|
111 | name = os.path.join(p, program) | |
112 | if os.access(name, os.X_OK): |
|
112 | if os.access(name, os.X_OK): | |
113 | return name |
|
113 | return name | |
114 | return None |
|
114 | return None | |
115 |
|
115 | |||
116 | def check_required_tools(): |
|
116 | def check_required_tools(): | |
117 | # Before we go any further, check for pre-requisite tools |
|
117 | # Before we go any further, check for pre-requisite tools | |
118 | # stuff from coreutils (cat, rm, etc) is not tested |
|
118 | # stuff from coreutils (cat, rm, etc) is not tested | |
119 | for p in required_tools: |
|
119 | for p in required_tools: | |
120 | if os.name == 'nt': |
|
120 | if os.name == 'nt': | |
121 | p += '.exe' |
|
121 | p += '.exe' | |
122 | found = find_program(p) |
|
122 | found = find_program(p) | |
123 | if found: |
|
123 | if found: | |
124 | vlog("# Found prerequisite", p, "at", found) |
|
124 | vlog("# Found prerequisite", p, "at", found) | |
125 | else: |
|
125 | else: | |
126 | print "WARNING: Did not find prerequisite tool: "+p |
|
126 | print "WARNING: Did not find prerequisite tool: "+p | |
127 |
|
127 | |||
128 | def cleanup_exit(): |
|
128 | def cleanup_exit(): | |
129 | if verbose: |
|
129 | if verbose: | |
130 | print "# Cleaning up HGTMP", HGTMP |
|
130 | print "# Cleaning up HGTMP", HGTMP | |
131 | shutil.rmtree(HGTMP, True) |
|
131 | shutil.rmtree(HGTMP, True) | |
132 |
|
132 | |||
133 | def use_correct_python(): |
|
133 | def use_correct_python(): | |
134 | # some tests run python interpreter. they must use same |
|
134 | # some tests run python interpreter. they must use same | |
135 | # interpreter we use or bad things will happen. |
|
135 | # interpreter we use or bad things will happen. | |
136 | exedir, exename = os.path.split(sys.executable) |
|
136 | exedir, exename = os.path.split(sys.executable) | |
137 | if exename == 'python': |
|
137 | if exename == 'python': | |
138 | path = find_program('python') |
|
138 | path = find_program('python') | |
139 | if os.path.dirname(path) == exedir: |
|
139 | if os.path.dirname(path) == exedir: | |
140 | return |
|
140 | return | |
141 | vlog('# Making python executable in test path use correct Python') |
|
141 | vlog('# Making python executable in test path use correct Python') | |
142 | my_python = os.path.join(BINDIR, 'python') |
|
142 | my_python = os.path.join(BINDIR, 'python') | |
143 | try: |
|
143 | try: | |
144 | os.symlink(sys.executable, my_python) |
|
144 | os.symlink(sys.executable, my_python) | |
145 | except AttributeError: |
|
145 | except AttributeError: | |
146 | # windows fallback |
|
146 | # windows fallback | |
147 | shutil.copyfile(sys.executable, my_python) |
|
147 | shutil.copyfile(sys.executable, my_python) | |
148 | shutil.copymode(sys.executable, my_python) |
|
148 | shutil.copymode(sys.executable, my_python) | |
149 |
|
149 | |||
150 | def install_hg(): |
|
150 | def install_hg(): | |
151 | global python |
|
151 | global python | |
152 | vlog("# Performing temporary installation of HG") |
|
152 | vlog("# Performing temporary installation of HG") | |
153 | installerrs = os.path.join("tests", "install.err") |
|
153 | installerrs = os.path.join("tests", "install.err") | |
154 |
|
154 | |||
155 | # Run installer in hg root |
|
155 | # Run installer in hg root | |
156 | os.chdir(os.path.join(os.path.dirname(sys.argv[0]), '..')) |
|
156 | os.chdir(os.path.join(os.path.dirname(sys.argv[0]), '..')) | |
157 | cmd = ('%s setup.py clean --all' |
|
157 | cmd = ('%s setup.py clean --all' | |
158 | ' install --force --home="%s" --install-lib="%s"' |
|
158 | ' install --force --home="%s" --install-lib="%s"' | |
159 | ' --install-scripts="%s" >%s 2>&1' |
|
159 | ' --install-scripts="%s" >%s 2>&1' | |
160 | % (sys.executable, INST, PYTHONDIR, BINDIR, installerrs)) |
|
160 | % (sys.executable, INST, PYTHONDIR, BINDIR, installerrs)) | |
161 | vlog("# Running", cmd) |
|
161 | vlog("# Running", cmd) | |
162 | if os.system(cmd) == 0: |
|
162 | if os.system(cmd) == 0: | |
163 | if not verbose: |
|
163 | if not verbose: | |
164 | os.remove(installerrs) |
|
164 | os.remove(installerrs) | |
165 | else: |
|
165 | else: | |
166 | f = open(installerrs) |
|
166 | f = open(installerrs) | |
167 | for line in f: |
|
167 | for line in f: | |
168 | print line, |
|
168 | print line, | |
169 | f.close() |
|
169 | f.close() | |
170 | sys.exit(1) |
|
170 | sys.exit(1) | |
171 | os.chdir(TESTDIR) |
|
171 | os.chdir(TESTDIR) | |
172 |
|
172 | |||
173 | os.environ["PATH"] = "%s%s%s" % (BINDIR, os.pathsep, os.environ["PATH"]) |
|
173 | os.environ["PATH"] = "%s%s%s" % (BINDIR, os.pathsep, os.environ["PATH"]) | |
174 |
|
174 | |||
175 | pydir = os.pathsep.join([PYTHONDIR, TESTDIR]) |
|
175 | pydir = os.pathsep.join([PYTHONDIR, TESTDIR]) | |
176 | pythonpath = os.environ.get("PYTHONPATH") |
|
176 | pythonpath = os.environ.get("PYTHONPATH") | |
177 | if pythonpath: |
|
177 | if pythonpath: | |
178 | pythonpath = pydir + os.pathsep + pythonpath |
|
178 | pythonpath = pydir + os.pathsep + pythonpath | |
179 | else: |
|
179 | else: | |
180 | pythonpath = pydir |
|
180 | pythonpath = pydir | |
181 | os.environ["PYTHONPATH"] = pythonpath |
|
181 | os.environ["PYTHONPATH"] = pythonpath | |
182 |
|
182 | |||
183 | use_correct_python() |
|
183 | use_correct_python() | |
184 |
|
184 | |||
185 | if coverage: |
|
185 | if coverage: | |
186 | vlog("# Installing coverage wrapper") |
|
186 | vlog("# Installing coverage wrapper") | |
187 | os.environ['COVERAGE_FILE'] = COVERAGE_FILE |
|
187 | os.environ['COVERAGE_FILE'] = COVERAGE_FILE | |
188 | if os.path.exists(COVERAGE_FILE): |
|
188 | if os.path.exists(COVERAGE_FILE): | |
189 | os.unlink(COVERAGE_FILE) |
|
189 | os.unlink(COVERAGE_FILE) | |
190 | # Create a wrapper script to invoke hg via coverage.py |
|
190 | # Create a wrapper script to invoke hg via coverage.py | |
191 | os.rename(os.path.join(BINDIR, "hg"), os.path.join(BINDIR, "_hg.py")) |
|
191 | os.rename(os.path.join(BINDIR, "hg"), os.path.join(BINDIR, "_hg.py")) | |
192 | f = open(os.path.join(BINDIR, 'hg'), 'w') |
|
192 | f = open(os.path.join(BINDIR, 'hg'), 'w') | |
193 | f.write('#!' + sys.executable + '\n') |
|
193 | f.write('#!' + sys.executable + '\n') | |
194 | f.write('import sys, os; os.execv(sys.executable, [sys.executable, ' |
|
194 | f.write('import sys, os; os.execv(sys.executable, [sys.executable, ' | |
195 | '"%s", "-x", "%s"] + sys.argv[1:])\n' % |
|
195 | '"%s", "-x", "%s"] + sys.argv[1:])\n' % | |
196 | (os.path.join(TESTDIR, 'coverage.py'), |
|
196 | (os.path.join(TESTDIR, 'coverage.py'), | |
197 | os.path.join(BINDIR, '_hg.py'))) |
|
197 | os.path.join(BINDIR, '_hg.py'))) | |
198 | f.close() |
|
198 | f.close() | |
199 | os.chmod(os.path.join(BINDIR, 'hg'), 0700) |
|
199 | os.chmod(os.path.join(BINDIR, 'hg'), 0700) | |
200 | python = '"%s" "%s" -x' % (sys.executable, |
|
200 | python = '"%s" "%s" -x' % (sys.executable, | |
201 | os.path.join(TESTDIR,'coverage.py')) |
|
201 | os.path.join(TESTDIR,'coverage.py')) | |
202 |
|
202 | |||
203 | def output_coverage(): |
|
203 | def output_coverage(): | |
204 | vlog("# Producing coverage report") |
|
204 | vlog("# Producing coverage report") | |
205 | omit = [BINDIR, TESTDIR, PYTHONDIR] |
|
205 | omit = [BINDIR, TESTDIR, PYTHONDIR] | |
206 | if not options.cover_stdlib: |
|
206 | if not options.cover_stdlib: | |
207 | # Exclude as system paths (ignoring empty strings seen on win) |
|
207 | # Exclude as system paths (ignoring empty strings seen on win) | |
208 | omit += [x for x in sys.path if x != ''] |
|
208 | omit += [x for x in sys.path if x != ''] | |
209 | omit = ','.join(omit) |
|
209 | omit = ','.join(omit) | |
210 | os.chdir(PYTHONDIR) |
|
210 | os.chdir(PYTHONDIR) | |
211 | cmd = '"%s" "%s" -i -r "--omit=%s"' % ( |
|
211 | cmd = '"%s" "%s" -i -r "--omit=%s"' % ( | |
212 | sys.executable, os.path.join(TESTDIR, 'coverage.py'), omit) |
|
212 | sys.executable, os.path.join(TESTDIR, 'coverage.py'), omit) | |
213 | vlog("# Running: "+cmd) |
|
213 | vlog("# Running: "+cmd) | |
214 | os.system(cmd) |
|
214 | os.system(cmd) | |
215 | if options.annotate: |
|
215 | if options.annotate: | |
216 | adir = os.path.join(TESTDIR, 'annotated') |
|
216 | adir = os.path.join(TESTDIR, 'annotated') | |
217 | if not os.path.isdir(adir): |
|
217 | if not os.path.isdir(adir): | |
218 | os.mkdir(adir) |
|
218 | os.mkdir(adir) | |
219 | cmd = '"%s" "%s" -i -a "--directory=%s" "--omit=%s"' % ( |
|
219 | cmd = '"%s" "%s" -i -a "--directory=%s" "--omit=%s"' % ( | |
220 | sys.executable, os.path.join(TESTDIR, 'coverage.py'), |
|
220 | sys.executable, os.path.join(TESTDIR, 'coverage.py'), | |
221 | adir, omit) |
|
221 | adir, omit) | |
222 | vlog("# Running: "+cmd) |
|
222 | vlog("# Running: "+cmd) | |
223 | os.system(cmd) |
|
223 | os.system(cmd) | |
224 |
|
224 | |||
225 | class Timeout(Exception): |
|
225 | class Timeout(Exception): | |
226 | pass |
|
226 | pass | |
227 |
|
227 | |||
228 | def alarmed(signum, frame): |
|
228 | def alarmed(signum, frame): | |
229 | raise Timeout |
|
229 | raise Timeout | |
230 |
|
230 | |||
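
Timeout and alarmed() are the two halves of the per-test time limit: alarmed() is meant to be installed as the SIGALRM handler, and run() below converts the raised Timeout into a SIGTERM for the child plus an '### Abort' note in the captured output. The handler registration itself is not part of this hunk, so the first line below is an assumption about where it happens; the alarm arming and disarming mirror run_one() further down.

    import signal
    signal.signal(signal.SIGALRM, alarmed)  # assumed to be done elsewhere in run-tests.py
    signal.alarm(options.timeout)           # armed before run(cmd) in run_one()
    ret, out = run(cmd)
    signal.alarm(0)                         # disarmed once the command returns
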
231 | def run(cmd): |
|
231 | def run(cmd): | |
232 | """Run command in a sub-process, capturing the output (stdout and stderr). |
|
232 | """Run command in a sub-process, capturing the output (stdout and stderr). | |
233 | Return the exit code and the output.""" |
|
233 | Return the exit code and the output.""" | |
234 | # TODO: Use subprocess.Popen if we're running on Python 2.4 |
|
234 | # TODO: Use subprocess.Popen if we're running on Python 2.4 | |
235 | if os.name == 'nt': |
|
235 | if os.name == 'nt': | |
236 | tochild, fromchild = os.popen4(cmd) |
|
236 | tochild, fromchild = os.popen4(cmd) | |
237 | tochild.close() |
|
237 | tochild.close() | |
238 | output = fromchild.read() |
|
238 | output = fromchild.read() | |
239 | ret = fromchild.close() |
|
239 | ret = fromchild.close() | |
240 | if ret == None: |
|
240 | if ret == None: | |
241 | ret = 0 |
|
241 | ret = 0 | |
242 | else: |
|
242 | else: | |
243 | proc = popen2.Popen4(cmd) |
|
243 | proc = popen2.Popen4(cmd) | |
244 | try: |
|
244 | try: | |
245 | output = '' |
|
245 | output = '' | |
246 | proc.tochild.close() |
|
246 | proc.tochild.close() | |
247 | output = proc.fromchild.read() |
|
247 | output = proc.fromchild.read() | |
248 | ret = proc.wait() |
|
248 | ret = proc.wait() | |
249 | if os.WIFEXITED(ret): |
|
249 | if os.WIFEXITED(ret): | |
250 | ret = os.WEXITSTATUS(ret) |
|
250 | ret = os.WEXITSTATUS(ret) | |
251 | except Timeout: |
|
251 | except Timeout: | |
252 | vlog('# Process %d timed out - killing it' % proc.pid) |
|
252 | vlog('# Process %d timed out - killing it' % proc.pid) | |
253 | os.kill(proc.pid, signal.SIGTERM) |
|
253 | os.kill(proc.pid, signal.SIGTERM) | |
254 | ret = proc.wait() |
|
254 | ret = proc.wait() | |
255 | if ret == 0: |
|
255 | if ret == 0: | |
256 | ret = signal.SIGTERM << 8 |
|
256 | ret = signal.SIGTERM << 8 | |
257 | output += ("\n### Abort: timeout after %d seconds.\n" |
|
257 | output += ("\n### Abort: timeout after %d seconds.\n" | |
258 | % options.timeout) |
|
258 | % options.timeout) | |
259 | return ret, splitnewlines(output) |
|
259 | return ret, splitnewlines(output) | |
260 |
|
260 | |||
261 | def run_one(test, skips): |
|
261 | def run_one(test, skips): | |
262 | '''tristate output: |
|
262 | '''tristate output: | |
263 | None -> skipped |
|
263 | None -> skipped | |
264 | True -> passed |
|
264 | True -> passed | |
265 | False -> failed''' |
|
265 | False -> failed''' | |
266 |
|
266 | |||
267 | def skip(msg): |
|
267 | def skip(msg): | |
268 | if not verbose: |
|
268 | if not verbose: | |
269 | skips.append((test, msg)) |
|
269 | skips.append((test, msg)) | |
270 | else: |
|
270 | else: | |
271 | print "\nSkipping %s: %s" % (test, msg) |
|
271 | print "\nSkipping %s: %s" % (test, msg) | |
272 | return None |
|
272 | return None | |
273 |
|
273 | |||
274 | vlog("# Test", test) |
|
274 | vlog("# Test", test) | |
275 |
|
275 | |||
276 | # create a fresh hgrc |
|
276 | # create a fresh hgrc | |
277 | hgrc = file(HGRCPATH, 'w+') |
|
277 | hgrc = file(HGRCPATH, 'w+') | |
278 | hgrc.write('[ui]\n') |
|
278 | hgrc.write('[ui]\n') | |
279 | hgrc.write('slash = True\n') |
|
279 | hgrc.write('slash = True\n') | |
280 | hgrc.write('[defaults]\n') |
|
280 | hgrc.write('[defaults]\n') | |
281 | hgrc.write('backout = -d "0 0"\n') |
|
281 | hgrc.write('backout = -d "0 0"\n') | |
282 | hgrc.write('commit = -d "0 0"\n') |
|
282 | hgrc.write('commit = -d "0 0"\n') | |
283 | hgrc.write('debugrawcommit = -d "0 0"\n') |
|
283 | hgrc.write('debugrawcommit = -d "0 0"\n') | |
284 | hgrc.write('tag = -d "0 0"\n') |
|
284 | hgrc.write('tag = -d "0 0"\n') | |
285 | hgrc.close() |
|
285 | hgrc.close() | |
286 |
|
286 | |||
287 | err = os.path.join(TESTDIR, test+".err") |
|
287 | err = os.path.join(TESTDIR, test+".err") | |
288 | ref = os.path.join(TESTDIR, test+".out") |
|
288 | ref = os.path.join(TESTDIR, test+".out") | |
289 | testpath = os.path.join(TESTDIR, test) |
|
289 | testpath = os.path.join(TESTDIR, test) | |
290 |
|
290 | |||
291 | if os.path.exists(err): |
|
291 | if os.path.exists(err): | |
292 | os.remove(err) # Remove any previous output files |
|
292 | os.remove(err) # Remove any previous output files | |
293 |
|
293 | |||
294 | # Make a tmp subdirectory to work in |
|
294 | # Make a tmp subdirectory to work in | |
295 | tmpd = os.path.join(HGTMP, test) |
|
295 | tmpd = os.path.join(HGTMP, test) | |
296 | os.mkdir(tmpd) |
|
296 | os.mkdir(tmpd) | |
297 | os.chdir(tmpd) |
|
297 | os.chdir(tmpd) | |
298 |
|
298 | |||
299 | try: |
|
299 | try: | |
300 | tf = open(testpath) |
|
300 | tf = open(testpath) | |
301 | firstline = tf.readline().rstrip() |
|
301 | firstline = tf.readline().rstrip() | |
302 | tf.close() |
|
302 | tf.close() | |
303 | except: |
|
303 | except: | |
304 | firstline = '' |
|
304 | firstline = '' | |
305 | lctest = test.lower() |
|
305 | lctest = test.lower() | |
306 |
|
306 | |||
307 | if lctest.endswith('.py') or firstline == '#!/usr/bin/env python': |
|
307 | if lctest.endswith('.py') or firstline == '#!/usr/bin/env python': | |
308 | cmd = '%s "%s"' % (python, testpath) |
|
308 | cmd = '%s "%s"' % (python, testpath) | |
309 | elif lctest.endswith('.bat'): |
|
309 | elif lctest.endswith('.bat'): | |
310 | # do not run batch scripts on non-windows |
|
310 | # do not run batch scripts on non-windows | |
311 | if os.name != 'nt': |
|
311 | if os.name != 'nt': | |
312 | return skip("batch script") |
|
312 | return skip("batch script") | |
313 | # To reliably get the error code from batch files on WinXP, |
|
313 | # To reliably get the error code from batch files on WinXP, | |
314 | # the "cmd /c call" prefix is needed. Grrr |
|
314 | 314 |         # the "cmd /c call" prefix is needed. Grrr
315 | 315 |         cmd = 'cmd /c call "%s"' % testpath
316 | 316 |     else:
317 | 317 |         # do not run shell scripts on windows
318 | 318 |         if os.name == 'nt':
319 | 319 |             return skip("shell script")
320 | 320 |         # do not try to run non-executable programs
321 | 321 |         if not os.access(testpath, os.X_OK):
322 | 322 |             return skip("not executable")
323 | 323 |         cmd = '"%s"' % testpath
324 | 324 |
325 | 325 |     if options.timeout > 0:
326 | 326 |         signal.alarm(options.timeout)
327 | 327 |
328 | 328 |     vlog("# Running", cmd)
329 | 329 |     ret, out = run(cmd)
330 | 330 |     vlog("# Ret was:", ret)
331 | 331 |
332 | 332 |     if options.timeout > 0:
333 | 333 |         signal.alarm(0)
334 | 334 |
335 | 335 |     skipped = (ret == SKIPPED_STATUS)
336 | 336 |     diffret = 0
337 | 337 |     # If reference output file exists, check test output against it
338 | 338 |     if os.path.exists(ref):
339 | 339 |         f = open(ref, "r")
340 | 340 |         ref_out = splitnewlines(f.read())
341 | 341 |         f.close()
342 | 342 |     else:
343 | 343 |         ref_out = []
344 | 344 |     if not skipped and out != ref_out:
345 | 345 |         diffret = 1
346 | 346 |         print "\nERROR: %s output changed" % (test)
347 | 347 |         show_diff(ref_out, out)
348 | 348 |     if skipped:
349 | 349 |         missing = extract_missing_features(out)
350 | 350 |         if not missing:
351 | 351 |             missing = ['irrelevant']
352 | 352 |         skip(missing[-1])
353 | 353 |     elif ret:
354 | 354 |         print "\nERROR: %s failed with error code %d" % (test, ret)
355 | 355 |     elif diffret:
356 | 356 |         ret = diffret
357 | 357 |
358 | 358 |     if not verbose:
359 | 359 |         sys.stdout.write(skipped and 's' or '.')
360 | 360 |         sys.stdout.flush()
361 | 361 |
362 | 362 |     if ret != 0 and not skipped:
363 | 363 |         # Save errors to a file for diagnosis
364 | 364 |         f = open(err, "wb")
365 | 365 |         for line in out:
366 | 366 |             f.write(line)
367 | 367 |         f.close()
368 | 368 |
369 | 369 |     # Kill off any leftover daemon processes
370 | 370 |     try:
371 | 371 |         fp = file(DAEMON_PIDS)
372 | 372 |         for line in fp:
373 | 373 |             try:
374 | 374 |                 pid = int(line)
375 | 375 |             except ValueError:
376 | 376 |                 continue
377 | 377 |             try:
378 | 378 |                 os.kill(pid, 0)
379 | 379 |                 vlog('# Killing daemon process %d' % pid)
380 | 380 |                 os.kill(pid, signal.SIGTERM)
381 | 381 |                 time.sleep(0.25)
382 | 382 |                 os.kill(pid, 0)
383 | 383 |                 vlog('# Daemon process %d is stuck - really killing it' % pid)
384 | 384 |                 os.kill(pid, signal.SIGKILL)
385 | 385 |             except OSError, err:
386 | 386 |                 if err.errno != errno.ESRCH:
387 | 387 |                     raise
388 | 388 |         fp.close()
389 | 389 |         os.unlink(DAEMON_PIDS)
390 | 390 |     except IOError:
391 | 391 |         pass
392 | 392 |
393 | 393 |     os.chdir(TESTDIR)
394 | 394 |     shutil.rmtree(tmpd, True)
395 | 395 |     if skipped:
396 | 396 |         return None
397 | 397 |     return ret == 0
398 | 398 |
399 | 399 | if not options.child:
400 | 400 |     os.umask(022)
401 | 401 |
402 | 402 | check_required_tools()
403 | 403 |
404 | 404 | # Reset some environment variables to well-known values so that
405 | 405 | # the tests produce repeatable output.
406 | 406 | os.environ['LANG'] = os.environ['LC_ALL'] = 'C'
407 | 407 | os.environ['TZ'] = 'GMT'
    | 408 | os.environ["EMAIL"] = "Foo Bar <foo.bar@example.com>"
408 | 409 |
409 | 410 | TESTDIR = os.environ["TESTDIR"] = os.getcwd()
410 | 411 | HGTMP = os.environ['HGTMP'] = tempfile.mkdtemp('', 'hgtests.', options.tmpdir)
411 | 412 | DAEMON_PIDS = None
412 | 413 | HGRCPATH = None
413 | 414 |
414 | 415 | os.environ["HGEDITOR"] = sys.executable + ' -c "import sys; sys.exit(0)"'
415 | 416 | os.environ["HGMERGE"] = ('python "%s" -L my -L other'
416 | 417 |                          % os.path.join(TESTDIR, os.path.pardir,
417 | 418 |                                         'contrib', 'simplemerge'))
418 | 419 | os.environ["HGUSER"] = "test"
419 | 420 | os.environ["HGENCODING"] = "ascii"
420 | 421 | os.environ["HGENCODINGMODE"] = "strict"
421 | 422 | os.environ["HGPORT"] = str(options.port)
422 | 423 | os.environ["HGPORT1"] = str(options.port + 1)
423 | 424 | os.environ["HGPORT2"] = str(options.port + 2)
424 | 425 |
425 | 426 | if options.with_hg:
426 | 427 |     INST = options.with_hg
427 | 428 | else:
428 | 429 |     INST = os.path.join(HGTMP, "install")
429 | 430 | BINDIR = os.path.join(INST, "bin")
430 | 431 | PYTHONDIR = os.path.join(INST, "lib", "python")
431 | 432 | COVERAGE_FILE = os.path.join(TESTDIR, ".coverage")
432 | 433 |
433 | 434 | def run_children(tests):
434 | 435 |     if not options.with_hg:
435 | 436 |         install_hg()
436 | 437 |
437 | 438 |     optcopy = dict(options.__dict__)
438 | 439 |     optcopy['jobs'] = 1
439 | 440 |     optcopy['with_hg'] = INST
440 | 441 |     opts = []
441 | 442 |     for opt, value in optcopy.iteritems():
442 | 443 |         name = '--' + opt.replace('_', '-')
443 | 444 |         if value is True:
444 | 445 |             opts.append(name)
445 | 446 |         elif value is not None:
446 | 447 |             opts.append(name + '=' + str(value))
447 | 448 |
448 | 449 |     tests.reverse()
449 | 450 |     jobs = [[] for j in xrange(options.jobs)]
450 | 451 |     while tests:
451 | 452 |         for j in xrange(options.jobs):
452 | 453 |             if not tests: break
453 | 454 |             jobs[j].append(tests.pop())
454 | 455 |     fps = {}
455 | 456 |     for j in xrange(len(jobs)):
456 | 457 |         job = jobs[j]
457 | 458 |         if not job:
458 | 459 |             continue
459 | 460 |         rfd, wfd = os.pipe()
460 | 461 |         childopts = ['--child=%d' % wfd, '--port=%d' % (options.port + j * 3)]
461 | 462 |         cmdline = [python, sys.argv[0]] + opts + childopts + job
462 | 463 |         vlog(' '.join(cmdline))
463 | 464 |         fps[os.spawnvp(os.P_NOWAIT, cmdline[0], cmdline)] = os.fdopen(rfd, 'r')
464 | 465 |         os.close(wfd)
465 | 466 |     failures = 0
466 | 467 |     tested, skipped, failed = 0, 0, 0
467 | 468 |     skips = []
468 | 469 |     while fps:
469 | 470 |         pid, status = os.wait()
470 | 471 |         fp = fps.pop(pid)
471 | 472 |         l = fp.read().splitlines()
472 | 473 |         test, skip, fail = map(int, l[:3])
473 | 474 |         for s in l[3:]:
474 | 475 |             skips.append(s.split(" ", 1))
475 | 476 |         tested += test
476 | 477 |         skipped += skip
477 | 478 |         failed += fail
478 | 479 |         vlog('pid %d exited, status %d' % (pid, status))
479 | 480 |         failures |= status
480 | 481 |
481 | 482 |     for s in skips:
482 | 483 |         print "Skipped %s: %s" % (s[0], s[1])
483 | 484 |     print "# Ran %d tests, %d skipped, %d failed." % (
484 | 485 |         tested, skipped, failed)
485 | 486 |     sys.exit(failures != 0)
486 | 487 |
487 | 488 | def run_tests(tests):
488 | 489 |     global DAEMON_PIDS, HGRCPATH
489 | 490 |     DAEMON_PIDS = os.environ["DAEMON_PIDS"] = os.path.join(HGTMP, 'daemon.pids')
490 | 491 |     HGRCPATH = os.environ["HGRCPATH"] = os.path.join(HGTMP, '.hgrc')
491 | 492 |
492 | 493 |     try:
493 | 494 |         if not options.with_hg:
494 | 495 |             install_hg()
495 | 496 |
496 | 497 |         if options.timeout > 0:
497 | 498 |             try:
498 | 499 |                 signal.signal(signal.SIGALRM, alarmed)
499 | 500 |                 vlog('# Running tests with %d-second timeout' %
500 | 501 |                      options.timeout)
501 | 502 |             except AttributeError:
502 | 503 |                 print 'WARNING: cannot run tests with timeouts'
503 | 504 |                 options.timeout = 0
504 | 505 |
505 | 506 |         tested = 0
506 | 507 |         failed = 0
507 | 508 |         skipped = 0
508 | 509 |
509 | 510 |         if options.restart:
510 | 511 |             orig = list(tests)
511 | 512 |             while tests:
512 | 513 |                 if os.path.exists(tests[0] + ".err"):
513 | 514 |                     break
514 | 515 |                 tests.pop(0)
515 | 516 |             if not tests:
516 | 517 |                 print "running all tests"
517 | 518 |                 tests = orig
518 | 519 |
519 | 520 |         skips = []
520 | 521 |         for test in tests:
521 | 522 |             if options.retest and not os.path.exists(test + ".err"):
522 | 523 |                 skipped += 1
523 | 524 |                 continue
524 | 525 |             ret = run_one(test, skips)
525 | 526 |             if ret is None:
526 | 527 |                 skipped += 1
527 | 528 |             elif not ret:
528 | 529 |                 if options.interactive:
529 | 530 |                     print "Accept this change? [n] ",
530 | 531 |                     answer = sys.stdin.readline().strip()
531 | 532 |                     if answer.lower() in "y yes".split():
532 | 533 |                         os.rename(test + ".err", test + ".out")
533 | 534 |                         tested += 1
534 | 535 |                         continue
535 | 536 |                 failed += 1
536 | 537 |                 if options.first:
537 | 538 |                     break
538 | 539 |             tested += 1
539 | 540 |
540 | 541 |         if options.child:
541 | 542 |             fp = os.fdopen(options.child, 'w')
542 | 543 |             fp.write('%d\n%d\n%d\n' % (tested, skipped, failed))
543 | 544 |             for s in skips:
544 | 545 |                 fp.write("%s %s\n" % s)
545 | 546 |             fp.close()
546 | 547 |         else:
547 | 548 |
548 | 549 |             for s in skips:
549 | 550 |                 print "Skipped %s: %s" % s
550 | 551 |             print "# Ran %d tests, %d skipped, %d failed." % (
551 | 552 |                 tested, skipped, failed)
552 | 553 |
553 | 554 |         if coverage:
554 | 555 |             output_coverage()
555 | 556 |     except KeyboardInterrupt:
556 | 557 |         failed = True
557 | 558 |         print "\ninterrupted!"
558 | 559 |
559 | 560 |     if failed:
560 | 561 |         sys.exit(1)
561 | 562 |
562 | 563 | if len(args) == 0:
563 | 564 |     args = os.listdir(".")
564 | 565 |     args.sort()
565 | 566 |
566 | 567 | tests = []
567 | 568 | for test in args:
568 | 569 |     if (test.startswith("test-") and '~' not in test and
569 | 570 |         ('.' not in test or test.endswith('.py') or
570 | 571 |          test.endswith('.bat'))):
571 | 572 |         tests.append(test)
572 | 573 |
573 | 574 | vlog("# Using TESTDIR", TESTDIR)
574 | 575 | vlog("# Using HGTMP", HGTMP)
575 | 576 |
576 | 577 | try:
577 | 578 |     if len(tests) > 1 and options.jobs > 1:
578 | 579 |         run_children(tests)
579 | 580 |     else:
580 | 581 |         run_tests(tests)
581 | 582 | finally:
582 | 583 |     cleanup_exit()
NO CONTENT: modified file, binary diff hidden
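
The only change visible in the run-tests.py hunk above is the new line 408, which pins EMAIL alongside LANG, LC_ALL and TZ; per the comment at lines 404-405, these variables are reset to well-known values so the tests produce repeatable output. A minimal sketch of how such a pinned variable reaches the test processes, separate from the changeset itself (the Python child command below is purely illustrative):

    # Sketch only, not part of the changeset: a variable pinned in os.environ
    # is inherited by every child process the runner starts, so output that
    # depends on it stays stable across machines and user accounts.
    import os
    import subprocess
    import sys

    os.environ["EMAIL"] = "Foo Bar <foo.bar@example.com>"  # same value the runner sets

    # The child sees the pinned value regardless of the caller's real EMAIL.
    subprocess.call([sys.executable, "-c", "import os; print(os.environ['EMAIL'])"])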