@@ -1,534 +1,539 @@
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, one of these files is chosen exclusively,
  according to the definition of the HOME environment variable.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.
  On Unix, most of this file will be ignored if it doesn't belong
  to a trusted user or to a trusted group. See the documentation
  for the trusted section below for more details.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

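As an illustration only, such a reference might be used as below
(this assumes the Python ConfigParser-style "%(name)s" interpolation
syntax; the section and values shown are invented):

  [paths]
  base = http://hg.example.com
  project = %(base)s/project
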
Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

defaults::
  Use the [defaults] section to define command defaults, i.e. the
  default options/arguments to pass to the specified commands.

  The following example makes 'hg log' run in verbose mode, and
  'hg status' show only the modified files, by default.

    [defaults]
    log = -v
    status = -m

  The actual commands, instead of their aliases, must be used when
  defining command defaults. The command defaults will also be
  applied to the aliases of the commands defined.

diff::
  Settings used when displaying diffs. They are all Boolean and
  default to False.
  git;;
    Use git extended diff format.
  nodates;;
    Don't include dates in diff headers.
  showfunc;;
    Show which function each change is in.
  ignorews;;
    Ignore white space when comparing lines.
  ignorewsamount;;
    Ignore changes in the amount of white space.
  ignoreblanklines;;
    Ignore changes whose lines are all blank.

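  For illustration only, a hypothetical [diff] section enabling some
  of these settings (the values shown are not the defaults):

    [diff]
    git = True
    showfunc = True
    nodates = True
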
email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  to;;
    Optional. Comma-separated list of recipients' email addresses.
  cc;;
    Optional. Comma-separated list of carbon copy recipients'
    email addresses.
  bcc;;
    Optional. Comma-separated list of blind carbon copy
    recipients' email addresses. Cannot be set interactively.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, the value is used as the name of a
    program to run that acts like sendmail (takes the "-f" option for
    the sender, a list of recipients on the command line, and the
    message on stdin). Normally, setting this to "sendmail" or
    "/usr/sbin/sendmail" is enough to use sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give useful
  additional information. For each hook below, the environment
  variables it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE. URL from
    which changes came is in $HG_URL.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE. The URL that was the source of the changes is in $HG_URL.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail. URL from which
    changes will come is in $HG_URL.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull, push
    (outbound) and bundle commands, but it is not effective there,
    since the files could simply be copied instead. Source of
    operation is in $HG_SOURCE. If "serve", operation is happening
    on behalf of remote ssh or http repository. If "push", "pull" or
    "bundle", operation is happening on behalf of repository on same
    system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail. URL that was source of
    changes is in $HG_URL.
  pretxncommit;;
    Run after a changeset has been created, but before the transaction
    has been committed. Changeset is visible to hook program. This
    lets you validate commit message and changes. Exit status 0 allows
    the commit to proceed. Non-zero status will cause the transaction
    to be rolled back. ID of changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
    failed (e.g. because conflicts were not resolved), $HG_ERROR=1.

  Note: In earlier releases, the names of hook environment variables
  did not have a "HG_" prefix. The old unprefixed names are no longer
  provided in the environment.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  If a Python hook returns a "true" value or raises an exception, this
  is treated as failure of the hook.

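  As a minimal illustrative sketch (not part of Mercurial itself; the
  module name, function name and the repo.changectx() call are
  assumptions), a "pretxncommit" hook written in Python might look
  like this:

    # myhooks.py -- a hypothetical module on Python's search path
    def checkmessage(ui, repo, hooktype, node=None, **kwargs):
        # reject changesets whose commit message is suspiciously short
        ctx = repo.changectx(node)    # assumed repository API
        if len(ctx.description().strip()) < 10:
            ui.warn('commit message too short\n')
            return True               # a true value means the hook failed
        return False

  with the corresponding hgrc entry:

    [hooks]
    pretxncommit.checkmessage = python:myhooks.checkmessage
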
http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.

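  As an illustration, a hypothetical [http_proxy] section (the bypass
  list and credentials are made up; "myproxy:8000" follows the example
  above):

    [http_proxy]
    host = myproxy:8000
    no = localhost, intranet.example.com
    user = alice
    passwd = secret
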
smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Host name of mail server, e.g. "mail.example.com".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. The hostname that the sender can use to identify itself
    to the MTA.

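  As an illustration, a hypothetical [smtp] section (the host follows
  the example above; the credentials are made up):

    [smtp]
    host = mail.example.com
    port = 25
    tls = True
    username = hguser
    password = secret
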
paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is the repository from which the current repository
    was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.

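  As an illustration, a hypothetical [paths] section (all locations
  are made up):

    [paths]
    default = http://hg.example.com/main
    default-push = ssh://hg@hg.example.com/main
    stable = /home/alice/repos/stable
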
server::
  Controls generic server settings.
  uncompressed;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.

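  As an illustration, a server on a fast local network might enable
  the setting described above like this:

    [server]
    uncompressed = True
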
trusted::
  For security reasons, Mercurial will not use the settings in
  the .hg/hgrc file from a repository if it doesn't belong to a
  trusted user or to a trusted group. The main exception is the
  web interface, which automatically uses some safe settings, since
  it's common to serve repositories from different users.

  This section specifies what users and groups are trusted. The
  current user is always trusted. To trust everybody, list a user
  or a group with name "*".

  users;;
    Comma-separated list of trusted users.
  groups;;
    Comma-separated list of trusted groups.

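  As an illustration, a hypothetical [trusted] section (the user and
  group names are made up):

    [trusted]
    users = alice, bob
    groups = developers
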
ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
  fallbackencoding;;
    Encoding to try if it's not possible to decode the changelog using
    UTF-8. Default is ISO-8859-1.
  ignore;;
    A file to read per-user ignore patterns from. This file should be in
    the same format as a repository-wide .hgignore file. This option
    supports hook syntax, so if you want to specify multiple ignore
    files, you can do so by setting something like
    "ignore.other = ~/.hgignore2". For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Allow prompting the user. True or False. Default is True.
  logtemplate;;
    Template string for commands that print changesets.
  style;;
    Name of style to use for command output.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is 'hg'.
  ssh;;
    Command to use for SSH connections. Default is 'ssh'.
  strict;;
    Require exact command names, instead of allowing unambiguous
    abbreviations. True or False. Default is False.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
    If the username in hgrc is empty, it has to be specified manually or
    in a different hgrc file (e.g. $HOME/.hgrc, if the admin set "username ="
    in the system hgrc).
  verbose;;
    Increase the amount of output printed. True or False. Default is False.

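  As an illustration, a typical per-user [ui] section might look like
  this (the username matches the example above; the editor value is
  just an example):

    [ui]
    username = Fred Widget <fred@example.com>
    editor = vim
    merge = hgmerge
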

web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allow_archive;;
    List of archive formats (bz2, gz, zip) allowed for downloading.
    Default is empty.
  allowbz2;;
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
    Default is false.
  allowgz;;
    (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
    Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allow_push;;
    Whether to allow pushing to the repository. If empty or not set,
    push is not allowed. If the special value "*", any remote user
    can push, including unauthenticated users. Otherwise, the remote
    user must have been authenticated, and the authenticated user name
    must be present in this list (separated by whitespace or ",").
    The contents of the allow_push list are examined after the
    deny_push list.
  allowzip;;
    (DEPRECATED) Whether to allow .zip downloading of repo revisions.
    Default is false. This feature creates temporary files.
  baseurl;;
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct URLs.
    Example: "http://hgserver/repos/"
  contact;;
    Name or email address of the person in charge of the repository.
    Default is "unknown".
  deny_push;;
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value "*", all remote users
    are denied push. Otherwise, unauthenticated users are all denied,
    and any authenticated user name present in this list (separated by
    whitespace or ",") is also denied. The contents of the deny_push
    list are examined before the allow_push list.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is the current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  push_ssl;;
    Whether to require that inbound pushes be transported over SSL to
    prevent password sniffing. Default is true.
  staticurl;;
    Base URL to use for static files. If unset, static files (e.g.
    the hgicon.png favicon) will be served by the CGI script itself.
    Use this setting to serve them directly with the HTTP server.
    Example: "http://hgserver/static/"
  stripes;;
    How many lines a "zebra stripe" should span in multiline output.
    Default is 1; set to 0 to disable.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.

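  As an illustration, a hypothetical [web] section for a served
  repository (the contact and description are made up; the URLs follow
  the examples above):

    [web]
    contact = Fred Widget <fred@example.com>
    description = Main development repository
    baseurl = http://hgserver/repos/
    staticurl = http://hgserver/static/
    allow_archive = gz, zip
    push_ssl = true
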

AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1), hgignore(5)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005, 2006 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).
@@ -1,1148 +1,1152 @@
# hgweb/hgweb_mod.py - Web interface for a repository.
#
# Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

import os, mimetypes, re, zlib, mimetools, cStringIO, sys
import tempfile, urllib, bz2
from mercurial.node import *
from mercurial.i18n import gettext as _
from mercurial import mdiff, ui, hg, util, archival, streamclone, patch
from mercurial import revlog, templater
from common import get_mtime, staticfile, style_map

def _up(p):
    if p[0] != "/":
        p = "/" + p
    if p[-1] == "/":
        p = p[:-1]
    up = os.path.dirname(p)
    if up == "/":
        return "/"
    return up + "/"

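# revnavgen builds the revision-navigation data used by the templates:
# starting from position `pos` it yields {"label", "node"} mappings for
# the first revision ("(0)"), a series of +/- offsets scaled roughly by
# powers of ten (clamped to `limit`), and "tip".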
27 | def revnavgen(pos, pagelen, limit, nodefunc): |
|
27 | def revnavgen(pos, pagelen, limit, nodefunc): | |
28 | def seq(factor, limit=None): |
|
28 | def seq(factor, limit=None): | |
29 | if limit: |
|
29 | if limit: | |
30 | yield limit |
|
30 | yield limit | |
31 | if limit >= 20 and limit <= 40: |
|
31 | if limit >= 20 and limit <= 40: | |
32 | yield 50 |
|
32 | yield 50 | |
33 | else: |
|
33 | else: | |
34 | yield 1 * factor |
|
34 | yield 1 * factor | |
35 | yield 3 * factor |
|
35 | yield 3 * factor | |
36 | for f in seq(factor * 10): |
|
36 | for f in seq(factor * 10): | |
37 | yield f |
|
37 | yield f | |
38 |
|
38 | |||
39 | def nav(**map): |
|
39 | def nav(**map): | |
40 | l = [] |
|
40 | l = [] | |
41 | last = 0 |
|
41 | last = 0 | |
42 | for f in seq(1, pagelen): |
|
42 | for f in seq(1, pagelen): | |
43 | if f < pagelen or f <= last: |
|
43 | if f < pagelen or f <= last: | |
44 | continue |
|
44 | continue | |
45 | if f > limit: |
|
45 | if f > limit: | |
46 | break |
|
46 | break | |
47 | last = f |
|
47 | last = f | |
48 | if pos + f < limit: |
|
48 | if pos + f < limit: | |
49 | l.append(("+%d" % f, hex(nodefunc(pos + f).node()))) |
|
49 | l.append(("+%d" % f, hex(nodefunc(pos + f).node()))) | |
50 | if pos - f >= 0: |
|
50 | if pos - f >= 0: | |
51 | l.insert(0, ("-%d" % f, hex(nodefunc(pos - f).node()))) |
|
51 | l.insert(0, ("-%d" % f, hex(nodefunc(pos - f).node()))) | |
52 |
|
52 | |||
53 | try: |
|
53 | try: | |
54 | yield {"label": "(0)", "node": hex(nodefunc('0').node())} |
|
54 | yield {"label": "(0)", "node": hex(nodefunc('0').node())} | |
55 |
|
55 | |||
56 | for label, node in l: |
|
56 | for label, node in l: | |
57 | yield {"label": label, "node": node} |
|
57 | yield {"label": label, "node": node} | |
58 |
|
58 | |||
59 | yield {"label": "tip", "node": "tip"} |
|
59 | yield {"label": "tip", "node": "tip"} | |
60 | except hg.RepoError: |
|
60 | except hg.RepoError: | |
61 | pass |
|
61 | pass | |
62 |
|
62 | |||
63 | return nav |
|
63 | return nav | |
64 |
|
64 | |||
65 | class hgweb(object): |
|
65 | class hgweb(object): | |
66 | def __init__(self, repo, name=None): |
|
66 | def __init__(self, repo, name=None): | |
67 | if type(repo) == type(""): |
|
67 | if type(repo) == type(""): | |
68 | self.repo = hg.repository(ui.ui(report_untrusted=False), repo) |
|
68 | self.repo = hg.repository(ui.ui(report_untrusted=False), repo) | |
69 | else: |
|
69 | else: | |
70 | self.repo = repo |
|
70 | self.repo = repo | |
71 |
|
71 | |||
72 | self.mtime = -1 |
|
72 | self.mtime = -1 | |
73 | self.reponame = name |
|
73 | self.reponame = name | |
74 | self.archives = 'zip', 'gz', 'bz2' |
|
74 | self.archives = 'zip', 'gz', 'bz2' | |
75 | self.stripecount = 1 |
|
75 | self.stripecount = 1 | |
76 | # a repo owner may set web.templates in .hg/hgrc to get any file |
|
76 | # a repo owner may set web.templates in .hg/hgrc to get any file | |
77 | # readable by the user running the CGI script |
|
77 | # readable by the user running the CGI script | |
78 | self.templatepath = self.config("web", "templates", |
|
78 | self.templatepath = self.config("web", "templates", | |
79 | templater.templatepath(), |
|
79 | templater.templatepath(), | |
80 | untrusted=False) |
|
80 | untrusted=False) | |
81 |
|
81 | |||
82 | # The CGI scripts are often run by a user different from the repo owner. |
|
82 | # The CGI scripts are often run by a user different from the repo owner. | |
83 | # Trust the settings from the .hg/hgrc files by default. |
|
83 | # Trust the settings from the .hg/hgrc files by default. | |
84 | def config(self, section, name, default=None, untrusted=True): |
|
84 | def config(self, section, name, default=None, untrusted=True): | |
85 | return self.repo.ui.config(section, name, default, |
|
85 | return self.repo.ui.config(section, name, default, | |
86 | untrusted=untrusted) |
|
86 | untrusted=untrusted) | |
87 |
|
87 | |||
88 | def configbool(self, section, name, default=False, untrusted=True): |
|
88 | def configbool(self, section, name, default=False, untrusted=True): | |
89 | return self.repo.ui.configbool(section, name, default, |
|
89 | return self.repo.ui.configbool(section, name, default, | |
90 | untrusted=untrusted) |
|
90 | untrusted=untrusted) | |
91 |
|
91 | |||
92 | def configlist(self, section, name, default=None, untrusted=True): |
|
92 | def configlist(self, section, name, default=None, untrusted=True): | |
93 | return self.repo.ui.configlist(section, name, default, |
|
93 | return self.repo.ui.configlist(section, name, default, | |
94 | untrusted=untrusted) |
|
94 | untrusted=untrusted) | |
95 |
|
95 | |||
96 | def refresh(self): |
|
96 | def refresh(self): | |
97 | mtime = get_mtime(self.repo.root) |
|
97 | mtime = get_mtime(self.repo.root) | |
98 | if mtime != self.mtime: |
|
98 | if mtime != self.mtime: | |
99 | self.mtime = mtime |
|
99 | self.mtime = mtime | |
100 | self.repo = hg.repository(self.repo.ui, self.repo.root) |
|
100 | self.repo = hg.repository(self.repo.ui, self.repo.root) | |
101 | self.maxchanges = int(self.config("web", "maxchanges", 10)) |
|
101 | self.maxchanges = int(self.config("web", "maxchanges", 10)) | |
102 | self.stripecount = int(self.config("web", "stripes", 1)) |
|
102 | self.stripecount = int(self.config("web", "stripes", 1)) | |
103 | self.maxshortchanges = int(self.config("web", "maxshortchanges", 60)) |
|
103 | self.maxshortchanges = int(self.config("web", "maxshortchanges", 60)) | |
104 | self.maxfiles = int(self.config("web", "maxfiles", 10)) |
|
104 | self.maxfiles = int(self.config("web", "maxfiles", 10)) | |
105 | self.allowpull = self.configbool("web", "allowpull", True) |
|
105 | self.allowpull = self.configbool("web", "allowpull", True) | |
106 |
|
106 | |||
107 | def archivelist(self, nodeid): |
|
107 | def archivelist(self, nodeid): | |
108 | allowed = self.configlist("web", "allow_archive") |
|
108 | allowed = self.configlist("web", "allow_archive") | |
109 | for i, spec in self.archive_specs.iteritems(): |
|
109 | for i, spec in self.archive_specs.iteritems(): | |
110 | if i in allowed or self.configbool("web", "allow" + i): |
|
110 | if i in allowed or self.configbool("web", "allow" + i): | |
111 | yield {"type" : i, "extension" : spec[2], "node" : nodeid} |
|
111 | yield {"type" : i, "extension" : spec[2], "node" : nodeid} | |
112 |
|
112 | |||
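archivelist only advertises an archive type when it appears in web.allow_archive or when the matching web.allow<type> boolean is set. A rough standalone sketch of that filter, with hard-coded sample settings standing in for the configlist()/configbool() calls:

    # Sketch of the archive-type filter; 'allowed' and 'boolopts' stand in
    # for self.configlist("web", "allow_archive") and the allow<type> booleans.
    archive_specs = {"zip": ".zip", "gz": ".tar.gz", "bz2": ".tar.bz2"}

    def archivelist(nodeid, allowed, boolopts):
        for kind, ext in archive_specs.items():
            if kind in allowed or boolopts.get("allow" + kind, False):
                yield {"type": kind, "extension": ext, "node": nodeid}

    # e.g. "[web] allow_archive = gz zip" offers gz and zip but not bz2
    print(list(archivelist("tip", allowed=["gz", "zip"], boolopts={})))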
113 | def listfilediffs(self, files, changeset): |
|
113 | def listfilediffs(self, files, changeset): | |
114 | for f in files[:self.maxfiles]: |
|
114 | for f in files[:self.maxfiles]: | |
115 | yield self.t("filedifflink", node=hex(changeset), file=f) |
|
115 | yield self.t("filedifflink", node=hex(changeset), file=f) | |
116 | if len(files) > self.maxfiles: |
|
116 | if len(files) > self.maxfiles: | |
117 | yield self.t("fileellipses") |
|
117 | yield self.t("fileellipses") | |
118 |
|
118 | |||
119 | def siblings(self, siblings=[], hiderev=None, **args): |
|
119 | def siblings(self, siblings=[], hiderev=None, **args): | |
120 | siblings = [s for s in siblings if s.node() != nullid] |
|
120 | siblings = [s for s in siblings if s.node() != nullid] | |
121 | if len(siblings) == 1 and siblings[0].rev() == hiderev: |
|
121 | if len(siblings) == 1 and siblings[0].rev() == hiderev: | |
122 | return |
|
122 | return | |
123 | for s in siblings: |
|
123 | for s in siblings: | |
124 | d = {'node': hex(s.node()), 'rev': s.rev()} |
|
124 | d = {'node': hex(s.node()), 'rev': s.rev()} | |
125 | if hasattr(s, 'path'): |
|
125 | if hasattr(s, 'path'): | |
126 | d['file'] = s.path() |
|
126 | d['file'] = s.path() | |
127 | d.update(args) |
|
127 | d.update(args) | |
128 | yield d |
|
128 | yield d | |
129 |
|
129 | |||
130 | def renamelink(self, fl, node): |
|
130 | def renamelink(self, fl, node): | |
131 | r = fl.renamed(node) |
|
131 | r = fl.renamed(node) | |
132 | if r: |
|
132 | if r: | |
133 | return [dict(file=r[0], node=hex(r[1]))] |
|
133 | return [dict(file=r[0], node=hex(r[1]))] | |
134 | return [] |
|
134 | return [] | |
135 |
|
135 | |||
136 | def showtag(self, t1, node=nullid, **args): |
|
136 | def showtag(self, t1, node=nullid, **args): | |
137 | for t in self.repo.nodetags(node): |
|
137 | for t in self.repo.nodetags(node): | |
138 | yield self.t(t1, tag=t, **args) |
|
138 | yield self.t(t1, tag=t, **args) | |
139 |
|
139 | |||
140 | def diff(self, node1, node2, files): |
|
140 | def diff(self, node1, node2, files): | |
141 | def filterfiles(filters, files): |
|
141 | def filterfiles(filters, files): | |
142 | l = [x for x in files if x in filters] |
|
142 | l = [x for x in files if x in filters] | |
143 |
|
143 | |||
144 | for t in filters: |
|
144 | for t in filters: | |
145 | if t and t[-1] != os.sep: |
|
145 | if t and t[-1] != os.sep: | |
146 | t += os.sep |
|
146 | t += os.sep | |
147 | l += [x for x in files if x.startswith(t)] |
|
147 | l += [x for x in files if x.startswith(t)] | |
148 | return l |
|
148 | return l | |
149 |
|
149 | |||
150 | parity = [0] |
|
150 | parity = [0] | |
151 | def diffblock(diff, f, fn): |
|
151 | def diffblock(diff, f, fn): | |
152 | yield self.t("diffblock", |
|
152 | yield self.t("diffblock", | |
153 | lines=prettyprintlines(diff), |
|
153 | lines=prettyprintlines(diff), | |
154 | parity=parity[0], |
|
154 | parity=parity[0], | |
155 | file=f, |
|
155 | file=f, | |
156 | filenode=hex(fn or nullid)) |
|
156 | filenode=hex(fn or nullid)) | |
157 | parity[0] = 1 - parity[0] |
|
157 | parity[0] = 1 - parity[0] | |
158 |
|
158 | |||
159 | def prettyprintlines(diff): |
|
159 | def prettyprintlines(diff): | |
160 | for l in diff.splitlines(1): |
|
160 | for l in diff.splitlines(1): | |
161 | if l.startswith('+'): |
|
161 | if l.startswith('+'): | |
162 | yield self.t("difflineplus", line=l) |
|
162 | yield self.t("difflineplus", line=l) | |
163 | elif l.startswith('-'): |
|
163 | elif l.startswith('-'): | |
164 | yield self.t("difflineminus", line=l) |
|
164 | yield self.t("difflineminus", line=l) | |
165 | elif l.startswith('@'): |
|
165 | elif l.startswith('@'): | |
166 | yield self.t("difflineat", line=l) |
|
166 | yield self.t("difflineat", line=l) | |
167 | else: |
|
167 | else: | |
168 | yield self.t("diffline", line=l) |
|
168 | yield self.t("diffline", line=l) | |
169 |
|
169 | |||
170 | r = self.repo |
|
170 | r = self.repo | |
171 | c1 = r.changectx(node1) |
|
171 | c1 = r.changectx(node1) | |
172 | c2 = r.changectx(node2) |
|
172 | c2 = r.changectx(node2) | |
173 | date1 = util.datestr(c1.date()) |
|
173 | date1 = util.datestr(c1.date()) | |
174 | date2 = util.datestr(c2.date()) |
|
174 | date2 = util.datestr(c2.date()) | |
175 |
|
175 | |||
176 | modified, added, removed, deleted, unknown = r.status(node1, node2)[:5] |
|
176 | modified, added, removed, deleted, unknown = r.status(node1, node2)[:5] | |
177 | if files: |
|
177 | if files: | |
178 | modified, added, removed = map(lambda x: filterfiles(files, x), |
|
178 | modified, added, removed = map(lambda x: filterfiles(files, x), | |
179 | (modified, added, removed)) |
|
179 | (modified, added, removed)) | |
180 |
|
180 | |||
181 | diffopts = patch.diffopts(self.repo.ui, untrusted=True) |
|
181 | diffopts = patch.diffopts(self.repo.ui, untrusted=True) | |
182 | for f in modified: |
|
182 | for f in modified: | |
183 | to = c1.filectx(f).data() |
|
183 | to = c1.filectx(f).data() | |
184 | tn = c2.filectx(f).data() |
|
184 | tn = c2.filectx(f).data() | |
185 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, |
|
185 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, | |
186 | opts=diffopts), f, tn) |
|
186 | opts=diffopts), f, tn) | |
187 | for f in added: |
|
187 | for f in added: | |
188 | to = None |
|
188 | to = None | |
189 | tn = c2.filectx(f).data() |
|
189 | tn = c2.filectx(f).data() | |
190 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, |
|
190 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, | |
191 | opts=diffopts), f, tn) |
|
191 | opts=diffopts), f, tn) | |
192 | for f in removed: |
|
192 | for f in removed: | |
193 | to = c1.filectx(f).data() |
|
193 | to = c1.filectx(f).data() | |
194 | tn = None |
|
194 | tn = None | |
195 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, |
|
195 | yield diffblock(mdiff.unidiff(to, date1, tn, date2, f, | |
196 | opts=diffopts), f, tn) |
|
196 | opts=diffopts), f, tn) | |
197 |
|
197 | |||
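prettyprintlines maps each line of a unified diff onto one of four templates based on its first character, and diffblock wraps a whole file's diff in the "diffblock" template while flipping the parity counter so alternating files can be striped. A self-contained sketch of the same classification, returning template names instead of rendering them:

    def classify(diff_text):
        """Yield (template_name, line) pairs the way prettyprintlines does."""
        for line in diff_text.splitlines(True):
            if line.startswith('+'):
                yield "difflineplus", line
            elif line.startswith('-'):
                yield "difflineminus", line
            elif line.startswith('@'):
                yield "difflineat", line
            else:
                yield "diffline", line

    sample = "@@ -1,2 +1,2 @@\n-old line\n+new line\n context\n"
    for name, line in classify(sample):
        print(name, repr(line))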
198 | def changelog(self, ctx, shortlog=False): |
|
198 | def changelog(self, ctx, shortlog=False): | |
199 | def changelist(**map): |
|
199 | def changelist(**map): | |
200 | parity = (start - end) & 1 |
|
200 | parity = (start - end) & 1 | |
201 | cl = self.repo.changelog |
|
201 | cl = self.repo.changelog | |
202 | l = [] # build a list in forward order for efficiency |
|
202 | l = [] # build a list in forward order for efficiency | |
203 | for i in xrange(start, end): |
|
203 | for i in xrange(start, end): | |
204 | ctx = self.repo.changectx(i) |
|
204 | ctx = self.repo.changectx(i) | |
205 | n = ctx.node() |
|
205 | n = ctx.node() | |
206 |
|
206 | |||
207 | l.insert(0, {"parity": parity, |
|
207 | l.insert(0, {"parity": parity, | |
208 | "author": ctx.user(), |
|
208 | "author": ctx.user(), | |
209 | "parent": self.siblings(ctx.parents(), i - 1), |
|
209 | "parent": self.siblings(ctx.parents(), i - 1), | |
210 | "child": self.siblings(ctx.children(), i + 1), |
|
210 | "child": self.siblings(ctx.children(), i + 1), | |
211 | "changelogtag": self.showtag("changelogtag",n), |
|
211 | "changelogtag": self.showtag("changelogtag",n), | |
212 | "desc": ctx.description(), |
|
212 | "desc": ctx.description(), | |
213 | "date": ctx.date(), |
|
213 | "date": ctx.date(), | |
214 | "files": self.listfilediffs(ctx.files(), n), |
|
214 | "files": self.listfilediffs(ctx.files(), n), | |
215 | "rev": i, |
|
215 | "rev": i, | |
216 | "node": hex(n)}) |
|
216 | "node": hex(n)}) | |
217 | parity = 1 - parity |
|
217 | parity = 1 - parity | |
218 |
|
218 | |||
219 | for e in l: |
|
219 | for e in l: | |
220 | yield e |
|
220 | yield e | |
221 |
|
221 | |||
222 | maxchanges = shortlog and self.maxshortchanges or self.maxchanges |
|
222 | maxchanges = shortlog and self.maxshortchanges or self.maxchanges | |
223 | cl = self.repo.changelog |
|
223 | cl = self.repo.changelog | |
224 | count = cl.count() |
|
224 | count = cl.count() | |
225 | pos = ctx.rev() |
|
225 | pos = ctx.rev() | |
226 | start = max(0, pos - maxchanges + 1) |
|
226 | start = max(0, pos - maxchanges + 1) | |
227 | end = min(count, start + maxchanges) |
|
227 | end = min(count, start + maxchanges) | |
228 | pos = end - 1 |
|
228 | pos = end - 1 | |
229 |
|
229 | |||
230 | changenav = revnavgen(pos, maxchanges, count, self.repo.changectx) |
|
230 | changenav = revnavgen(pos, maxchanges, count, self.repo.changectx) | |
231 |
|
231 | |||
232 | yield self.t(shortlog and 'shortlog' or 'changelog', |
|
232 | yield self.t(shortlog and 'shortlog' or 'changelog', | |
233 | changenav=changenav, |
|
233 | changenav=changenav, | |
234 | node=hex(cl.tip()), |
|
234 | node=hex(cl.tip()), | |
235 | rev=pos, changesets=count, entries=changelist, |
|
235 | rev=pos, changesets=count, entries=changelist, | |
236 | archives=self.archivelist("tip")) |
|
236 | archives=self.archivelist("tip")) | |
237 |
|
237 | |||
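The window of revisions shown on a changelog or shortlog page is derived from the requested revision: start is pos - maxchanges + 1 clamped at zero, end is start + maxchanges clamped at the changelog length, and entries are built in forward order then yielded newest first. A worked example of that arithmetic:

    def window(pos, maxchanges, count):
        """Return the [start, end) revision window the changelog page shows."""
        start = max(0, pos - maxchanges + 1)
        end = min(count, start + maxchanges)
        return start, end

    # 100 revisions, page size 10, viewing revision 57 -> revisions 48..57
    print(window(57, 10, 100))   # (48, 58)
    # near the start of history the window is clamped at revision 0
    print(window(3, 10, 100))    # (0, 10)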
238 | def search(self, query): |
|
238 | def search(self, query): | |
239 |
|
239 | |||
240 | def changelist(**map): |
|
240 | def changelist(**map): | |
241 | cl = self.repo.changelog |
|
241 | cl = self.repo.changelog | |
242 | count = 0 |
|
242 | count = 0 | |
243 | qw = query.lower().split() |
|
243 | qw = query.lower().split() | |
244 |
|
244 | |||
245 | def revgen(): |
|
245 | def revgen(): | |
246 | for i in xrange(cl.count() - 1, 0, -100): |
|
246 | for i in xrange(cl.count() - 1, 0, -100): | |
247 | l = [] |
|
247 | l = [] | |
248 | for j in xrange(max(0, i - 100), i): |
|
248 | for j in xrange(max(0, i - 100), i): | |
249 | ctx = self.repo.changectx(j) |
|
249 | ctx = self.repo.changectx(j) | |
250 | l.append(ctx) |
|
250 | l.append(ctx) | |
251 | l.reverse() |
|
251 | l.reverse() | |
252 | for e in l: |
|
252 | for e in l: | |
253 | yield e |
|
253 | yield e | |
254 |
|
254 | |||
255 | for ctx in revgen(): |
|
255 | for ctx in revgen(): | |
256 | miss = 0 |
|
256 | miss = 0 | |
257 | for q in qw: |
|
257 | for q in qw: | |
258 | if not (q in ctx.user().lower() or |
|
258 | if not (q in ctx.user().lower() or | |
259 | q in ctx.description().lower() or |
|
259 | q in ctx.description().lower() or | |
260 | q in " ".join(ctx.files()[:20]).lower()): |
|
260 | q in " ".join(ctx.files()[:20]).lower()): | |
261 | miss = 1 |
|
261 | miss = 1 | |
262 | break |
|
262 | break | |
263 | if miss: |
|
263 | if miss: | |
264 | continue |
|
264 | continue | |
265 |
|
265 | |||
266 | count += 1 |
|
266 | count += 1 | |
267 | n = ctx.node() |
|
267 | n = ctx.node() | |
268 |
|
268 | |||
269 | yield self.t('searchentry', |
|
269 | yield self.t('searchentry', | |
270 | parity=self.stripes(count), |
|
270 | parity=self.stripes(count), | |
271 | author=ctx.user(), |
|
271 | author=ctx.user(), | |
272 | parent=self.siblings(ctx.parents()), |
|
272 | parent=self.siblings(ctx.parents()), | |
273 | child=self.siblings(ctx.children()), |
|
273 | child=self.siblings(ctx.children()), | |
274 | changelogtag=self.showtag("changelogtag",n), |
|
274 | changelogtag=self.showtag("changelogtag",n), | |
275 | desc=ctx.description(), |
|
275 | desc=ctx.description(), | |
276 | date=ctx.date(), |
|
276 | date=ctx.date(), | |
277 | files=self.listfilediffs(ctx.files(), n), |
|
277 | files=self.listfilediffs(ctx.files(), n), | |
278 | rev=ctx.rev(), |
|
278 | rev=ctx.rev(), | |
279 | node=hex(n)) |
|
279 | node=hex(n)) | |
280 |
|
280 | |||
281 | if count >= self.maxchanges: |
|
281 | if count >= self.maxchanges: | |
282 | break |
|
282 | break | |
283 |
|
283 | |||
284 | cl = self.repo.changelog |
|
284 | cl = self.repo.changelog | |
285 |
|
285 | |||
286 | yield self.t('search', |
|
286 | yield self.t('search', | |
287 | query=query, |
|
287 | query=query, | |
288 | node=hex(cl.tip()), |
|
288 | node=hex(cl.tip()), | |
289 | entries=changelist) |
|
289 | entries=changelist) | |
290 |
|
290 | |||
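A changeset matches the search query only if every whitespace-separated term occurs in the committer name, the description, or the first 20 file names, all lowercased; results stop after maxchanges hits and revisions are walked newest first in batches of 100. A standalone sketch of just the matching predicate, with made-up sample data:

    def matches(query, user, desc, files):
        """True if every query word appears in the user, description or files."""
        haystacks = (user.lower(), desc.lower(), " ".join(files[:20]).lower())
        return all(any(word in h for h in haystacks)
                   for word in query.lower().split())

    # hypothetical changeset metadata, used only to exercise the predicate
    print(matches("hgweb encoding",
                  "Example Hacker <hacker@example.com>",
                  "fix encoding of file names in hgweb",
                  ["mercurial/hgweb/hgweb_mod.py"]))   # -> True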
291 | def changeset(self, ctx): |
|
291 | def changeset(self, ctx): | |
292 | n = ctx.node() |
|
292 | n = ctx.node() | |
293 | parents = ctx.parents() |
|
293 | parents = ctx.parents() | |
294 | p1 = parents[0].node() |
|
294 | p1 = parents[0].node() | |
295 |
|
295 | |||
296 | files = [] |
|
296 | files = [] | |
297 | parity = 0 |
|
297 | parity = 0 | |
298 | for f in ctx.files(): |
|
298 | for f in ctx.files(): | |
299 | files.append(self.t("filenodelink", |
|
299 | files.append(self.t("filenodelink", | |
300 | node=hex(n), file=f, |
|
300 | node=hex(n), file=f, | |
301 | parity=parity)) |
|
301 | parity=parity)) | |
302 | parity = 1 - parity |
|
302 | parity = 1 - parity | |
303 |
|
303 | |||
304 | def diff(**map): |
|
304 | def diff(**map): | |
305 | yield self.diff(p1, n, None) |
|
305 | yield self.diff(p1, n, None) | |
306 |
|
306 | |||
307 | yield self.t('changeset', |
|
307 | yield self.t('changeset', | |
308 | diff=diff, |
|
308 | diff=diff, | |
309 | rev=ctx.rev(), |
|
309 | rev=ctx.rev(), | |
310 | node=hex(n), |
|
310 | node=hex(n), | |
311 | parent=self.siblings(parents), |
|
311 | parent=self.siblings(parents), | |
312 | child=self.siblings(ctx.children()), |
|
312 | child=self.siblings(ctx.children()), | |
313 | changesettag=self.showtag("changesettag",n), |
|
313 | changesettag=self.showtag("changesettag",n), | |
314 | author=ctx.user(), |
|
314 | author=ctx.user(), | |
315 | desc=ctx.description(), |
|
315 | desc=ctx.description(), | |
316 | date=ctx.date(), |
|
316 | date=ctx.date(), | |
317 | files=files, |
|
317 | files=files, | |
318 | archives=self.archivelist(hex(n))) |
|
318 | archives=self.archivelist(hex(n))) | |
319 |
|
319 | |||
320 | def filelog(self, fctx): |
|
320 | def filelog(self, fctx): | |
321 | f = fctx.path() |
|
321 | f = fctx.path() | |
322 | fl = fctx.filelog() |
|
322 | fl = fctx.filelog() | |
323 | count = fl.count() |
|
323 | count = fl.count() | |
324 | pagelen = self.maxshortchanges |
|
324 | pagelen = self.maxshortchanges | |
325 | pos = fctx.filerev() |
|
325 | pos = fctx.filerev() | |
326 | start = max(0, pos - pagelen + 1) |
|
326 | start = max(0, pos - pagelen + 1) | |
327 | end = min(count, start + pagelen) |
|
327 | end = min(count, start + pagelen) | |
328 | pos = end - 1 |
|
328 | pos = end - 1 | |
329 |
|
329 | |||
330 | def entries(**map): |
|
330 | def entries(**map): | |
331 | l = [] |
|
331 | l = [] | |
332 | parity = (count - 1) & 1 |
|
332 | parity = (count - 1) & 1 | |
333 |
|
333 | |||
334 | for i in xrange(start, end): |
|
334 | for i in xrange(start, end): | |
335 | ctx = fctx.filectx(i) |
|
335 | ctx = fctx.filectx(i) | |
336 | n = fl.node(i) |
|
336 | n = fl.node(i) | |
337 |
|
337 | |||
338 | l.insert(0, {"parity": parity, |
|
338 | l.insert(0, {"parity": parity, | |
339 | "filerev": i, |
|
339 | "filerev": i, | |
340 | "file": f, |
|
340 | "file": f, | |
341 | "node": hex(ctx.node()), |
|
341 | "node": hex(ctx.node()), | |
342 | "author": ctx.user(), |
|
342 | "author": ctx.user(), | |
343 | "date": ctx.date(), |
|
343 | "date": ctx.date(), | |
344 | "rename": self.renamelink(fl, n), |
|
344 | "rename": self.renamelink(fl, n), | |
345 | "parent": self.siblings(fctx.parents()), |
|
345 | "parent": self.siblings(fctx.parents()), | |
346 | "child": self.siblings(fctx.children()), |
|
346 | "child": self.siblings(fctx.children()), | |
347 | "desc": ctx.description()}) |
|
347 | "desc": ctx.description()}) | |
348 | parity = 1 - parity |
|
348 | parity = 1 - parity | |
349 |
|
349 | |||
350 | for e in l: |
|
350 | for e in l: | |
351 | yield e |
|
351 | yield e | |
352 |
|
352 | |||
353 | nodefunc = lambda x: fctx.filectx(fileid=x) |
|
353 | nodefunc = lambda x: fctx.filectx(fileid=x) | |
354 | nav = revnavgen(pos, pagelen, count, nodefunc) |
|
354 | nav = revnavgen(pos, pagelen, count, nodefunc) | |
355 | yield self.t("filelog", file=f, node=hex(fctx.node()), nav=nav, |
|
355 | yield self.t("filelog", file=f, node=hex(fctx.node()), nav=nav, | |
356 | entries=entries) |
|
356 | entries=entries) | |
357 |
|
357 | |||
358 | def filerevision(self, fctx): |
|
358 | def filerevision(self, fctx): | |
359 | f = fctx.path() |
|
359 | f = fctx.path() | |
360 | text = fctx.data() |
|
360 | text = fctx.data() | |
361 | fl = fctx.filelog() |
|
361 | fl = fctx.filelog() | |
362 | n = fctx.filenode() |
|
362 | n = fctx.filenode() | |
363 |
|
363 | |||
364 | mt = mimetypes.guess_type(f)[0] |
|
364 | mt = mimetypes.guess_type(f)[0] | |
365 | rawtext = text |
|
365 | rawtext = text | |
366 | if util.binary(text): |
|
366 | if util.binary(text): | |
367 | mt = mt or 'application/octet-stream' |
|
367 | mt = mt or 'application/octet-stream' | |
368 | text = "(binary:%s)" % mt |
|
368 | text = "(binary:%s)" % mt | |
369 | mt = mt or 'text/plain' |
|
369 | mt = mt or 'text/plain' | |
370 |
|
370 | |||
371 | def lines(): |
|
371 | def lines(): | |
372 | for l, t in enumerate(text.splitlines(1)): |
|
372 | for l, t in enumerate(text.splitlines(1)): | |
373 | yield {"line": t, |
|
373 | yield {"line": t, | |
374 | "linenumber": "% 6d" % (l + 1), |
|
374 | "linenumber": "% 6d" % (l + 1), | |
375 | "parity": self.stripes(l)} |
|
375 | "parity": self.stripes(l)} | |
376 |
|
376 | |||
377 | yield self.t("filerevision", |
|
377 | yield self.t("filerevision", | |
378 | file=f, |
|
378 | file=f, | |
379 | path=_up(f), |
|
379 | path=_up(f), | |
380 | text=lines(), |
|
380 | text=lines(), | |
381 | raw=rawtext, |
|
381 | raw=rawtext, | |
382 | mimetype=mt, |
|
382 | mimetype=mt, | |
383 | rev=fctx.rev(), |
|
383 | rev=fctx.rev(), | |
384 | node=hex(fctx.node()), |
|
384 | node=hex(fctx.node()), | |
385 | author=fctx.user(), |
|
385 | author=fctx.user(), | |
386 | date=fctx.date(), |
|
386 | date=fctx.date(), | |
387 | desc=fctx.description(), |
|
387 | desc=fctx.description(), | |
388 | parent=self.siblings(fctx.parents()), |
|
388 | parent=self.siblings(fctx.parents()), | |
389 | child=self.siblings(fctx.children()), |
|
389 | child=self.siblings(fctx.children()), | |
390 | rename=self.renamelink(fl, n), |
|
390 | rename=self.renamelink(fl, n), | |
391 | permissions=fctx.manifest().execf(f)) |
|
391 | permissions=fctx.manifest().execf(f)) | |
392 |
|
392 | |||
393 | def fileannotate(self, fctx): |
|
393 | def fileannotate(self, fctx): | |
394 | f = fctx.path() |
|
394 | f = fctx.path() | |
395 | n = fctx.filenode() |
|
395 | n = fctx.filenode() | |
396 | fl = fctx.filelog() |
|
396 | fl = fctx.filelog() | |
397 |
|
397 | |||
398 | def annotate(**map): |
|
398 | def annotate(**map): | |
399 | parity = 0 |
|
399 | parity = 0 | |
400 | last = None |
|
400 | last = None | |
401 | for f, l in fctx.annotate(follow=True): |
|
401 | for f, l in fctx.annotate(follow=True): | |
402 | fnode = f.filenode() |
|
402 | fnode = f.filenode() | |
403 | name = self.repo.ui.shortuser(f.user()) |
|
403 | name = self.repo.ui.shortuser(f.user()) | |
404 |
|
404 | |||
405 | if last != fnode: |
|
405 | if last != fnode: | |
406 | parity = 1 - parity |
|
406 | parity = 1 - parity | |
407 | last = fnode |
|
407 | last = fnode | |
408 |
|
408 | |||
409 | yield {"parity": parity, |
|
409 | yield {"parity": parity, | |
410 | "node": hex(f.node()), |
|
410 | "node": hex(f.node()), | |
411 | "rev": f.rev(), |
|
411 | "rev": f.rev(), | |
412 | "author": name, |
|
412 | "author": name, | |
413 | "file": f.path(), |
|
413 | "file": f.path(), | |
414 | "line": l} |
|
414 | "line": l} | |
415 |
|
415 | |||
416 | yield self.t("fileannotate", |
|
416 | yield self.t("fileannotate", | |
417 | file=f, |
|
417 | file=f, | |
418 | annotate=annotate, |
|
418 | annotate=annotate, | |
419 | path=_up(f), |
|
419 | path=_up(f), | |
420 | rev=fctx.rev(), |
|
420 | rev=fctx.rev(), | |
421 | node=hex(fctx.node()), |
|
421 | node=hex(fctx.node()), | |
422 | author=fctx.user(), |
|
422 | author=fctx.user(), | |
423 | date=fctx.date(), |
|
423 | date=fctx.date(), | |
424 | desc=fctx.description(), |
|
424 | desc=fctx.description(), | |
425 | rename=self.renamelink(fl, n), |
|
425 | rename=self.renamelink(fl, n), | |
426 | parent=self.siblings(fctx.parents()), |
|
426 | parent=self.siblings(fctx.parents()), | |
427 | child=self.siblings(fctx.children()), |
|
427 | child=self.siblings(fctx.children()), | |
428 | permissions=fctx.manifest().execf(f)) |
|
428 | permissions=fctx.manifest().execf(f)) | |
429 |
|
429 | |||
430 | def manifest(self, ctx, path): |
|
430 | def manifest(self, ctx, path): | |
431 | mf = ctx.manifest() |
|
431 | mf = ctx.manifest() | |
432 | node = ctx.node() |
|
432 | node = ctx.node() | |
433 |
|
433 | |||
434 | files = {} |
|
434 | files = {} | |
435 |
|
435 | |||
436 | if path and path[-1] != "/": |
|
436 | if path and path[-1] != "/": | |
437 | path += "/" |
|
437 | path += "/" | |
438 | l = len(path) |
|
438 | l = len(path) | |
439 | abspath = "/" + path |
|
439 | abspath = "/" + path | |
440 |
|
440 | |||
441 | for f, n in mf.items(): |
|
441 | for f, n in mf.items(): | |
442 | if f[:l] != path: |
|
442 | if f[:l] != path: | |
443 | continue |
|
443 | continue | |
444 | remain = f[l:] |
|
444 | remain = f[l:] | |
445 | if "/" in remain: |
|
445 | if "/" in remain: | |
446 | short = remain[:remain.index("/") + 1] # keep "/" to mark a directory |
|
446 | short = remain[:remain.index("/") + 1] # keep "/" to mark a directory | |
447 | files[short] = (f, None) |
|
447 | files[short] = (f, None) | |
448 | else: |
|
448 | else: | |
449 | short = os.path.basename(remain) |
|
449 | short = os.path.basename(remain) | |
450 | files[short] = (f, n) |
|
450 | files[short] = (f, n) | |
451 |
|
451 | |||
452 | def filelist(**map): |
|
452 | def filelist(**map): | |
453 | parity = 0 |
|
453 | parity = 0 | |
454 | fl = files.keys() |
|
454 | fl = files.keys() | |
455 | fl.sort() |
|
455 | fl.sort() | |
456 | for f in fl: |
|
456 | for f in fl: | |
457 | full, fnode = files[f] |
|
457 | full, fnode = files[f] | |
458 | if not fnode: |
|
458 | if not fnode: | |
459 | continue |
|
459 | continue | |
460 |
|
460 | |||
461 | yield {"file": full, |
|
461 | yield {"file": full, | |
462 | "parity": self.stripes(parity), |
|
462 | "parity": self.stripes(parity), | |
463 | "basename": f, |
|
463 | "basename": f, | |
464 | "size": ctx.filectx(full).size(), |
|
464 | "size": ctx.filectx(full).size(), | |
465 | "permissions": mf.execf(full)} |
|
465 | "permissions": mf.execf(full)} | |
466 | parity += 1 |
|
466 | parity += 1 | |
467 |
|
467 | |||
468 | def dirlist(**map): |
|
468 | def dirlist(**map): | |
469 | parity = 0 |
|
469 | parity = 0 | |
470 | fl = files.keys() |
|
470 | fl = files.keys() | |
471 | fl.sort() |
|
471 | fl.sort() | |
472 | for f in fl: |
|
472 | for f in fl: | |
473 | full, fnode = files[f] |
|
473 | full, fnode = files[f] | |
474 | if fnode: |
|
474 | if fnode: | |
475 | continue |
|
475 | continue | |
476 |
|
476 | |||
477 | yield {"parity": self.stripes(parity), |
|
477 | yield {"parity": self.stripes(parity), | |
478 | "path": os.path.join(abspath, f), |
|
478 | "path": os.path.join(abspath, f), | |
479 | "basename": f[:-1]} |
|
479 | "basename": f[:-1]} | |
480 | parity += 1 |
|
480 | parity += 1 | |
481 |
|
481 | |||
482 | yield self.t("manifest", |
|
482 | yield self.t("manifest", | |
483 | rev=ctx.rev(), |
|
483 | rev=ctx.rev(), | |
484 | node=hex(node), |
|
484 | node=hex(node), | |
485 | path=abspath, |
|
485 | path=abspath, | |
486 | up=_up(abspath), |
|
486 | up=_up(abspath), | |
487 | fentries=filelist, |
|
487 | fentries=filelist, | |
488 | dentries=dirlist, |
|
488 | dentries=dirlist, | |
489 | archives=self.archivelist(hex(node))) |
|
489 | archives=self.archivelist(hex(node))) | |
490 |
|
490 | |||
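manifest() flattens the full manifest into a single directory level: entries whose remainder after the requested prefix still contains a "/" are recorded as subdirectories (keyed by their first path component with its trailing slash and a None file node), everything else as plain files, which is exactly the convention filelist() and dirlist() rely on. A small sketch of that bucketing over a toy manifest:

    import os

    def split_level(manifest, path):
        """Bucket manifest entries one directory level below 'path'."""
        if path and not path.endswith("/"):
            path += "/"
        files = {}
        for f, n in manifest.items():
            if not f.startswith(path):
                continue
            remain = f[len(path):]
            if "/" in remain:
                short = remain[:remain.index("/") + 1]
                files[short] = (f, None)                      # a subdirectory
            else:
                files[os.path.basename(remain)] = (f, n)      # a plain file
        return files

    mf = {"doc/hgrc.5.txt": "n1", "mercurial/hgweb/hgweb_mod.py": "n2",
          "README": "n3"}
    print(split_level(mf, ""))            # README is a file, doc/ and mercurial/ are dirs
    print(split_level(mf, "mercurial/"))  # hgweb/ shows up as a subdirectory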
491 | def tags(self): |
|
491 | def tags(self): | |
492 | i = self.repo.tagslist() |
|
492 | i = self.repo.tagslist() | |
493 | i.reverse() |
|
493 | i.reverse() | |
494 |
|
494 | |||
495 | def entries(notip=False, **map): |
|
495 | def entries(notip=False, **map): | |
496 | parity = 0 |
|
496 | parity = 0 | |
497 | for k, n in i: |
|
497 | for k, n in i: | |
498 | if notip and k == "tip": |
|
498 | if notip and k == "tip": | |
499 | continue |
|
499 | continue | |
500 | yield {"parity": self.stripes(parity), |
|
500 | yield {"parity": self.stripes(parity), | |
501 | "tag": k, |
|
501 | "tag": k, | |
502 | "date": self.repo.changectx(n).date(), |
|
502 | "date": self.repo.changectx(n).date(), | |
503 | "node": hex(n)} |
|
503 | "node": hex(n)} | |
504 | parity += 1 |
|
504 | parity += 1 | |
505 |
|
505 | |||
506 | yield self.t("tags", |
|
506 | yield self.t("tags", | |
507 | node=hex(self.repo.changelog.tip()), |
|
507 | node=hex(self.repo.changelog.tip()), | |
508 | entries=lambda **x: entries(False, **x), |
|
508 | entries=lambda **x: entries(False, **x), | |
509 | entriesnotip=lambda **x: entries(True, **x)) |
|
509 | entriesnotip=lambda **x: entries(True, **x)) | |
510 |
|
510 | |||
511 | def summary(self): |
|
511 | def summary(self): | |
512 | i = self.repo.tagslist() |
|
512 | i = self.repo.tagslist() | |
513 | i.reverse() |
|
513 | i.reverse() | |
514 |
|
514 | |||
515 | def tagentries(**map): |
|
515 | def tagentries(**map): | |
516 | parity = 0 |
|
516 | parity = 0 | |
517 | count = 0 |
|
517 | count = 0 | |
518 | for k, n in i: |
|
518 | for k, n in i: | |
519 | if k == "tip": # skip tip |
|
519 | if k == "tip": # skip tip | |
520 | continue |
|
520 | continue | |
521 |
|
521 | |||
522 | count += 1 |
|
522 | count += 1 | |
523 | if count > 10: # limit to 10 tags |
|
523 | if count > 10: # limit to 10 tags | |
524 | break |
|
524 | break | |
525 |
|
525 | |||
526 | yield self.t("tagentry", |
|
526 | yield self.t("tagentry", | |
527 | parity=self.stripes(parity), |
|
527 | parity=self.stripes(parity), | |
528 | tag=k, |
|
528 | tag=k, | |
529 | node=hex(n), |
|
529 | node=hex(n), | |
530 | date=self.repo.changectx(n).date()) |
|
530 | date=self.repo.changectx(n).date()) | |
531 | parity += 1 |
|
531 | parity += 1 | |
532 |
|
532 | |||
533 | def heads(**map): |
|
533 | def heads(**map): | |
534 | parity = 0 |
|
534 | parity = 0 | |
535 | count = 0 |
|
535 | count = 0 | |
536 |
|
536 | |||
537 | for node in self.repo.heads(): |
|
537 | for node in self.repo.heads(): | |
538 | count += 1 |
|
538 | count += 1 | |
539 | if count > 10: |
|
539 | if count > 10: | |
540 | break |
|
540 | break | |
541 |
|
541 | |||
542 | ctx = self.repo.changectx(node) |
|
542 | ctx = self.repo.changectx(node) | |
543 |
|
543 | |||
544 | yield {'parity': self.stripes(parity), |
|
544 | yield {'parity': self.stripes(parity), | |
545 | 'branch': ctx.branch(), |
|
545 | 'branch': ctx.branch(), | |
546 | 'node': hex(node), |
|
546 | 'node': hex(node), | |
547 | 'date': ctx.date()} |
|
547 | 'date': ctx.date()} | |
548 | parity += 1 |
|
548 | parity += 1 | |
549 |
|
549 | |||
550 | def changelist(**map): |
|
550 | def changelist(**map): | |
551 | parity = 0 |
|
551 | parity = 0 | |
552 | l = [] # build a list in forward order for efficiency |
|
552 | l = [] # build a list in forward order for efficiency | |
553 | for i in xrange(start, end): |
|
553 | for i in xrange(start, end): | |
554 | ctx = self.repo.changectx(i) |
|
554 | ctx = self.repo.changectx(i) | |
555 | hn = hex(ctx.node()) |
|
555 | hn = hex(ctx.node()) | |
556 |
|
556 | |||
557 | l.insert(0, self.t( |
|
557 | l.insert(0, self.t( | |
558 | 'shortlogentry', |
|
558 | 'shortlogentry', | |
559 | parity=parity, |
|
559 | parity=parity, | |
560 | author=ctx.user(), |
|
560 | author=ctx.user(), | |
561 | desc=ctx.description(), |
|
561 | desc=ctx.description(), | |
562 | date=ctx.date(), |
|
562 | date=ctx.date(), | |
563 | rev=i, |
|
563 | rev=i, | |
564 | node=hn)) |
|
564 | node=hn)) | |
565 | parity = 1 - parity |
|
565 | parity = 1 - parity | |
566 |
|
566 | |||
567 | yield l |
|
567 | yield l | |
568 |
|
568 | |||
569 | cl = self.repo.changelog |
|
569 | cl = self.repo.changelog | |
570 | count = cl.count() |
|
570 | count = cl.count() | |
571 | start = max(0, count - self.maxchanges) |
|
571 | start = max(0, count - self.maxchanges) | |
572 | end = min(count, start + self.maxchanges) |
|
572 | end = min(count, start + self.maxchanges) | |
573 |
|
573 | |||
574 | yield self.t("summary", |
|
574 | yield self.t("summary", | |
575 | desc=self.config("web", "description", "unknown"), |
|
575 | desc=self.config("web", "description", "unknown"), | |
576 | owner=(self.config("ui", "username") or # preferred |
|
576 | owner=(self.config("ui", "username") or # preferred | |
577 | self.config("web", "contact") or # deprecated |
|
577 | self.config("web", "contact") or # deprecated | |
578 | self.config("web", "author", "unknown")), # also |
|
578 | self.config("web", "author", "unknown")), # also | |
579 | lastchange=cl.read(cl.tip())[2], |
|
579 | lastchange=cl.read(cl.tip())[2], | |
580 | tags=tagentries, |
|
580 | tags=tagentries, | |
581 | heads=heads, |
|
581 | heads=heads, | |
582 | shortlog=changelist, |
|
582 | shortlog=changelist, | |
583 | node=hex(cl.tip()), |
|
583 | node=hex(cl.tip()), | |
584 | archives=self.archivelist("tip")) |
|
584 | archives=self.archivelist("tip")) | |
585 |
|
585 | |||
586 | def filediff(self, fctx): |
|
586 | def filediff(self, fctx): | |
587 | n = fctx.node() |
|
587 | n = fctx.node() | |
588 | path = fctx.path() |
|
588 | path = fctx.path() | |
589 | parents = fctx.parents() |
|
589 | parents = fctx.parents() | |
590 | p1 = parents and parents[0].node() or nullid |
|
590 | p1 = parents and parents[0].node() or nullid | |
591 |
|
591 | |||
592 | def diff(**map): |
|
592 | def diff(**map): | |
593 | yield self.diff(p1, n, [path]) |
|
593 | yield self.diff(p1, n, [path]) | |
594 |
|
594 | |||
595 | yield self.t("filediff", |
|
595 | yield self.t("filediff", | |
596 | file=path, |
|
596 | file=path, | |
597 | node=hex(n), |
|
597 | node=hex(n), | |
598 | rev=fctx.rev(), |
|
598 | rev=fctx.rev(), | |
599 | parent=self.siblings(parents), |
|
599 | parent=self.siblings(parents), | |
600 | child=self.siblings(fctx.children()), |
|
600 | child=self.siblings(fctx.children()), | |
601 | diff=diff) |
|
601 | diff=diff) | |
602 |
|
602 | |||
603 | archive_specs = { |
|
603 | archive_specs = { | |
604 | 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None), |
|
604 | 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None), | |
605 | 'gz': ('application/x-tar', 'tgz', '.tar.gz', None), |
|
605 | 'gz': ('application/x-tar', 'tgz', '.tar.gz', None), | |
606 | 'zip': ('application/zip', 'zip', '.zip', None), |
|
606 | 'zip': ('application/zip', 'zip', '.zip', None), | |
607 | } |
|
607 | } | |
608 |
|
608 | |||
609 | def archive(self, req, cnode, type_): |
|
609 | def archive(self, req, cnode, type_): | |
610 | reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame)) |
|
610 | reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame)) | |
611 | name = "%s-%s" % (reponame, short(cnode)) |
|
611 | name = "%s-%s" % (reponame, short(cnode)) | |
612 | mimetype, artype, extension, encoding = self.archive_specs[type_] |
|
612 | mimetype, artype, extension, encoding = self.archive_specs[type_] | |
613 | headers = [('Content-type', mimetype), |
|
613 | headers = [('Content-type', mimetype), | |
614 | ('Content-disposition', 'attachment; filename=%s%s' % |
|
614 | ('Content-disposition', 'attachment; filename=%s%s' % | |
615 | (name, extension))] |
|
615 | (name, extension))] | |
616 | if encoding: |
|
616 | if encoding: | |
617 | headers.append(('Content-encoding', encoding)) |
|
617 | headers.append(('Content-encoding', encoding)) | |
618 | req.header(headers) |
|
618 | req.header(headers) | |
619 | archival.archive(self.repo, req.out, cnode, artype, prefix=name) |
|
619 | archival.archive(self.repo, req.out, cnode, artype, prefix=name) | |
620 |
|
620 | |||
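archive() builds the download name from the repository name, with runs of non-word characters collapsed to "-", plus the short node hash, then takes the MIME type, archiver type and extension from archive_specs. A sketch of the header construction with a hypothetical repository name and short hash:

    import re

    archive_specs = {
        'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
        'gz':  ('application/x-tar', 'tgz',  '.tar.gz',  None),
        'zip': ('application/zip',   'zip',  '.zip',     None),
    }

    def archive_headers(reponame, shortnode, type_):
        """Return the (header, value) pairs archive() would emit."""
        name = "%s-%s" % (re.sub(r"\W+", "-", reponame), shortnode)
        mimetype, artype, extension, encoding = archive_specs[type_]
        headers = [('Content-type', mimetype),
                   ('Content-disposition',
                    'attachment; filename=%s%s' % (name, extension))]
        if encoding:
            headers.append(('Content-encoding', encoding))
        return headers

    # hypothetical repository name and short hash
    print(archive_headers("my test repo", "1a2b3c4d5e6f", "gz"))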
621 | # TODO: add tags to things |
|
621 | # TODO: add tags to things | |
622 | # tags -> list of changesets corresponding to tags |
|
622 | # tags -> list of changesets corresponding to tags | |
623 | # find tag, changeset, file |
|
623 | # find tag, changeset, file | |
624 |
|
624 | |||
625 | def cleanpath(self, path): |
|
625 | def cleanpath(self, path): | |
626 | path = path.lstrip('/') |
|
626 | path = path.lstrip('/') | |
627 | return util.canonpath(self.repo.root, '', path) |
|
627 | return util.canonpath(self.repo.root, '', path) | |
628 |
|
628 | |||
629 | def run(self): |
|
629 | def run(self): | |
630 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): |
|
630 | if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."): | |
631 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") |
|
631 | raise RuntimeError("This function is only intended to be called while running as a CGI script.") | |
632 | import mercurial.hgweb.wsgicgi as wsgicgi |
|
632 | import mercurial.hgweb.wsgicgi as wsgicgi | |
633 | from request import wsgiapplication |
|
633 | from request import wsgiapplication | |
634 | def make_web_app(): |
|
634 | def make_web_app(): | |
635 | return self |
|
635 | return self | |
636 | wsgicgi.launch(wsgiapplication(make_web_app)) |
|
636 | wsgicgi.launch(wsgiapplication(make_web_app)) | |
637 |
|
637 | |||
638 | def run_wsgi(self, req): |
|
638 | def run_wsgi(self, req): | |
639 | def header(**map): |
|
639 | def header(**map): | |
640 | header_file = cStringIO.StringIO( |
|
640 | header_file = cStringIO.StringIO( | |
641 | ''.join(self.t("header", encoding=util._encoding, **map))) |
|
641 | ''.join(self.t("header", encoding=util._encoding, **map))) | |
642 | msg = mimetools.Message(header_file, 0) |
|
642 | msg = mimetools.Message(header_file, 0) | |
643 | req.header(msg.items()) |
|
643 | req.header(msg.items()) | |
644 | yield header_file.read() |
|
644 | yield header_file.read() | |
645 |
|
645 | |||
646 | def rawfileheader(**map): |
|
646 | def rawfileheader(**map): | |
647 | req.header([('Content-type', map['mimetype']), |
|
647 | req.header([('Content-type', map['mimetype']), | |
648 | ('Content-disposition', 'filename=%s' % map['file']), |
|
648 | ('Content-disposition', 'filename=%s' % map['file']), | |
649 | ('Content-length', str(len(map['raw'])))]) |
|
649 | ('Content-length', str(len(map['raw'])))]) | |
650 | yield '' |
|
650 | yield '' | |
651 |
|
651 | |||
652 | def footer(**map): |
|
652 | def footer(**map): | |
653 | yield self.t("footer", **map) |
|
653 | yield self.t("footer", **map) | |
654 |
|
654 | |||
655 | def motd(**map): |
|
655 | def motd(**map): | |
656 | yield self.config("web", "motd", "") |
|
656 | yield self.config("web", "motd", "") | |
657 |
|
657 | |||
658 | def expand_form(form): |
|
658 | def expand_form(form): | |
659 | shortcuts = { |
|
659 | shortcuts = { | |
660 | 'cl': [('cmd', ['changelog']), ('rev', None)], |
|
660 | 'cl': [('cmd', ['changelog']), ('rev', None)], | |
661 | 'sl': [('cmd', ['shortlog']), ('rev', None)], |
|
661 | 'sl': [('cmd', ['shortlog']), ('rev', None)], | |
662 | 'cs': [('cmd', ['changeset']), ('node', None)], |
|
662 | 'cs': [('cmd', ['changeset']), ('node', None)], | |
663 | 'f': [('cmd', ['file']), ('filenode', None)], |
|
663 | 'f': [('cmd', ['file']), ('filenode', None)], | |
664 | 'fl': [('cmd', ['filelog']), ('filenode', None)], |
|
664 | 'fl': [('cmd', ['filelog']), ('filenode', None)], | |
665 | 'fd': [('cmd', ['filediff']), ('node', None)], |
|
665 | 'fd': [('cmd', ['filediff']), ('node', None)], | |
666 | 'fa': [('cmd', ['annotate']), ('filenode', None)], |
|
666 | 'fa': [('cmd', ['annotate']), ('filenode', None)], | |
667 | 'mf': [('cmd', ['manifest']), ('manifest', None)], |
|
667 | 'mf': [('cmd', ['manifest']), ('manifest', None)], | |
668 | 'ca': [('cmd', ['archive']), ('node', None)], |
|
668 | 'ca': [('cmd', ['archive']), ('node', None)], | |
669 | 'tags': [('cmd', ['tags'])], |
|
669 | 'tags': [('cmd', ['tags'])], | |
670 | 'tip': [('cmd', ['changeset']), ('node', ['tip'])], |
|
670 | 'tip': [('cmd', ['changeset']), ('node', ['tip'])], | |
671 | 'static': [('cmd', ['static']), ('file', None)] |
|
671 | 'static': [('cmd', ['static']), ('file', None)] | |
672 | } |
|
672 | } | |
673 |
|
673 | |||
674 | for k in shortcuts.iterkeys(): |
|
674 | for k in shortcuts.iterkeys(): | |
675 | if form.has_key(k): |
|
675 | if form.has_key(k): | |
676 | for name, value in shortcuts[k]: |
|
676 | for name, value in shortcuts[k]: | |
677 | if value is None: |
|
677 | if value is None: | |
678 | value = form[k] |
|
678 | value = form[k] | |
679 | form[name] = value |
|
679 | form[name] = value | |
680 | del form[k] |
|
680 | del form[k] | |
681 |
|
681 | |||
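expand_form translates the abbreviated query parameters (cl, cs, f, ...) into the traditional cmd/rev/node form: each shortcut contributes a fixed list of (name, value) pairs, and a None value means "reuse the shortcut's own value". A standalone sketch with two of the shortcuts shown above:

    def expand_form(form, shortcuts):
        """Expand abbreviated query parameters in place, as the code above does."""
        for k in list(shortcuts):
            if k in form:
                for name, value in shortcuts[k]:
                    if value is None:
                        value = form[k]     # reuse the shortcut's own value
                    form[name] = value
                del form[k]

    shortcuts = {'cs': [('cmd', ['changeset']), ('node', None)],
                 'tip': [('cmd', ['changeset']), ('node', ['tip'])]}
    form = {'cs': ['0123abcd']}
    expand_form(form, shortcuts)
    print(form)     # {'cmd': ['changeset'], 'node': ['0123abcd']}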
682 | def rewrite_request(req): |
|
682 | def rewrite_request(req): | |
683 | '''translate new web interface to traditional format''' |
|
683 | '''translate new web interface to traditional format''' | |
684 |
|
684 | |||
685 | def spliturl(req): |
|
685 | def spliturl(req): | |
686 | def firstitem(query): |
|
686 | def firstitem(query): | |
687 | return query.split('&', 1)[0].split(';', 1)[0] |
|
687 | return query.split('&', 1)[0].split(';', 1)[0] | |
688 |
|
688 | |||
689 | def normurl(url): |
|
689 | def normurl(url): | |
690 | inner = '/'.join([x for x in url.split('/') if x]) |
|
690 | inner = '/'.join([x for x in url.split('/') if x]) | |
691 | tl = len(url) > 1 and url.endswith('/') and '/' or '' |
|
691 | tl = len(url) > 1 and url.endswith('/') and '/' or '' | |
692 |
|
692 | |||
693 | return '%s%s%s' % (url.startswith('/') and '/' or '', |
|
693 | return '%s%s%s' % (url.startswith('/') and '/' or '', | |
694 | inner, tl) |
|
694 | inner, tl) | |
695 |
|
695 | |||
696 | root = normurl(urllib.unquote(req.env.get('REQUEST_URI', '').split('?', 1)[0])) |
|
696 | root = normurl(urllib.unquote(req.env.get('REQUEST_URI', '').split('?', 1)[0])) | |
697 | pi = normurl(req.env.get('PATH_INFO', '')) |
|
697 | pi = normurl(req.env.get('PATH_INFO', '')) | |
698 | if pi: |
|
698 | if pi: | |
699 | # strip leading / |
|
699 | # strip leading / | |
700 | pi = pi[1:] |
|
700 | pi = pi[1:] | |
701 | if pi: |
|
701 | if pi: | |
702 | root = root[:-len(pi)] |
|
702 | root = root[:-len(pi)] | |
703 | if req.env.has_key('REPO_NAME'): |
|
703 | if req.env.has_key('REPO_NAME'): | |
704 | rn = req.env['REPO_NAME'] + '/' |
|
704 | rn = req.env['REPO_NAME'] + '/' | |
705 | root += rn |
|
705 | root += rn | |
706 | query = pi[len(rn):] |
|
706 | query = pi[len(rn):] | |
707 | else: |
|
707 | else: | |
708 | query = pi |
|
708 | query = pi | |
709 | else: |
|
709 | else: | |
710 | root += '?' |
|
710 | root += '?' | |
711 | query = firstitem(req.env['QUERY_STRING']) |
|
711 | query = firstitem(req.env['QUERY_STRING']) | |
712 |
|
712 | |||
713 | return (root, query) |
|
713 | return (root, query) | |
714 |
|
714 | |||
715 | req.url, query = spliturl(req) |
|
715 | req.url, query = spliturl(req) | |
716 |
|
716 | |||
717 | if req.form.has_key('cmd'): |
|
717 | if req.form.has_key('cmd'): | |
718 | # old style |
|
718 | # old style | |
719 | return |
|
719 | return | |
720 |
|
720 | |||
721 | args = query.split('/', 2) |
|
721 | args = query.split('/', 2) | |
722 | if not args or not args[0]: |
|
722 | if not args or not args[0]: | |
723 | return |
|
723 | return | |
724 |
|
724 | |||
725 | cmd = args.pop(0) |
|
725 | cmd = args.pop(0) | |
726 | style = cmd.rfind('-') |
|
726 | style = cmd.rfind('-') | |
727 | if style != -1: |
|
727 | if style != -1: | |
728 | req.form['style'] = [cmd[:style]] |
|
728 | req.form['style'] = [cmd[:style]] | |
729 | cmd = cmd[style+1:] |
|
729 | cmd = cmd[style+1:] | |
730 | # avoid accepting e.g. style parameter as command |
|
730 | # avoid accepting e.g. style parameter as command | |
731 | if hasattr(self, 'do_' + cmd): |
|
731 | if hasattr(self, 'do_' + cmd): | |
732 | req.form['cmd'] = [cmd] |
|
732 | req.form['cmd'] = [cmd] | |
733 |
|
733 | |||
734 | if args and args[0]: |
|
734 | if args and args[0]: | |
735 | node = args.pop(0) |
|
735 | node = args.pop(0) | |
736 | req.form['node'] = [node] |
|
736 | req.form['node'] = [node] | |
737 | if args: |
|
737 | if args: | |
738 | req.form['file'] = args |
|
738 | req.form['file'] = args | |
739 |
|
739 | |||
740 | if cmd == 'static': |
|
740 | if cmd == 'static': | |
741 | req.form['file'] = req.form['node'] |
|
741 | req.form['file'] = req.form['node'] | |
742 | elif cmd == 'archive': |
|
742 | elif cmd == 'archive': | |
743 | fn = req.form['node'][0] |
|
743 | fn = req.form['node'][0] | |
744 | for type_, spec in self.archive_specs.iteritems(): |
|
744 | for type_, spec in self.archive_specs.iteritems(): | |
745 | ext = spec[2] |
|
745 | ext = spec[2] | |
746 | if fn.endswith(ext): |
|
746 | if fn.endswith(ext): | |
747 | req.form['node'] = [fn[:-len(ext)]] |
|
747 | req.form['node'] = [fn[:-len(ext)]] | |
748 | req.form['type'] = [type_] |
|
748 | req.form['type'] = [type_] | |
749 |
|
749 | |||
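rewrite_request maps the "new style" URL layout onto the traditional form parameters: the first path component is the command, optionally prefixed with "<style>-", the second becomes node, the remainder becomes file, and "static" and "archive" are special-cased afterwards. A sketch of just the command/style/node split, leaving out the CGI environment handling; known_cmds stands in for the hasattr(self, 'do_' + cmd) check:

    def parse_path(query, known_cmds):
        """Split 'raw-file/tip/README'-style paths into form parameters."""
        form = {}
        args = query.split('/', 2)
        if not args or not args[0]:
            return form
        cmd = args.pop(0)
        dash = cmd.rfind('-')
        if dash != -1:
            form['style'] = [cmd[:dash]]
            cmd = cmd[dash + 1:]
        if cmd in known_cmds:       # mirrors the hasattr(self, 'do_' + cmd) check
            form['cmd'] = [cmd]
        if args and args[0]:
            form['node'] = [args.pop(0)]
        if args:
            form['file'] = args
        return form

    print(parse_path("raw-file/tip/README", {"file", "changeset", "shortlog"}))
    # {'style': ['raw'], 'cmd': ['file'], 'node': ['tip'], 'file': ['README']}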
750 | def sessionvars(**map): |
|
750 | def sessionvars(**map): | |
751 | fields = [] |
|
751 | fields = [] | |
752 | if req.form.has_key('style'): |
|
752 | if req.form.has_key('style'): | |
753 | style = req.form['style'][0] |
|
753 | style = req.form['style'][0] | |
754 | if style != self.config('web', 'style', ''): |
|
754 | if style != self.config('web', 'style', ''): | |
755 | fields.append(('style', style)) |
|
755 | fields.append(('style', style)) | |
756 |
|
756 | |||
757 | separator = req.url[-1] == '?' and ';' or '?' |
|
757 | separator = req.url[-1] == '?' and ';' or '?' | |
758 | for name, value in fields: |
|
758 | for name, value in fields: | |
759 | yield dict(name=name, value=value, separator=separator) |
|
759 | yield dict(name=name, value=value, separator=separator) | |
760 | separator = ';' |
|
760 | separator = ';' | |
761 |
|
761 | |||
762 | self.refresh() |
|
762 | self.refresh() | |
763 |
|
763 | |||
764 | expand_form(req.form) |
|
764 | expand_form(req.form) | |
765 | rewrite_request(req) |
|
765 | rewrite_request(req) | |
766 |
|
766 | |||
767 | style = self.config("web", "style", "") |
|
767 | style = self.config("web", "style", "") | |
768 | if req.form.has_key('style'): |
|
768 | if req.form.has_key('style'): | |
769 | style = req.form['style'][0] |
|
769 | style = req.form['style'][0] | |
770 | mapfile = style_map(self.templatepath, style) |
|
770 | mapfile = style_map(self.templatepath, style) | |
771 |
|
771 | |||
772 | port = req.env["SERVER_PORT"] |
|
772 | port = req.env["SERVER_PORT"] | |
773 | port = port != "80" and (":" + port) or "" |
|
773 | port = port != "80" and (":" + port) or "" | |
774 | urlbase = 'http://%s%s' % (req.env['SERVER_NAME'], port) |
|
774 | urlbase = 'http://%s%s' % (req.env['SERVER_NAME'], port) | |
|
775 | staticurl = self.config("web", "staticurl") or req.url + 'static/' | |||
|
776 | if not staticurl.endswith('/'): | |||
|
777 | staticurl += '/' | |||
775 |
|
778 | |||
776 | if not self.reponame: |
|
779 | if not self.reponame: | |
777 | self.reponame = (self.config("web", "name") |
|
780 | self.reponame = (self.config("web", "name") | |
778 | or req.env.get('REPO_NAME') |
|
781 | or req.env.get('REPO_NAME') | |
779 | or req.url.strip('/') or self.repo.root) |
|
782 | or req.url.strip('/') or self.repo.root) | |
780 |
|
783 | |||
781 | self.t = templater.templater(mapfile, templater.common_filters, |
|
784 | self.t = templater.templater(mapfile, templater.common_filters, | |
782 | defaults={"url": req.url, |
|
785 | defaults={"url": req.url, | |
|
786 | "staticurl": staticurl, | |||
783 | "urlbase": urlbase, |
|
787 | "urlbase": urlbase, | |
784 | "repo": self.reponame, |
|
788 | "repo": self.reponame, | |
785 | "header": header, |
|
789 | "header": header, | |
786 | "footer": footer, |
|
790 | "footer": footer, | |
787 | "motd": motd, |
|
791 | "motd": motd, | |
788 | "rawfileheader": rawfileheader, |
|
792 | "rawfileheader": rawfileheader, | |
789 | "sessionvars": sessionvars |
|
793 | "sessionvars": sessionvars | |
790 | }) |
|
794 | }) | |
791 |
|
795 | |||
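The lines added above (775 to 777 and 786 in the new file) introduce a staticurl template variable: it is taken from web.staticurl when that option is set, otherwise derived from the request URL plus "static/", and always normalized to end with a slash before being passed to the templater. A small sketch of that fallback, with placeholder values standing in for the config lookup and req.url; the CDN address is purely hypothetical:

    def static_url(configured, requrl):
        """Reproduce the web.staticurl fallback added above."""
        staticurl = configured or requrl + 'static/'
        if not staticurl.endswith('/'):
            staticurl += '/'
        return staticurl

    # no [web] staticurl set: static files are served by hgweb itself
    print(static_url(None, "/hg/myrepo/"))                        # /hg/myrepo/static/
    # staticurl pointing at a separate web server (hypothetical URL)
    print(static_url("http://cdn.example.com/hgstatic", "/hg/myrepo/"))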
792 | if not req.form.has_key('cmd'): |
|
796 | if not req.form.has_key('cmd'): | |
793 | req.form['cmd'] = [self.t.cache['default']] |
|
797 | req.form['cmd'] = [self.t.cache['default']] | |
794 |
|
798 | |||
795 | cmd = req.form['cmd'][0] |
|
799 | cmd = req.form['cmd'][0] | |
796 |
|
800 | |||
797 | method = getattr(self, 'do_' + cmd, None) |
|
801 | method = getattr(self, 'do_' + cmd, None) | |
798 | if method: |
|
802 | if method: | |
799 | try: |
|
803 | try: | |
800 | method(req) |
|
804 | method(req) | |
801 | except (hg.RepoError, revlog.RevlogError), inst: |
|
805 | except (hg.RepoError, revlog.RevlogError), inst: | |
802 | req.write(self.t("error", error=str(inst))) |
|
806 | req.write(self.t("error", error=str(inst))) | |
803 | else: |
|
807 | else: | |
804 | req.write(self.t("error", error='No such method: ' + cmd)) |
|
808 | req.write(self.t("error", error='No such method: ' + cmd)) | |
805 |
|
809 | |||
806 | def changectx(self, req): |
|
810 | def changectx(self, req): | |
807 | if req.form.has_key('node'): |
|
811 | if req.form.has_key('node'): | |
808 | changeid = req.form['node'][0] |
|
812 | changeid = req.form['node'][0] | |
809 | elif req.form.has_key('manifest'): |
|
813 | elif req.form.has_key('manifest'): | |
810 | changeid = req.form['manifest'][0] |
|
814 | changeid = req.form['manifest'][0] | |
811 | else: |
|
815 | else: | |
812 | changeid = self.repo.changelog.count() - 1 |
|
816 | changeid = self.repo.changelog.count() - 1 | |
813 |
|
817 | |||
814 | try: |
|
818 | try: | |
815 | ctx = self.repo.changectx(changeid) |
|
819 | ctx = self.repo.changectx(changeid) | |
816 | except hg.RepoError: |
|
820 | except hg.RepoError: | |
817 | man = self.repo.manifest |
|
821 | man = self.repo.manifest | |
818 | mn = man.lookup(changeid) |
|
822 | mn = man.lookup(changeid) | |
819 | ctx = self.repo.changectx(man.linkrev(mn)) |
|
823 | ctx = self.repo.changectx(man.linkrev(mn)) | |
820 |
|
824 | |||
821 | return ctx |
|
825 | return ctx | |
822 |
|
826 | |||
823 | def filectx(self, req): |
|
827 | def filectx(self, req): | |
824 | path = self.cleanpath(req.form['file'][0]) |
|
828 | path = self.cleanpath(req.form['file'][0]) | |
825 | if req.form.has_key('node'): |
|
829 | if req.form.has_key('node'): | |
826 | changeid = req.form['node'][0] |
|
830 | changeid = req.form['node'][0] | |
827 | else: |
|
831 | else: | |
828 | changeid = req.form['filenode'][0] |
|
832 | changeid = req.form['filenode'][0] | |
829 | try: |
|
833 | try: | |
830 | ctx = self.repo.changectx(changeid) |
|
834 | ctx = self.repo.changectx(changeid) | |
831 | fctx = ctx.filectx(path) |
|
835 | fctx = ctx.filectx(path) | |
832 | except hg.RepoError: |
|
836 | except hg.RepoError: | |
833 | fctx = self.repo.filectx(path, fileid=changeid) |
|
837 | fctx = self.repo.filectx(path, fileid=changeid) | |
834 |
|
838 | |||
835 | return fctx |
|
839 | return fctx | |
836 |
|
840 | |||
837 | def stripes(self, parity): |
|
841 | def stripes(self, parity): | |
838 | "make horizontal stripes for easier reading" |
|
842 | "make horizontal stripes for easier reading" | |
839 | if self.stripecount: |
|
843 | if self.stripecount: | |
840 | return (1 + parity / self.stripecount) & 1 |
|
844 | return (1 + parity / self.stripecount) & 1 | |
841 | else: |
|
845 | else: | |
842 | return 0 |
|
846 | return 0 | |
843 |
|
847 | |||
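stripes() turns a running row counter into an alternating 0/1 value, grouping web.stripes consecutive rows into one band so long listings are easier to read; a stripecount of 0 disables striping. The code above uses Python 2 integer division; an equivalent standalone sketch:

    def stripes(parity, stripecount):
        """0/1 banding for row 'parity', grouping 'stripecount' rows per band."""
        if stripecount:
            return (1 + parity // stripecount) & 1
        return 0

    # stripecount=3 groups rows into bands of three
    print([stripes(i, 3) for i in range(9)])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]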
844 | def do_log(self, req): |
|
848 | def do_log(self, req): | |
845 | if req.form.has_key('file') and req.form['file'][0]: |
|
849 | if req.form.has_key('file') and req.form['file'][0]: | |
846 | self.do_filelog(req) |
|
850 | self.do_filelog(req) | |
847 | else: |
|
851 | else: | |
848 | self.do_changelog(req) |
|
852 | self.do_changelog(req) | |
849 |
|
853 | |||
850 | def do_rev(self, req): |
|
854 | def do_rev(self, req): | |
851 | self.do_changeset(req) |
|
855 | self.do_changeset(req) | |
852 |
|
856 | |||
853 | def do_file(self, req): |
|
857 | def do_file(self, req): | |
854 | path = self.cleanpath(req.form.get('file', [''])[0]) |
|
858 | path = self.cleanpath(req.form.get('file', [''])[0]) | |
855 | if path: |
|
859 | if path: | |
856 | try: |
|
860 | try: | |
857 | req.write(self.filerevision(self.filectx(req))) |
|
861 | req.write(self.filerevision(self.filectx(req))) | |
858 | return |
|
862 | return | |
859 | except revlog.LookupError: |
|
863 | except revlog.LookupError: | |
860 | pass |
|
864 | pass | |
861 |
|
865 | |||
862 | req.write(self.manifest(self.changectx(req), path)) |
|
866 | req.write(self.manifest(self.changectx(req), path)) | |
863 |
|
867 | |||
864 | def do_diff(self, req): |
|
868 | def do_diff(self, req): | |
865 | self.do_filediff(req) |
|
869 | self.do_filediff(req) | |
866 |
|
870 | |||
867 | def do_changelog(self, req, shortlog = False): |
|
871 | def do_changelog(self, req, shortlog = False): | |
868 | if req.form.has_key('node'): |
|
872 | if req.form.has_key('node'): | |
869 | ctx = self.changectx(req) |
|
873 | ctx = self.changectx(req) | |
870 | else: |
|
874 | else: | |
871 | if req.form.has_key('rev'): |
|
875 | if req.form.has_key('rev'): | |
872 | hi = req.form['rev'][0] |
|
876 | hi = req.form['rev'][0] | |
873 | else: |
|
877 | else: | |
874 | hi = self.repo.changelog.count() - 1 |
|
878 | hi = self.repo.changelog.count() - 1 | |
875 | try: |
|
879 | try: | |
876 | ctx = self.repo.changectx(hi) |
|
880 | ctx = self.repo.changectx(hi) | |
877 | except hg.RepoError: |
|
881 | except hg.RepoError: | |
878 | req.write(self.search(hi)) # XXX redirect to 404 page? |
|
882 | req.write(self.search(hi)) # XXX redirect to 404 page? | |
879 | return |
|
883 | return | |
880 |
|
884 | |||
881 | req.write(self.changelog(ctx, shortlog = shortlog)) |
|
885 | req.write(self.changelog(ctx, shortlog = shortlog)) | |
882 |
|
886 | |||
883 | def do_shortlog(self, req): |
|
887 | def do_shortlog(self, req): | |
884 | self.do_changelog(req, shortlog = True) |
|
888 | self.do_changelog(req, shortlog = True) | |
885 |
|
889 | |||
886 | def do_changeset(self, req): |
|
890 | def do_changeset(self, req): | |
887 | req.write(self.changeset(self.changectx(req))) |
|
891 | req.write(self.changeset(self.changectx(req))) | |
888 |
|
892 | |||
889 | def do_manifest(self, req): |
|
893 | def do_manifest(self, req): | |
890 | req.write(self.manifest(self.changectx(req), |
|
894 | req.write(self.manifest(self.changectx(req), | |
891 | self.cleanpath(req.form['path'][0]))) |
|
895 | self.cleanpath(req.form['path'][0]))) | |
892 |
|
896 | |||
893 | def do_tags(self, req): |
|
897 | def do_tags(self, req): | |
894 | req.write(self.tags()) |
|
898 | req.write(self.tags()) | |
895 |
|
899 | |||
896 | def do_summary(self, req): |
|
900 | def do_summary(self, req): | |
897 | req.write(self.summary()) |
|
901 | req.write(self.summary()) | |
898 |
|
902 | |||
899 | def do_filediff(self, req): |
|
903 | def do_filediff(self, req): | |
900 | req.write(self.filediff(self.filectx(req))) |
|
904 | req.write(self.filediff(self.filectx(req))) | |
901 |
|
905 | |||
902 | def do_annotate(self, req): |
|
906 | def do_annotate(self, req): | |
903 | req.write(self.fileannotate(self.filectx(req))) |
|
907 | req.write(self.fileannotate(self.filectx(req))) | |
904 |
|
908 | |||
905 | def do_filelog(self, req): |
|
909 | def do_filelog(self, req): | |
906 | req.write(self.filelog(self.filectx(req))) |
|
910 | req.write(self.filelog(self.filectx(req))) | |
907 |
|
911 | |||
908 | def do_lookup(self, req): |
|
912 | def do_lookup(self, req): | |
909 | try: |
|
913 | try: | |
910 | r = hex(self.repo.lookup(req.form['key'][0])) |
|
914 | r = hex(self.repo.lookup(req.form['key'][0])) | |
911 | success = 1 |
|
915 | success = 1 | |
912 | except Exception,inst: |
|
916 | except Exception,inst: | |
913 | r = str(inst) |
|
917 | r = str(inst) | |
914 | success = 0 |
|
918 | success = 0 | |
915 | resp = "%s %s\n" % (success, r) |
|
919 | resp = "%s %s\n" % (success, r) | |
916 | req.httphdr("application/mercurial-0.1", length=len(resp)) |
|
920 | req.httphdr("application/mercurial-0.1", length=len(resp)) | |
917 | req.write(resp) |
|
921 | req.write(resp) | |
918 |
|
922 | |||
919 | def do_heads(self, req): |
|
923 | def do_heads(self, req): | |
920 | resp = " ".join(map(hex, self.repo.heads())) + "\n" |
|
924 | resp = " ".join(map(hex, self.repo.heads())) + "\n" | |
921 | req.httphdr("application/mercurial-0.1", length=len(resp)) |
|
925 | req.httphdr("application/mercurial-0.1", length=len(resp)) | |
922 | req.write(resp) |
|
926 | req.write(resp) | |
923 |
|
927 | |||
924 | def do_branches(self, req): |
|
928 | def do_branches(self, req): | |
925 | nodes = [] |
|
929 | nodes = [] | |
926 | if req.form.has_key('nodes'): |
|
930 | if req.form.has_key('nodes'): | |
927 | nodes = map(bin, req.form['nodes'][0].split(" ")) |
|
931 | nodes = map(bin, req.form['nodes'][0].split(" ")) | |
928 | resp = cStringIO.StringIO() |
|
932 | resp = cStringIO.StringIO() | |
929 | for b in self.repo.branches(nodes): |
|
933 | for b in self.repo.branches(nodes): | |
930 | resp.write(" ".join(map(hex, b)) + "\n") |
|
934 | resp.write(" ".join(map(hex, b)) + "\n") | |
931 | resp = resp.getvalue() |
|
935 | resp = resp.getvalue() | |
932 | req.httphdr("application/mercurial-0.1", length=len(resp)) |
|
936 | req.httphdr("application/mercurial-0.1", length=len(resp)) | |
933 | req.write(resp) |
|
937 | req.write(resp) | |
934 |
|
938 | |||
935 | def do_between(self, req): |
|
939 | def do_between(self, req): | |
936 | if req.form.has_key('pairs'): |
|
940 | if req.form.has_key('pairs'): | |
937 | pairs = [map(bin, p.split("-")) |
|
941 | pairs = [map(bin, p.split("-")) | |
938 | for p in req.form['pairs'][0].split(" ")] |
|
942 | for p in req.form['pairs'][0].split(" ")] | |
939 | resp = cStringIO.StringIO() |
|
943 | resp = cStringIO.StringIO() | |
940 | for b in self.repo.between(pairs): |
|
944 | for b in self.repo.between(pairs): | |
941 | resp.write(" ".join(map(hex, b)) + "\n") |
|
945 | resp.write(" ".join(map(hex, b)) + "\n") | |
942 | resp = resp.getvalue() |
|
946 | resp = resp.getvalue() | |
943 | req.httphdr("application/mercurial-0.1", length=len(resp)) |
|
947 | req.httphdr("application/mercurial-0.1", length=len(resp)) | |
944 | req.write(resp) |
|
948 | req.write(resp) | |
945 |
|
949 | |||
946 | def do_changegroup(self, req): |
|
950 | def do_changegroup(self, req): | |
947 | req.httphdr("application/mercurial-0.1") |
|
951 | req.httphdr("application/mercurial-0.1") | |
948 | nodes = [] |
|
952 | nodes = [] | |
949 | if not self.allowpull: |
|
953 | if not self.allowpull: | |
950 | return |
|
954 | return | |
951 |
|
955 | |||
952 | if req.form.has_key('roots'): |
|
956 | if req.form.has_key('roots'): | |
953 | nodes = map(bin, req.form['roots'][0].split(" ")) |
|
957 | nodes = map(bin, req.form['roots'][0].split(" ")) | |
954 |
|
958 | |||
955 | z = zlib.compressobj() |
|
959 | z = zlib.compressobj() | |
956 | f = self.repo.changegroup(nodes, 'serve') |
|
960 | f = self.repo.changegroup(nodes, 'serve') | |
957 | while 1: |
|
961 | while 1: | |
958 | chunk = f.read(4096) |
|
962 | chunk = f.read(4096) | |
959 | if not chunk: |
|
963 | if not chunk: | |
960 | break |
|
964 | break | |
961 | req.write(z.compress(chunk)) |
|
965 | req.write(z.compress(chunk)) | |
962 |
|
966 | |||
963 | req.write(z.flush()) |
|
967 | req.write(z.flush()) | |
964 |
|
968 | |||
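Both changegroup handlers stream the bundle through a zlib compressor in 4096-byte chunks, so the whole changegroup never has to be held in memory; the final flush() emits whatever the compressor still has buffered. A self-contained sketch of the same chunked-compression loop over an in-memory stream:

    import io
    import zlib

    def stream_compressed(src, write, chunksize=4096):
        """Compress 'src' (a file-like object) to 'write' in fixed-size chunks."""
        z = zlib.compressobj()
        while True:
            chunk = src.read(chunksize)
            if not chunk:
                break
            write(z.compress(chunk))
        write(z.flush())

    out = io.BytesIO()
    stream_compressed(io.BytesIO(b"changegroup data " * 1000), out.write)
    print(len(out.getvalue()), "compressed bytes")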
965 | def do_changegroupsubset(self, req): |
|
969 | def do_changegroupsubset(self, req): | |
966 | req.httphdr("application/mercurial-0.1") |
|
970 | req.httphdr("application/mercurial-0.1") | |
967 | bases = [] |
|
971 | bases = [] | |
968 | heads = [] |
|
972 | heads = [] | |
969 | if not self.allowpull: |
|
973 | if not self.allowpull: | |
970 | return |
|
974 | return | |
971 |
|
975 | |||
972 | if req.form.has_key('bases'): |
|
976 | if req.form.has_key('bases'): | |
973 | bases = [bin(x) for x in req.form['bases'][0].split(' ')] |
|
977 | bases = [bin(x) for x in req.form['bases'][0].split(' ')] | |
974 | if req.form.has_key('heads'): |
|
978 | if req.form.has_key('heads'): | |
975 | heads = [bin(x) for x in req.form['heads'][0].split(' ')] |
|
979 | heads = [bin(x) for x in req.form['heads'][0].split(' ')] | |
976 |
|
980 | |||
977 | z = zlib.compressobj() |
|
981 | z = zlib.compressobj() | |
978 | f = self.repo.changegroupsubset(bases, heads, 'serve') |
|
982 | f = self.repo.changegroupsubset(bases, heads, 'serve') | |
979 | while 1: |
|
983 | while 1: | |
980 | chunk = f.read(4096) |
|
984 | chunk = f.read(4096) | |
981 | if not chunk: |
|
985 | if not chunk: | |
982 | break |
|
986 | break | |
983 | req.write(z.compress(chunk)) |
|
987 | req.write(z.compress(chunk)) | |
984 |
|
988 | |||
985 | req.write(z.flush()) |
|
989 | req.write(z.flush()) | |
986 |
|
990 | |||
987 | def do_archive(self, req): |
|
991 | def do_archive(self, req): | |
988 | changeset = self.repo.lookup(req.form['node'][0]) |
|
992 | changeset = self.repo.lookup(req.form['node'][0]) | |
989 | type_ = req.form['type'][0] |
|
993 | type_ = req.form['type'][0] | |
990 | allowed = self.configlist("web", "allow_archive") |
|
994 | allowed = self.configlist("web", "allow_archive") | |
991 | if (type_ in self.archives and (type_ in allowed or |
|
995 | if (type_ in self.archives and (type_ in allowed or | |
992 | self.configbool("web", "allow" + type_, False))): |
|
996 | self.configbool("web", "allow" + type_, False))): | |
993 | self.archive(req, changeset, type_) |
|
997 | self.archive(req, changeset, type_) | |
994 | return |
|
998 | return | |
995 |
|
999 | |||
996 | req.write(self.t("error")) |
|
1000 | req.write(self.t("error")) | |
997 |
|
1001 | |||
998 | def do_static(self, req): |
|
1002 | def do_static(self, req): | |
999 | fname = req.form['file'][0] |
|
1003 | fname = req.form['file'][0] | |
1000 | # a repo owner may set web.static in .hg/hgrc to get any file |
|
1004 | # a repo owner may set web.static in .hg/hgrc to get any file | |
1001 | # readable by the user running the CGI script |
|
1005 | # readable by the user running the CGI script | |
1002 | static = self.config("web", "static", |
|
1006 | static = self.config("web", "static", | |
1003 | os.path.join(self.templatepath, "static"), |
|
1007 | os.path.join(self.templatepath, "static"), | |
1004 | untrusted=False) |
|
1008 | untrusted=False) | |
1005 | req.write(staticfile(static, fname, req) |
|
1009 | req.write(staticfile(static, fname, req) | |
1006 | or self.t("error", error="%r not found" % fname)) |
|
1010 | or self.t("error", error="%r not found" % fname)) | |
1007 |
|
1011 | |||
1008 | def do_capabilities(self, req): |
|
1012 | def do_capabilities(self, req): | |
1009 | caps = ['lookup', 'changegroupsubset'] |
|
1013 | caps = ['lookup', 'changegroupsubset'] | |
1010 | if self.configbool('server', 'uncompressed'): |
|
1014 | if self.configbool('server', 'uncompressed'): | |
1011 | caps.append('stream=%d' % self.repo.revlogversion) |
|
1015 | caps.append('stream=%d' % self.repo.revlogversion) | |
1012 | # XXX: make configurable and/or share code with do_unbundle: |
|
1016 | # XXX: make configurable and/or share code with do_unbundle: | |
1013 | unbundleversions = ['HG10GZ', 'HG10BZ', 'HG10UN'] |
|
1017 | unbundleversions = ['HG10GZ', 'HG10BZ', 'HG10UN'] | |
1014 | if unbundleversions: |
|
1018 | if unbundleversions: | |
1015 | caps.append('unbundle=%s' % ','.join(unbundleversions)) |
|
1019 | caps.append('unbundle=%s' % ','.join(unbundleversions)) | |
1016 | resp = ' '.join(caps) |
|
1020 | resp = ' '.join(caps) | |
1017 | req.httphdr("application/mercurial-0.1", length=len(resp)) |
|
1021 | req.httphdr("application/mercurial-0.1", length=len(resp)) | |
1018 | req.write(resp) |
|
1022 | req.write(resp) | |
1019 |
|
1023 | |||
1020 | def check_perm(self, req, op, default): |
|
1024 | def check_perm(self, req, op, default): | |
1021 | '''check permission for operation based on user auth. |
|
1025 | '''check permission for operation based on user auth. | |
1022 | return true if op allowed, else false. |
|
1026 | return true if op allowed, else false. | |
1023 | default is policy to use if no config given.''' |
|
1027 | default is policy to use if no config given.''' | |
1024 |
|
1028 | |||
1025 | user = req.env.get('REMOTE_USER') |
|
1029 | user = req.env.get('REMOTE_USER') | |
1026 |
|
1030 | |||
1027 | deny = self.configlist('web', 'deny_' + op) |
|
1031 | deny = self.configlist('web', 'deny_' + op) | |
1028 | if deny and (not user or deny == ['*'] or user in deny): |
|
1032 | if deny and (not user or deny == ['*'] or user in deny): | |
1029 | return False |
|
1033 | return False | |
1030 |
|
1034 | |||
1031 | allow = self.configlist('web', 'allow_' + op) |
|
1035 | allow = self.configlist('web', 'allow_' + op) | |
1032 | return (allow and (allow == ['*'] or user in allow)) or default |
|
1036 | return (allow and (allow == ['*'] or user in allow)) or default | |
1033 |
|
1037 | |||
1034 | def do_unbundle(self, req): |
|
1038 | def do_unbundle(self, req): | |
1035 | def bail(response, headers={}): |
|
1039 | def bail(response, headers={}): | |
1036 | length = int(req.env['CONTENT_LENGTH']) |
|
1040 | length = int(req.env['CONTENT_LENGTH']) | |
1037 | for s in util.filechunkiter(req, limit=length): |
|
1041 | for s in util.filechunkiter(req, limit=length): | |
1038 | # drain incoming bundle, else client will not see |
|
1042 | # drain incoming bundle, else client will not see | |
1039 | # response when run outside cgi script |
|
1043 | # response when run outside cgi script | |
1040 | pass |
|
1044 | pass | |
1041 | req.httphdr("application/mercurial-0.1", headers=headers) |
|
1045 | req.httphdr("application/mercurial-0.1", headers=headers) | |
1042 | req.write('0\n') |
|
1046 | req.write('0\n') | |
1043 | req.write(response) |
|
1047 | req.write(response) | |
1044 |
|
1048 | |||
1045 | # require ssl by default, auth info cannot be sniffed and |
|
1049 | # require ssl by default, auth info cannot be sniffed and | |
1046 | # replayed |
|
1050 | # replayed | |
1047 | ssl_req = self.configbool('web', 'push_ssl', True) |
|
1051 | ssl_req = self.configbool('web', 'push_ssl', True) | |
1048 | if ssl_req: |
|
1052 | if ssl_req: | |
1049 | if not req.env.get('HTTPS'): |
|
1053 | if not req.env.get('HTTPS'): | |
1050 | bail(_('ssl required\n')) |
|
1054 | bail(_('ssl required\n')) | |
1051 | return |
|
1055 | return | |
1052 | proto = 'https' |
|
1056 | proto = 'https' | |
1053 | else: |
|
1057 | else: | |
1054 | proto = 'http' |
|
1058 | proto = 'http' | |
1055 |
|
1059 | |||
1056 | # do not allow push unless explicitly allowed |
|
1060 | # do not allow push unless explicitly allowed | |
1057 | if not self.check_perm(req, 'push', False): |
|
1061 | if not self.check_perm(req, 'push', False): | |
1058 | bail(_('push not authorized\n'), |
|
1062 | bail(_('push not authorized\n'), | |
1059 | headers={'status': '401 Unauthorized'}) |
|
1063 | headers={'status': '401 Unauthorized'}) | |
1060 | return |
|
1064 | return | |
1061 |
|
1065 | |||
1062 | req.httphdr("application/mercurial-0.1") |
|
1066 | req.httphdr("application/mercurial-0.1") | |
1063 |
|
1067 | |||
1064 | their_heads = req.form['heads'][0].split(' ') |
|
1068 | their_heads = req.form['heads'][0].split(' ') | |
1065 |
|
1069 | |||
1066 | def check_heads(): |
|
1070 | def check_heads(): | |
1067 | heads = map(hex, self.repo.heads()) |
|
1071 | heads = map(hex, self.repo.heads()) | |
1068 | return their_heads == [hex('force')] or their_heads == heads |
|
1072 | return their_heads == [hex('force')] or their_heads == heads | |
1069 |
|
1073 | |||
1070 | # fail early if possible |
|
1074 | # fail early if possible | |
1071 | if not check_heads(): |
|
1075 | if not check_heads(): | |
1072 | bail(_('unsynced changes\n')) |
|
1076 | bail(_('unsynced changes\n')) | |
1073 | return |
|
1077 | return | |
1074 |
|
1078 | |||
1075 | # do not lock repo until all changegroup data is |
|
1079 | # do not lock repo until all changegroup data is | |
1076 | # streamed. save to temporary file. |
|
1080 | # streamed. save to temporary file. | |
1077 |
|
1081 | |||
1078 | fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-') |
|
1082 | fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-') | |
1079 | fp = os.fdopen(fd, 'wb+') |
|
1083 | fp = os.fdopen(fd, 'wb+') | |
1080 | try: |
|
1084 | try: | |
1081 | length = int(req.env['CONTENT_LENGTH']) |
|
1085 | length = int(req.env['CONTENT_LENGTH']) | |
1082 | for s in util.filechunkiter(req, limit=length): |
|
1086 | for s in util.filechunkiter(req, limit=length): | |
1083 | fp.write(s) |
|
1087 | fp.write(s) | |
1084 |
|
1088 | |||
1085 | lock = self.repo.lock() |
|
1089 | lock = self.repo.lock() | |
1086 | try: |
|
1090 | try: | |
1087 | if not check_heads(): |
|
1091 | if not check_heads(): | |
1088 | req.write('0\n') |
|
1092 | req.write('0\n') | |
1089 | req.write(_('unsynced changes\n')) |
|
1093 | req.write(_('unsynced changes\n')) | |
1090 | return |
|
1094 | return | |
1091 |
|
1095 | |||
1092 | fp.seek(0) |
|
1096 | fp.seek(0) | |
1093 | header = fp.read(6) |
|
1097 | header = fp.read(6) | |
1094 | if not header.startswith("HG"): |
|
1098 | if not header.startswith("HG"): | |
1095 | # old client with uncompressed bundle |
|
1099 | # old client with uncompressed bundle | |
1096 | def generator(f): |
|
1100 | def generator(f): | |
1097 | yield header |
|
1101 | yield header | |
1098 | for chunk in f: |
|
1102 | for chunk in f: | |
1099 | yield chunk |
|
1103 | yield chunk | |
1100 | elif not header.startswith("HG10"): |
|
1104 | elif not header.startswith("HG10"): | |
1101 | req.write("0\n") |
|
1105 | req.write("0\n") | |
1102 | req.write(_("unknown bundle version\n")) |
|
1106 | req.write(_("unknown bundle version\n")) | |
1103 | return |
|
1107 | return | |
1104 | elif header == "HG10GZ": |
|
1108 | elif header == "HG10GZ": | |
1105 | def generator(f): |
|
1109 | def generator(f): | |
1106 | zd = zlib.decompressobj() |
|
1110 | zd = zlib.decompressobj() | |
1107 | for chunk in f: |
|
1111 | for chunk in f: | |
1108 | yield zd.decompress(chunk) |
|
1112 | yield zd.decompress(chunk) | |
1109 | elif header == "HG10BZ": |
|
1113 | elif header == "HG10BZ": | |
1110 | def generator(f): |
|
1114 | def generator(f): | |
1111 | zd = bz2.BZ2Decompressor() |
|
1115 | zd = bz2.BZ2Decompressor() | |
1112 | zd.decompress("BZ") |
|
1116 | zd.decompress("BZ") | |
1113 | for chunk in f: |
|
1117 | for chunk in f: | |
1114 | yield zd.decompress(chunk) |
|
1118 | yield zd.decompress(chunk) | |
1115 | elif header == "HG10UN": |
|
1119 | elif header == "HG10UN": | |
1116 | def generator(f): |
|
1120 | def generator(f): | |
1117 | for chunk in f: |
|
1121 | for chunk in f: | |
1118 | yield chunk |
|
1122 | yield chunk | |
1119 | else: |
|
1123 | else: | |
1120 | req.write("0\n") |
|
1124 | req.write("0\n") | |
1121 | req.write(_("unknown bundle compression type\n")) |
|
1125 | req.write(_("unknown bundle compression type\n")) | |
1122 | return |
|
1126 | return | |
1123 | gen = generator(util.filechunkiter(fp, 4096)) |
|
1127 | gen = generator(util.filechunkiter(fp, 4096)) | |
1124 |
|
1128 | |||
1125 | # send addchangegroup output to client |
|
1129 | # send addchangegroup output to client | |
1126 |
|
1130 | |||
1127 | old_stdout = sys.stdout |
|
1131 | old_stdout = sys.stdout | |
1128 | sys.stdout = cStringIO.StringIO() |
|
1132 | sys.stdout = cStringIO.StringIO() | |
1129 |
|
1133 | |||
1130 | try: |
|
1134 | try: | |
1131 | url = 'remote:%s:%s' % (proto, |
|
1135 | url = 'remote:%s:%s' % (proto, | |
1132 | req.env.get('REMOTE_HOST', '')) |
|
1136 | req.env.get('REMOTE_HOST', '')) | |
1133 | ret = self.repo.addchangegroup(util.chunkbuffer(gen), |
|
1137 | ret = self.repo.addchangegroup(util.chunkbuffer(gen), | |
1134 | 'serve', url) |
|
1138 | 'serve', url) | |
1135 | finally: |
|
1139 | finally: | |
1136 | val = sys.stdout.getvalue() |
|
1140 | val = sys.stdout.getvalue() | |
1137 | sys.stdout = old_stdout |
|
1141 | sys.stdout = old_stdout | |
1138 | req.write('%d\n' % ret) |
|
1142 | req.write('%d\n' % ret) | |
1139 | req.write(val) |
|
1143 | req.write(val) | |
1140 | finally: |
|
1144 | finally: | |
1141 | lock.release() |
|
1145 | lock.release() | |
1142 | finally: |
|
1146 | finally: | |
1143 | fp.close() |
|
1147 | fp.close() | |
1144 | os.unlink(tempname) |
|
1148 | os.unlink(tempname) | |
1145 |
|
1149 | |||
1146 | def do_stream_out(self, req): |
|
1150 | def do_stream_out(self, req): | |
1147 | req.httphdr("application/mercurial-0.1") |
|
1151 | req.httphdr("application/mercurial-0.1") | |
1148 | streamclone.stream_out(self.repo, req) |
|
1152 | streamclone.stream_out(self.repo, req) |
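For reference, do_capabilities() above advertises the server's wire-protocol features as a single space-separated string. The following is a minimal sketch, not part of the changeset, that mirrors that logic for the common case where server.uncompressed is disabled, so no stream capability is appended; the printed value is only an illustration.

# Sketch only: reproduce the capability string do_capabilities() builds.
caps = ['lookup', 'changegroupsubset']
unbundleversions = ['HG10GZ', 'HG10BZ', 'HG10UN']
if unbundleversions:
    caps.append('unbundle=%s' % ','.join(unbundleversions))
print ' '.join(caps)
# lookup changegroupsubset unbundle=HG10GZ,HG10BZ,HG10UN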
@@ -1,226 +1,231 @@
# hgweb/hgwebdir_mod.py - Web interface for a directory of repositories.
#
# Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from mercurial import demandimport; demandimport.enable()
import os, mimetools, cStringIO
from mercurial.i18n import gettext as _
from mercurial import ui, hg, util, templater
from common import get_mtime, staticfile, style_map
from hgweb_mod import hgweb

# This is a stopgap
class hgwebdir(object):
    def __init__(self, config, parentui=None):
        def cleannames(items):
            return [(name.strip(os.sep), path) for name, path in items]

        self.parentui = parentui
        self.motd = None
        self.style = None
        self.repos_sorted = ('name', False)
        if isinstance(config, (list, tuple)):
            self.repos = cleannames(config)
            self.repos_sorted = ('', False)
        elif isinstance(config, dict):
            self.repos = cleannames(config.items())
            self.repos.sort()
        else:
            if isinstance(config, util.configparser):
                cp = config
            else:
                cp = util.configparser()
                cp.read(config)
            self.repos = []
            if cp.has_section('web'):
                if cp.has_option('web', 'motd'):
                    self.motd = cp.get('web', 'motd')
                if cp.has_option('web', 'style'):
                    self.style = cp.get('web', 'style')
            if cp.has_section('paths'):
                self.repos.extend(cleannames(cp.items('paths')))
            if cp.has_section('collections'):
                for prefix, root in cp.items('collections'):
                    for path in util.walkrepos(root):
                        repo = os.path.normpath(path)
                        name = repo
                        if name.startswith(prefix):
                            name = name[len(prefix):]
                        self.repos.append((name.lstrip(os.sep), repo))
            self.repos.sort()

    def run(self):
        if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
            raise RuntimeError("This function is only intended to be called while running as a CGI script.")
        import mercurial.hgweb.wsgicgi as wsgicgi
        from request import wsgiapplication
        def make_web_app():
            return self
        wsgicgi.launch(wsgiapplication(make_web_app))

    def run_wsgi(self, req):
        def header(**map):
            header_file = cStringIO.StringIO(
                ''.join(tmpl("header", encoding=util._encoding, **map)))
            msg = mimetools.Message(header_file, 0)
            req.header(msg.items())
            yield header_file.read()

        def footer(**map):
            yield tmpl("footer", **map)

        def motd(**map):
            if self.motd is not None:
                yield self.motd
            else:
                yield config('web', 'motd', '')

        parentui = self.parentui or ui.ui(report_untrusted=False)

        def config(section, name, default=None, untrusted=True):
            return parentui.config(section, name, default, untrusted)

        url = req.env['REQUEST_URI'].split('?')[0]
        if not url.endswith('/'):
            url += '/'

+       staticurl = config('web', 'staticurl') or url + 'static/'
+       if not staticurl.endswith('/'):
+           staticurl += '/'
+
        style = self.style
        if style is None:
            style = config('web', 'style', '')
        if req.form.has_key('style'):
            style = req.form['style'][0]
        mapfile = style_map(templater.templatepath(), style)
        tmpl = templater.templater(mapfile, templater.common_filters,
                                   defaults={"header": header,
                                             "footer": footer,
                                             "motd": motd,
-                                            "url": url})
+                                            "url": url,
+                                            "staticurl": staticurl})

        def archivelist(ui, nodeid, url):
            allowed = ui.configlist("web", "allow_archive", untrusted=True)
            for i in [('zip', '.zip'), ('gz', '.tar.gz'), ('bz2', '.tar.bz2')]:
                if i[0] in allowed or ui.configbool("web", "allow" + i[0],
                                                    untrusted=True):
                    yield {"type" : i[0], "extension": i[1],
                           "node": nodeid, "url": url}

        def entries(sortcolumn="", descending=False, **map):
            def sessionvars(**map):
                fields = []
                if req.form.has_key('style'):
                    style = req.form['style'][0]
                    if style != get('web', 'style', ''):
                        fields.append(('style', style))

                separator = url[-1] == '?' and ';' or '?'
                for name, value in fields:
                    yield dict(name=name, value=value, separator=separator)
                    separator = ';'

            rows = []
            parity = 0
            for name, path in self.repos:
                u = ui.ui(parentui=parentui)
                try:
                    u.readconfig(os.path.join(path, '.hg', 'hgrc'))
                except IOError:
                    pass
                def get(section, name, default=None):
                    return u.config(section, name, default, untrusted=True)

                url = ('/'.join([req.env["REQUEST_URI"].split('?')[0], name])
                       .replace("//", "/")) + '/'

                # update time with local timezone
                try:
                    d = (get_mtime(path), util.makedate()[1])
                except OSError:
                    continue

                contact = (get("ui", "username") or # preferred
                           get("web", "contact") or # deprecated
                           get("web", "author", "")) # also
                description = get("web", "description", "")
                name = get("web", "name", name)
                row = dict(contact=contact or "unknown",
                           contact_sort=contact.upper() or "unknown",
                           name=name,
                           name_sort=name,
                           url=url,
                           description=description or "unknown",
                           description_sort=description.upper() or "unknown",
                           lastchange=d,
                           lastchange_sort=d[1]-d[0],
                           sessionvars=sessionvars,
                           archives=archivelist(u, "tip", url))
                if (not sortcolumn
                    or (sortcolumn, descending) == self.repos_sorted):
                    # fast path for unsorted output
                    row['parity'] = parity
                    parity = 1 - parity
                    yield row
                else:
                    rows.append((row["%s_sort" % sortcolumn], row))
            if rows:
                rows.sort()
                if descending:
                    rows.reverse()
                for key, row in rows:
                    row['parity'] = parity
                    parity = 1 - parity
                    yield row

        virtual = req.env.get("PATH_INFO", "").strip('/')
        if virtual.startswith('static/'):
            static = os.path.join(templater.templatepath(), 'static')
            fname = virtual[7:]
            req.write(staticfile(static, fname, req) or
                      tmpl('error', error='%r not found' % fname))
        elif virtual:
            while virtual:
                real = dict(self.repos).get(virtual)
                if real:
                    break
                up = virtual.rfind('/')
                if up < 0:
                    break
                virtual = virtual[:up]
            if real:
                req.env['REPO_NAME'] = virtual
                try:
                    repo = hg.repository(parentui, real)
                    hgweb(repo).run_wsgi(req)
                except IOError, inst:
                    req.write(tmpl("error", error=inst.strerror))
                except hg.RepoError, inst:
                    req.write(tmpl("error", error=str(inst)))
            else:
                req.write(tmpl("notfound", repo=virtual))
        else:
            if req.form.has_key('static'):
                static = os.path.join(templater.templatepath(), "static")
                fname = req.form['static'][0]
                req.write(staticfile(static, fname, req)
                          or tmpl("error", error="%r not found" % fname))
            else:
                sortable = ["name", "description", "contact", "lastchange"]
                sortcolumn, descending = self.repos_sorted
                if req.form.has_key('sort'):
                    sortcolumn = req.form['sort'][0]
                    descending = sortcolumn.startswith('-')
                    if descending:
                        sortcolumn = sortcolumn[1:]
                    if sortcolumn not in sortable:
                        sortcolumn = ""

                sort = [("sort_%s" % column,
                         "%s%s" % ((not descending and column == sortcolumn)
                                   and "-" or "", column))
                        for column in sortable]
                req.write(tmpl("index", entries=entries,
                               sortcolumn=sortcolumn, descending=descending,
                               **dict(sort)))
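The hgwebdir change above introduces a staticurl template keyword: it prefers an explicit web.staticurl setting and otherwise falls back to url + 'static/', always normalising the value to end in a slash. A minimal sketch of that fallback behaviour follows; the helper name static_url and the example values are hypothetical and only illustrate the two branches.

# Sketch only: how the new staticurl value is derived in run_wsgi().
def static_url(configured, url):
    staticurl = configured or url + 'static/'
    if not staticurl.endswith('/'):
        staticurl += '/'
    return staticurl

print static_url(None, '/hg/')
# /hg/static/   (no web.staticurl set: old behaviour is preserved)
print static_url('http://example.com/hg-static', '/hg/')
# http://example.com/hg-static/   (explicit setting, trailing slash appended)

Keeping the url + 'static/' fallback means installations that never set web.staticurl see no change in the generated pages.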
@@ -1,10 +1,10 @@
Content-type: text/html; charset={encoding}

<?xml version="1.0" encoding="{encoding}"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US">
<head>
-<link rel="icon" href="{url}static/hgicon.png" type="image/png">
+<link rel="icon" href="{staticurl}hgicon.png" type="image/png">
<meta name="robots" content="index, nofollow"/>
-<link rel="stylesheet" href="{url}static/style-gitweb.css" type="text/css" />
+<link rel="stylesheet" href="{staticurl}style-gitweb.css" type="text/css" />

@@ -1,8 +1,8 @@
Content-type: text/html; charset={encoding}

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
-<link rel="icon" href="#url#static/hgicon.png" type="image/png">
+<link rel="icon" href="#staticurl#hgicon.png" type="image/png">
<meta name="robots" content="index, nofollow" />
-<link rel="stylesheet" href="#url#static/style.css" type="text/css" />
+<link rel="stylesheet" href="#staticurl#style.css" type="text/css" />
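Both template dialects touched above, the {keyword} gitweb templates and the older #keyword# templates, now build their static asset links from the new staticurl keyword rather than from url plus a literal static/ path. A minimal sketch of the rendered markup, assuming staticurl expands to the hypothetical value below:

# Sketch only: the stylesheet link once staticurl has been substituted.
staticurl = 'http://example.com/hg-static/'   # hypothetical expansion
print '<link rel="stylesheet" href="%sstyle-gitweb.css" type="text/css" />' % staticurl
# <link rel="stylesheet" href="http://example.com/hg-static/style-gitweb.css" type="text/css" />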