- Use the '/' key to quickly focus this field.
- Enter the name of a repository or repository group for a quick search.
- Prefix the query to perform a special search:
user:admin, to search for usernames, always global
user_group:devops, to search for user groups, always global
pr:303, to search for pull request number, title, or description, always global
commit:efced4, to search for commits, scoped to repositories or groups
file:models.py, to search for file paths, scoped to repositories or groups
For advanced full-text search, visit: repository search
update SGEEngineSet to use SGE job arrays
SGE job arrays allow one job id to be associated with a set of processes on
an SGE cluster.
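As a sketch of the mechanism, an SGE job array script carries a "#$ -t" directive so a single qsub submission starts N tasks under one job id, each distinguished by $SGE_TASK_ID. The directives are real SGE syntax, but the exact script ipcluster generates may differ:

```shell
#!/bin/sh
# Hypothetical SGE job array script; not the literal script
# ipcluster emits.
#$ -V            # export the submitter's environment to each task
#$ -t 1-4        # job array: tasks 1..4 share one job id
# Each task can inspect its own index via $SGE_TASK_ID.
echo "starting engine for task $SGE_TASK_ID"
ipengine
```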
Modified SGEEngineSet to be a subclass of PBSEngineSet.
ipcluster will now generate a default SGE job script if --sge-script is
not provided. Most folks should ignore the --sge-script option unless they
know they need it.
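Generating the default script might look roughly like this; the template text and the function name are assumptions for illustration, not the actual ipcluster code:

```python
# Sketch: build a default SGE job-array script for n engines.
# The template is illustrative; ipcluster's real template may differ.

DEFAULT_SGE_TEMPLATE = """#!/bin/sh
#$ -V
#$ -t 1-%(n)d
%(command)s
"""

def default_sge_script(n, command="ipengine"):
    """Return a job-array script that starts `n` engine tasks."""
    return DEFAULT_SGE_TEMPLATE % {"n": n, "command": command}
```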
If --sge-script is passed, check that the script exists and that the user has
defined a "#$ -t" setting within it. If not, the setting is added
for them by copying the script to a temp file and launching the job
array using the modified temp file.
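The check-and-patch step above can be sketched as follows; the function name and the default task range are assumptions, not the actual ipcluster code:

```python
import os
import re
import tempfile

def ensure_task_array(script_path, n):
    """Return a path to a script containing a "#$ -t" directive.

    If `script_path` already defines one, it is returned unchanged;
    otherwise the script is copied to a temp file with the directive
    inserted after the shebang line.
    """
    if not os.path.isfile(script_path):
        raise IOError("SGE script not found: %s" % script_path)
    with open(script_path) as f:
        lines = f.readlines()
    # Look for an existing job-array directive, e.g. "#$ -t 1-8".
    if any(re.match(r"#\$\s+-t\b", line) for line in lines):
        return script_path
    # Insert the directive after the shebang (or at the very top).
    insert_at = 1 if lines and lines[0].startswith("#!") else 0
    lines.insert(insert_at, "#$ -t 1-%d\n" % n)
    fd, tmp_path = tempfile.mkstemp(suffix=".sge")
    with os.fdopen(fd, "w") as f:
        f.writelines(lines)
    return tmp_path
```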
ipengines now terminate cleanly when the ipcluster command exits.
I think we still need to handle furl files when engines are assigned to different
hosts via SGE. Without using SSH or NFS, the only other way is to put
the contents of the furl in the job script before submission, but this is
less secure. This needs discussion.