Simple interactive background jobs with IPython
We start by loading the backgroundjobs library and defining a few trivial functions to illustrate things with.
from __future__ import print_function
from IPython.lib import backgroundjobs as bg
import sys
import time
def sleepfunc(interval=2, *a, **kw):
    args = dict(interval=interval,
                args=a,
                kwargs=kw)
    time.sleep(interval)
    return args
def diefunc(interval=2, *a, **kw):
    time.sleep(interval)
    raise Exception("Dead job with interval %s" % interval)
def printfunc(interval=1, reps=5):
    for n in range(reps):
        time.sleep(interval)
        print('In the background...', n)
        sys.stdout.flush()
    print('All done!')
    sys.stdout.flush()
Now, we can create a job manager (called simply jobs) and use it to submit new jobs.
Run the cell below and wait a few seconds for the whole thing to finish, until you see the "All done!" printout.
jobs = bg.BackgroundJobManager()
# Start a few jobs, the first one will have ID # 0
jobs.new(sleepfunc, 4)
jobs.new(sleepfunc, kw={'reps':2})
jobs.new('printfunc(1,3)')
# This makes a couple of jobs which will die. Let's keep a reference to
# them for easier traceback reporting later
diejob1 = jobs.new(diefunc, 1)
diejob2 = jobs.new(diefunc, 2)
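If you want to block until everything submitted so far has finished (not needed here, since the cells below work fine while jobs are still running), a simple polling loop will do. This is only a sketch, assuming the manager exposes a running list of still-active jobs, as current IPython versions do:
# Optional sketch: poll the manager until no jobs are left running.
# Assumes the `running` attribute (a list of active jobs) is available.
while jobs.running:
    time.sleep(0.5)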
You can check the status of your jobs at any time:
jobs.status()
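If you would rather inspect jobs programmatically than read the printed report, the manager also keeps them grouped in lists. This sketch assumes the completed and dead list attributes that current versions of the library provide:
# Assumed attributes: lists of job objects grouped by state.
for job in jobs.completed:
    print(job.status, job.result)
for job in jobs.dead:
    print(job.status)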
For any completed job, you can get its result easily:
jobs[0].result
j0 = jobs[0]
j0.join?
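The result attribute only holds the real return value once the job has completed, so if in doubt you can join the job first. A minimal sketch using only the join() and result shown above:
# Block until job 0 has finished, then read its return value.
j0.join()
print('sleepfunc returned:', j0.result)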
You can get the traceback of any dead job. Run the line below again interactively until it prints a traceback (check the status of the job):
print "Status of diejob1:", diejob1.status
diejob1.traceback() # jobs.traceback(4) would also work here, with the job number
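Instead of re-running that cell by hand, you can also just wait for a failing job to die first. A small sketch using the same join()/traceback() calls, this time on the second doomed job:
# Wait for the failing job to finish (i.e. raise), then show its traceback.
diejob2.join()
diejob2.traceback()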
This will print all tracebacks for all dead jobs:
jobs.traceback()
The job manager can be flushed of all completed jobs at any time:
jobs.flush()
After that, the status is simply empty:
jobs.status()
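If you want to check this programmatically rather than visually, something like the sketch below should work, again assuming the completed and dead list attributes mentioned earlier:
# After a flush the processed-job lists should be empty; running jobs,
# if any, are left untouched.
print(len(jobs.completed), len(jobs.dead))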
It's easy to wait on a job:
j = jobs.new(sleepfunc, 2)
print("Will wait for j now...")
sys.stdout.flush()
j.join()
print("Result from j:")
j.result
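The same pattern scales to several jobs at once. A quick sketch, reusing sleepfunc from above, that submits a small batch and waits for all of them:
# Submit a few sleepers, join each one, then collect all the results.
batch = [jobs.new(sleepfunc, i) for i in (1, 2, 3)]
for job in batch:
    job.join()
print([job.result for job in batch])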