Reset the interactive namespace __warningregistry__ before executing code

Fixes #6611.

Idea: Right now, people often don't see important warnings when running code in IPython, because (to a first approximation) any given warning will only be issued once per session. Blink and you'll miss it! This is a very common contributor to confused emails to numpy-discussion. E.g.:

    In [5]: 1 / my_array_with_random_contents
    /home/njs/.user-python2.7-64bit-3/bin/ipython:1: RuntimeWarning: divide by zero encountered in divide
      #!/home/njs/.user-python2.7-64bit-3/bin/python
    Out[5]: array([ 1.77073316, -2.29765021, -2.01800811, ..., 1.13871243, -1.08302964, -8.6185091 ])

Oo, right, guess I gotta be careful of those zeros -- thanks, numpy, for giving me that warning! A few days later:

    In [592]: 1 / some_other_array
    Out[592]: array([ 3.07735763, 0.50769289, 0.83984078, ..., -0.67563917, -0.85736257, -1.36511271])

Oops, it turns out that this array had a zero in it too, and that's going to bite me later. But no warning this time!

The effect of this commit is to make it so that warnings triggered by the code in cell 5 do *not* suppress warnings triggered by the code in cell 592. Note that this only applies to warnings triggered *directly* by code entered interactively -- if somepkg.foo() calls anotherpkg.bad_func() which issues a warning, then this warning will still only be displayed once, even if multiple cells call somepkg.foo(). But if cell 5 and cell 592 both call anotherpkg.bad_func() directly, then both will get warnings. (Important exception: if foo() is defined *interactively* and calls anotherpkg.bad_func(), then every cell that calls foo() will display the warning again. This is unavoidable without fixes to CPython upstream.)

Explanation: Python's warning system has some weird quirks. By default, it tries to suppress duplicate warnings, where "duplicate" means the same warning message triggered twice by the same line of code. This requires determining which line of code is responsible for triggering a warning, which is controlled by the stacklevel= argument to warnings.warn. Basically, the idea is that if foo() calls bar() which calls baz() which calls some_deprecated_api(), then baz() gets counted as "responsible", and the warning system makes a note that the use of some_deprecated_api() inside baz() has already been warned about and doesn't need to be warned about again.

So far so good. To accomplish this, there obviously has to be a record somewhere of which line that was. You might think this would be done by recording the filename:linenumber pair in a dict inside the warnings module, or something like that. You would be wrong. What actually happens is that the warnings module uses stack introspection to reach into baz()'s execution environment, creates a global (module-level) variable there named __warningregistry__, and then, inside this dictionary, records just the line number. Basically, it assumes that any given module contains only one line 1, only one line 2, etc., so storing the filename is irrelevant.

For interactive code this is obviously totally wrong -- all cells share the same execution environment and global namespace, and they all contain a new line 1. Currently the warnings module treats these as if they were all the same line. In fact they are not the same line; once we have executed a given chunk of code, we will never see those particular lines again.
As soon as a given chunk of code finishes executing, its line number labels become meaningless, and the corresponding warning registry entries become meaningless as well. Therefore, with this patch we delete the __warningregistry__ each time we execute a new block of code.
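The gist of the fix, as a minimal standalone sketch (run_cell and user_ns here are illustrative stand-ins for the interactive machinery, not IPython's actual implementation):

    import warnings

    def run_cell(source, user_ns):
        # The warnings module keeps its "already warned" bookkeeping in a
        # module-level __warningregistry__ dict inside the namespace that
        # triggered the warning, keyed (roughly) by message, category and
        # line number -- with no filename. Interactive cells all share one
        # namespace and all start again at line 1, so entries left over from
        # earlier cells can silently suppress warnings from later ones.
        # Dropping the registry before each block makes every cell warn afresh.
        user_ns.pop('__warningregistry__', None)
        exec(compile(source, '<cell>', 'exec'), user_ns)

    ns = {}
    run_cell("import warnings; warnings.warn('zero encountered')", ns)
    run_cell("import warnings; warnings.warn('zero encountered')", ns)
    # With the pop() above, both calls emit the UserWarning; without it,
    # only the first one would be shown.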

compilerop.py
"""Compiler tools with improved interactive support.
Provides compilation machinery similar to codeop, but with caching support so
we can provide interactive tracebacks.
Authors
-------
* Robert Kern
* Fernando Perez
* Thomas Kluyver
"""
# Note: though it might be more natural to name this module 'compiler', that
# name is in the stdlib and name collisions with the stdlib tend to produce
# weird problems (often with third-party tools).
#-----------------------------------------------------------------------------
# Copyright (C) 2010-2011 The IPython Development Team.
#
# Distributed under the terms of the BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from __future__ import print_function
# Stdlib imports
import __future__
from ast import PyCF_ONLY_AST
import codeop
import functools
import hashlib
import linecache
import operator
import time
#-----------------------------------------------------------------------------
# Constants
#-----------------------------------------------------------------------------
# Roughly equal to PyCF_MASK | PyCF_MASK_OBSOLETE as defined in pythonrun.h,
# this is used as a bitmask to extract future-related code flags.
PyCF_MASK = functools.reduce(operator.or_,
                             (getattr(__future__, fname).compiler_flag
                              for fname in __future__.all_feature_names))
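# Illustrative aside (not part of the original module): for any compiled code
# object "codeobj", the expression "codeobj.co_flags & PyCF_MASK" keeps only
# the future-statement bits, e.g.
#
#   codeobj = compile("from __future__ import division\n", "<test>", "exec")
#   future_bits = codeobj.co_flags & PyCF_MASK
#
# which is roughly how __future__ imports made in one interactive block can be
# remembered and applied when compiling later blocks.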
#-----------------------------------------------------------------------------
# Local utilities
#-----------------------------------------------------------------------------
def code_name(code, number=0):
    """ Compute a (probably) unique name for code for caching.

    This now expects code to be unicode.
    """
    hash_digest = hashlib.md5(code.encode("utf-8")).hexdigest()
    # Include the number and 12 characters of the hash in the name. It's
    # pretty much impossible that in a single session we'll have collisions
    # even with truncated hashes, and the full one makes tracebacks too long
    return '<ipython-input-{0}-{1}>'.format(number, hash_digest[:12])
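# Illustrative aside (not part of the original module): code_name("a = 1\n", 3)
# produces a name of the form '<ipython-input-3-xxxxxxxxxxxx>', where the
# trailing 12 hex digits come from the md5 hash of the source.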
#-----------------------------------------------------------------------------
# Classes and functions
#-----------------------------------------------------------------------------
class CachingCompiler(codeop.Compile):
    """A compiler that caches code compiled from interactive statements.
    """

    def __init__(self):
        codeop.Compile.__init__(self)

        # This is ugly, but it must be done this way to allow multiple
        # simultaneous ipython instances to coexist. Since Python itself
        # directly accesses the data structures in the linecache module, and
        # the cache therein is global, we must work with that data structure.
        # We must hold a reference to the original checkcache routine and call
        # that in our own check_cache() below, but the special IPython cache
        # must also be shared by all IPython instances. If we were to hold
        # separate caches (one in each CachingCompiler instance), any call made
        # by Python itself to linecache.checkcache() would obliterate the
        # cached data from the other IPython instances.
        if not hasattr(linecache, '_ipython_cache'):
            linecache._ipython_cache = {}
        if not hasattr(linecache, '_checkcache_ori'):
            linecache._checkcache_ori = linecache.checkcache
        # Now, we must monkeypatch the linecache directly so that parts of the
        # stdlib that call it outside our control go through our codepath
        # (otherwise we'd lose our tracebacks).
        linecache.checkcache = check_linecache_ipython

    def ast_parse(self, source, filename='<unknown>', symbol='exec'):
        """Parse code to an AST with the current compiler flags active.

        Arguments are exactly the same as ast.parse (in the standard library),
        and are passed to the built-in compile function."""
        return compile(source, filename, symbol, self.flags | PyCF_ONLY_AST, 1)

    def reset_compiler_flags(self):
        """Reset compiler flags to default state."""
        # This value is copied from codeop.Compile.__init__, so if that ever
        # changes, it will need to be updated.
        self.flags = codeop.PyCF_DONT_IMPLY_DEDENT

    @property
    def compiler_flags(self):
        """Flags currently active in the compilation process.
        """
        return self.flags

    def cache(self, code, number=0):
        """Make a name for a block of code, and cache the code.

        Parameters
        ----------
        code : str
          The Python source code to cache.
        number : int
          A number which forms part of the code's name. Used for the execution
          counter.

        Returns
        -------
        The name of the cached code (as a string). Pass this as the filename
        argument to compilation, so that tracebacks are correctly hooked up.
        """
        name = code_name(code, number)
        entry = (len(code), time.time(),
                 [line+'\n' for line in code.splitlines()], name)
        linecache.cache[name] = entry
        linecache._ipython_cache[name] = entry
        return name

def check_linecache_ipython(*args):
    """Call linecache.checkcache() safely protecting our cached values.
    """
    # First call the original checkcache as intended
    linecache._checkcache_ori(*args)
    # Then, update back the cache with our data, so that tracebacks related
    # to our compiled codes can be produced.
    linecache.cache.update(linecache._ipython_cache)
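
For context, a minimal usage sketch (standalone and illustrative; the cell source and variable names below are made up, and inside IPython this wiring is done by the interactive shell itself):

    from IPython.core.compilerop import CachingCompiler
    import linecache
    import traceback

    compiler = CachingCompiler()
    source = "def divide(a, b):\n    return a / b\n\ndivide(1, 0)\n"

    # cache() stores the source in linecache under a synthetic
    # '<ipython-input-N-hash>' name and returns that name; passing it as the
    # filename when compiling lets tracebacks find the interactive source later.
    name = compiler.cache(source, number=1)
    code = compiler(source, name, 'exec')   # codeop.Compile.__call__

    try:
        exec(code, {})
    except ZeroDivisionError:
        traceback.print_exc()               # inner frames show the cached cell source

    print(linecache.getline(name, 4))       # -> 'divide(1, 0)'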