Short answer: no, there is no solution as simple as somehow calling `eval` that's magically safe.
You said in a comment that this question is not a duplicate of *Python: make eval safe*, although the title sounds the same, and the top answers say the same thing:

- No, you can't make `eval` safe.
- Use `ast.literal_eval` if that works for you; otherwise you need a dedicated parser that only parses what you prepare for.
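For the record, `ast.literal_eval` is in the standard library and only evaluates Python literals (numbers, strings, tuples, lists, dicts, sets, booleans, `None`), so it cannot execute code at all. A minimal sketch of where it works and where it deliberately gives up:

```python
import ast

# literals and containers of literals are fine
print(ast.literal_eval("{'jj': [1, 2], 'kk': (3, 4)}"))

# anything that would require actual evaluation is rejected
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError as exc:
    print("rejected:", exc)
```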
One would think that if there's a way to make `eval` safe then people would have done so already. And they have! We have pysandbox. Or, uh, we don't have it. It's archived and it has this banner:

> WARNING: pysandbox is BROKEN BY DESIGN, please move to a new sandboxing solution (run python in a sandbox, not the opposite!)
See also https://lwn.net/Articles/574215/.
To be a bit more specific, let me attack the approach in your self-answer:
```python
def not_actually_safe_eval(s):
    # your suggestion of a "safe eval" that is NOT SAFE
    vals = {"jj": None, "kk": None}
    return eval(compile(s, filename="<string>", mode="exec"), {}, vals)
```
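On friendly input the wrapper behaves exactly as hoped, which is what makes it look trustworthy (the call below is my illustration, not from your answer):

```python
not_actually_safe_eval("jj = 40 + 2")  # runs fine: returns None, no error
```

Now feed it the following instead: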
```python
# courtesy of Ned Batchelder https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html
# updated to Python 3.9
s = """
(lambda fc=(
    lambda n: [
        c for c in
        ().__class__.__bases__[0].__subclasses__()
        if c.__name__ == n
    ][0]
):
    fc("function")(
        fc("code")(
            0,0,0,0,0,0,b"KABOOM",(),(),(),"","",0,b""
        ),{}
    )()
)()
"""
not_actually_safe_eval(s)
```
This is what this script does:

```
$ python not_actually_safe_eval_demo.py
Segmentation fault
```
We can agree that if your approach can cause the interpreter to segfault given certain inputs, it's not safe.
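Note also why the empty globals dict is no obstacle: the payload never mentions a single builtin name. It starts from a plain tuple literal and climbs the type hierarchy to `object`, whose subclasses include the `code` and `function` types it needs to forge raw bytecode. A minimal sketch of that climb:

```python
# every object knows its class, and object.__subclasses__() lists
# all direct subclasses loaded in the interpreter
print(().__class__)               # <class 'tuple'>
print(().__class__.__bases__[0])  # <class 'object'>
subclasses = ().__class__.__bases__[0].__subclasses__()
print(len(subclasses))            # hundreds in a typical interpreter
```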
You can start patching holes like this one, fixing more and more edge cases, but the bottom line stays the same.
`eval` cannot reasonably be made safe. Don't call it (or `exec`) on untrusted input. If you want to sandbox Python, put Python itself in an isolated environment that bad actors cannot get out of.
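The cheapest version of "isolate Python itself" is to run the untrusted code in a throwaway OS process and then lock that process down with OS-level tooling (containers, VMs, seccomp). The sketch below shows the process boundary only and is NOT a sandbox by itself; it merely keeps a segfault or a runaway loop from taking your own program down with it:

```python
import subprocess
import sys

untrusted_code = "print(1 + 1)"  # stand-in for untrusted input

# a separate interpreter process with a time limit; real isolation
# (filesystem, network, memory) must come from the OS or a container,
# not from this wrapper
result = subprocess.run(
    [sys.executable, "-c", untrusted_code],
    capture_output=True,
    text=True,
    timeout=5,
)
print(result.stdout, end="")  # 2
```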