[ Can signals be caught and handled in python in-between reference counting operations? ]
Obviously the GIL prevents switching contexts between threads to protect reference counting, but is signal handling completely safe in CPython?
Answer 1
Signals in Python are caught by a very simple signal handler which, in effect, simply schedules the actual signal handler function to be called on the main thread. The C signal handler doesn't touch any Python objects, so it doesn't risk corrupting any state, while the Python signal handler is executed in-between bytecode op evaluations, so it too won't corrupt CPython's internal state.
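A small sketch of this two-stage delivery (the names here are my own): the C-level handler only schedules the call, and the Python-level handler then runs on the main thread at the next bytecode boundary.

```python
import os
import signal
import threading

calls = []

def handler(signum, frame):
    # This Python-level handler is invoked by the interpreter between
    # bytecode evaluations, always on the main thread.
    calls.append((signum, threading.current_thread() is threading.main_thread()))

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # deliver the signal to this process

# The C-level handler has only set a flag; spinning here guarantees the
# interpreter reaches a bytecode boundary where that flag is checked.
while not calls:
    pass

print(calls)
```

Because the Python handler is deferred to a bytecode boundary, it can never observe a reference count mid-update.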
Answer 2
A signal could be delivered and handled in the middle of a reference counting operation. In case you wonder why CPython doesn't use atomic CPU instructions for reference counting: they are too slow. Atomic operations use memory barriers to synchronize CPU caches (L1, L2, shared L3) and CPUs (ccNUMA). As you can imagine, this prevents lots of optimizations. Modern CPUs are insanely fast, so fast that they spend a lot of time doing nothing but waiting for data. Reference increments and decrements are very common operations in CPython, and memory barriers would defeat out-of-order execution, which is a very important optimization trick.
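To get a feel for how pervasive these operations are: every name binding in Python bumps a reference count. A minimal illustration using sys.getrefcount (a sketch, not a benchmark):

```python
import sys

x = object()
before = sys.getrefcount(x)  # includes the temporary reference held by the call itself
y = x                        # binding a new name is one more plain C increment of ob_refcnt
after = sys.getrefcount(x)
print(after - before)        # 1
```

Every assignment, argument pass, and container insertion does this once; making each of those a barrier-protected atomic operation would slow the whole interpreter down.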
The reference counting code is carefully written and takes multi-threading and signals into account. Signal handlers cannot access a partly created or partly destroyed Python object, just as threads cannot. Macros like Py_CLEAR take care of edge cases. I/O functions handle EINTR, too. Python 3.3 has an improved subprocess module that uses only async-signal-safe functions between fork() and execvpe().
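A Python-level analogue of the Py_CLEAR idiom (the real macro is C; Node and clear_payload here are hypothetical names for illustration): the slot is set to None *before* the old reference is released, so any code that runs during destruction, including a signal handler, never sees a half-destroyed object through that slot.

```python
class Node:
    def __init__(self, payload):
        self.payload = payload

    def clear_payload(self):
        # Py_CLEAR-style: detach the reference first, then release it.
        # If releasing `old` triggers arbitrary code (a destructor, a
        # signal handler), self.payload is already None, never a dead object.
        old, self.payload = self.payload, None
        del old

n = Node([1, 2, 3])
n.clear_payload()
print(n.payload)  # None
```

The real Py_CLEAR macro does exactly this ordering with C pointers: copy the pointer aside, NULL out the slot, then Py_DECREF the copy.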
You don't have to worry. We have some clever people who know their POSIX fu quite well.