You can still find divisors for Gaussian integers. If x and y are Gaussian integers, then so is xy, and x and y both divide it (taking y = 1 shows that every Gaussian integer trivially divides itself).
You can then generalize the \sigma function by summing over all the divisors of z and dividing by |z|.
The resulting number \sigma(z) lies in C (or maybe Q + iQ), not just Q, but it’s perfectly well defined.
If you sum over all the divisors of z, the result is perfectly well defined; however, it’s 0. Whenever x divides z, so does -x.
Over the integers, this is solved by summing over all positive divisors. However, there’s no canonical choice of what divisors to consider positive in the case of Gaussian integers, and making various arbitrary choices (like summing over all divisors in the upper half-plane) leads to unsatisfying results.
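To make the cancellation and the upper-half-plane fix concrete, here's a brute-force sketch in Python (the names `gaussian_divisors`, `sum_all`, and `sigma_upper` are mine, and I've left out the division by |z|, which doesn't affect the cancellation point):

```python
import math

def gaussian_divisors(z):
    """All Gaussian-integer divisors of z = (a, b), i.e. a + bi, by brute force.

    d = c + di divides z iff z * conj(d) / |d|^2 has integer real and
    imaginary parts, and any divisor satisfies |d|^2 <= |z|^2.
    """
    a, b = z
    n = a * a + b * b          # norm of z (assume z != 0)
    r = math.isqrt(n)
    divs = []
    for c in range(-r, r + 1):
        for d in range(-r, r + 1):
            m = c * c + d * d  # norm of the candidate divisor
            if m == 0 or m > n:
                continue
            re = a * c + b * d   # real part of z * conj(d)
            im = b * c - a * d   # imaginary part of z * conj(d)
            if re % m == 0 and im % m == 0:
                divs.append((c, d))
    return divs

def sum_all(z):
    """Summing over *all* divisors cancels to 0: d and -d both appear."""
    return tuple(sum(coord) for coord in zip(*gaussian_divisors(z)))

def sigma_upper(z):
    """Sum only the divisors with 0 <= arg(d) < pi, one of each +/- pair."""
    s = [0, 0]
    for (c, d) in gaussian_divisors(z):
        if d > 0 or (d == 0 and c > 0):   # same as 0 <= atan2(d, c) < pi
            s[0] += c
            s[1] += d
    return tuple(s)
```

For z = 5, for instance, this finds 16 divisors (the four units, the eight associates of 2+i and 2-i, and the four associates of 5); summing all of them gives 0, while the upper-half-plane sum gives 6 + 12i.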
That’s like saying the standard choice of branch cut for the complex logarithm is arbitrary.
When you complexify, things get messier. My point is that making a generalization is possible (though it’s probably best to sum over divisors d with 0 \leq \arg(d) < \pi, as you pointed out), which is the only claim I’m interested in disputing. Whether it’s nice to look at is irrelevant to whether it’s functional enough to be punnable.
You’re right—the generalization works.
Mainly what I don’t like about it is that \sigma(z) no longer has the nice properties it had over the integers: for example, it’s no longer multiplicative. This doesn’t stop Gaussian integers from being friendly, though, and the rest is a matter of aesthetics.
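The multiplicativity failure is easy to check by brute force. A self-contained sketch (my own naming; this sums divisors d with 0 \leq \arg(d) < \pi as above, again leaving out the |z| normalization, which is itself multiplicative and so can't rescue the result):

```python
import math

def sigma(z):
    """Sum of the divisors d of z = (a, b), i.e. a + bi, with 0 <= arg(d) < pi."""
    a, b = z
    n = a * a + b * b          # norm of z; divisor norms cannot exceed it
    r = math.isqrt(n)
    s = [0, 0]
    for c in range(-r, r + 1):
        for d in range(-r, r + 1):
            m = c * c + d * d
            if m == 0 or m > n:
                continue
            # c + di divides a + bi iff (a+bi)(c-di)/m is a Gaussian integer
            if (a * c + b * d) % m == 0 and (b * c - a * d) % m == 0:
                if d > 0 or (d == 0 and c > 0):   # keep one of each +/- pair
                    s[0] += c
                    s[1] += d
    return complex(s[0], s[1])

# 1+i and 2+i are coprime (their norms, 2 and 5, are coprime primes),
# and (1+i)(2+i) = 1+3i, yet sigma(1+3i) != sigma(1+i) * sigma(2+i):
# sigma(1+i) = 1+3i, sigma(2+i) = 2+4i, their product is -10+10i,
# while sigma(1+3i) = 10i.
lhs = sigma((1, 3))
rhs = sigma((1, 1)) * sigma((2, 1))
```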