Yeah, that’s probably part of it, although technically they’re only the same when the quotient map is the natural one: throw away whatever component lies outside the vector subspace and project straight down onto it. That’s not the only possible choice of map, though, and so not the only possible space you can get as a result.
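(To make that concrete with a toy example of my own, not something from above: take V = R² and quotient it two different ways.)

```latex
% Toy illustration (my own example): two different surjections
% out of R^2 give two genuinely different quotient spaces.
\[
  \pi : \mathbb{R}^2 \to \mathbb{R}, \quad \pi(x, y) = x
  \qquad\Rightarrow\qquad \mathbb{R}^2 / \pi \;\cong\; \mathbb{R}
\]
\[
  q : \mathbb{R}^2 \to S^1, \quad q(x, y) = e^{2\pi i x}
  \qquad\Rightarrow\qquad \mathbb{R}^2 / q \;\cong\; S^1
\]
% The first collapses each vertical line to a point (the "natural"
% projection); the second additionally identifies points whose
% x-coordinates differ by an integer, giving a circle, not a line.
```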
I think any monotonic function would give a homeomorphic space, but functions with discontinuities would not, and I’m not sure about functions that are surjective but not injective. Functions that are not surjective fail the criterion for generating a quotient space at all.
Edit: I think maybe functions with discontinuities do still give a valid quotient space so long as they are surjective, which is required. It would just break the vector-space properties between the two spaces, but those aren’t required for a topological space. This is inspiring me to want to study topology more : )
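(For reference, the standard construction, which is why surjectivity is the key requirement and why continuity takes care of itself:)

```latex
% Quotient topology: given a surjection q : X -> Y from a topological
% space X onto a set Y, declare
\[
  U \subseteq Y \text{ open} \quad\iff\quad q^{-1}(U) \text{ open in } X.
\]
% This makes q continuous by construction, even if it looks "jumpy";
% surjectivity ensures every point of Y inherits its topology from X.
% No linear structure is used anywhere, which is why the vector-space
% properties can break while the topology is still perfectly fine.
```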
Oh, I meant in the category of (topological) vector spaces, which requires the quotient maps to be linear.
Oh yeah, that makes sense. I wouldn’t want to make that assumption though, since activation functions are explicitly non-linear; otherwise the layers’ weight matrices could just be multiplied together, and a multi-layer perceptron would be an indirect way of computing a single linear map.
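(A minimal numpy sketch of that collapse, ignoring biases; the layer sizes here are arbitrary:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function: just matrices.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

# Applying the layers one after the other...
two_layer = W2 @ (W1 @ x)

# ...is exactly the same as applying their product once:
single_map = (W2 @ W1) @ x
assert np.allclose(two_layer, single_map)

# With a non-linearity (ReLU here) in between, the collapse fails:
relu = lambda v: np.maximum(v, 0.0)
nonlinear = W2 @ relu(W1 @ x)
# `nonlinear` is generally not expressible as a single fixed matrix
# applied to x, which is what makes extra layers worth having.
```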