Hm, it seems to me that there is a distinction between 1) hiding information (or encapsulating it, or making it private), 2) ignoring it, and 3) getting rid of it altogether.
For setPassword perhaps a programmer who uses this method can’t see the internals of what is actually happening (the salting, hashing and storing). They just call user.setPassword(form.password) and it does what they need it to do.
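To make the "hiding" case concrete, here is a minimal sketch of what such a class might look like. Everything here (the User class, field names, the use of SHA-256) is hypothetical illustration, not anyone's actual implementation; real password storage would use a proper KDF like bcrypt or Argon2. The point is just that the salt and hash live behind the method boundary.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

// Hypothetical sketch: the salting, hashing, and storing happen behind the
// method boundary; callers never see these details.
class User {
    private byte[] salt;   // hidden: private, unreadable from outside
    private byte[] hash;

    public void setPassword(String password) {
        try {
            salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            hash = md.digest(password.getBytes(StandardCharsets.UTF_8));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public boolean checkPassword(String candidate) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            return MessageDigest.isEqual(hash,
                    md.digest(candidate.getBytes(StandardCharsets.UTF_8)));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        User user = new User();
        user.setPassword("hunter2");  // caller sees only this call
        System.out.println(user.checkPassword("hunter2")); // true
        System.out.println(user.checkPassword("wrong"));   // false
    }
}
```

The information still exists inside the object; it is just invisible from outside the boundary.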
For User, in the example you give with List&lt;User&gt;, maybe we want to count how many users there are, and in doing so we don’t care what properties users have. They could be email and password, or username and dob; in the context of counting how many users there are, you don’t care. However, the inner details aren’t actually hidden, you’re just choosing to ignore them.
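A small sketch of the "ignoring" case, with hypothetical record types standing in for the two possible User shapes. Nothing is private here; the counting code just never looks at the fields:

```java
import java.util.List;

// Hypothetical element types: two different User shapes.
record EmailUser(String email, String password) {}
record NameUser(String username, String dob) {}

public class CountDemo {
    // Counting ignores every field of the elements: the same code works for
    // any element type. But nothing is hidden -- a caller could still
    // inspect each user if it cared to.
    static <T> int countUsers(List<T> users) {
        return users.size();
    }

    public static void main(String[] args) {
        List<EmailUser> a = List.of(new EmailUser("a@x.com", "pw"));
        List<NameUser> b = List.of(new NameUser("alice", "1990-01-01"),
                                   new NameUser("bob", "1985-06-15"));
        System.out.println(countUsers(a)); // 1
        System.out.println(countUsers(b)); // 2
    }
}
```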
For ideal gases, we’re getting rid of the information about particles. It’s not that it’s hidden/private/encapsulated; it’s just not even there after we replace it with the summary statistics.
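In code terms (a hypothetical sketch, not a physics simulation), "getting rid of" might look like reducing per-particle data to a summary and discarding the originals. The individual values aren’t behind any boundary afterward; they’re unrecoverable:

```java
import java.util.List;

// Hypothetical summary type: only aggregate statistics survive.
record GasSummary(int count, double meanSpeed) {}

public class SummaryDemo {
    static GasSummary summarize(List<Double> particleSpeeds) {
        double mean = particleSpeeds.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        return new GasSummary(particleSpeeds.size(), mean);
    }

    public static void main(String[] args) {
        GasSummary s = summarize(List.of(1.0, 2.0, 3.0));
        // Only the summary survives; the individual speeds cannot be
        // reconstructed from it.
        System.out.println(s.count());     // 3
        System.out.println(s.meanSpeed()); // 2.0
    }
}
```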
What do you think? Am I misunderstanding something?
And in the case that I am correct about the distinction, I wonder if it’s something worth pointing out.
Sounds like the distinction is about where/how we’re drawing the abstraction boundaries.
“Hiding information” suggests that there’s some object X with a boundary (i.e. a Markov blanket), and only the summary information is visible outside that boundary.
“Ignoring information” suggests that there’s some other object(s) Y with a boundary around them, and only the summary information about X is visible inside that boundary.
So basically we’re defining which variables are “far away” by exclusion in one case (i.e. “everything except blah is far away”) and inclusion in the other case (i.e. “only blah is far away”). I could definitely imagine the two having different algorithmic implications and different applications.
As for “getting rid of information”, I think that’s hiding information plus somehow eliminating our own ability to observe the hidden part. Again, I could definitely imagine that having additional algorithmic implications or applications. (Though this one feels weird for me to think about at all; I usually imagine everything from an external perspective where everything is always observable and immutable.)
Yeah I think your descriptions match what I was getting at.