Red or green weapons, i.e. swords, longswords, and battleaxes (not axes or hammers though), seem to have mana scaling dependent on their +n modifier (although green weapons have a drop-off at higher modifiers). The pattern appears clear enough that it's not a statistical artefact. I've not found anything similar for the tools or jewellery though.
Good point; perhaps my view is skewed, as I do almost all of my learning and explaining in technical fields (mostly chemistry and biology) and with people at a similar knowledge level to me. I can imagine that analogies would be more useful in a situation of trust but little knowledge (e.g. explaining my work to a family member) or in a field different from mine.
I think my assessment here may have been too focussed on a specific subset of analogy use, which I did not properly specify in the post.
Edit to clarify: I still believe intuition pumps in philosophy are a bad sort of analogy, in that they are too easily manipulated to serve the philosophical interests of the speaker.
If this turns out to be basically true, then what about wild wolves? I think there is a strong case that the capacity for this sort of communication was bred into domestic dogs as a result of humans selecting for e.g. better overall intelligence and the ability to understand human commands.
Another option is that wild wolf packs have the capacity for this sort of communication but don't use it (unless we've simply not noticed it). This seems much less likely to me, for the sole reason that being able to communicate in this way would give wild wolves a very large advantage. It would be odd if they kept the cognitive machinery for this around (using up resources that could go to the rest of their bodies) without making use of it.
There is a final option: that developing a language is like discovering a technology, and once a language exists it is much easier to teach it to others than it originally was to develop it. This would be very interesting to investigate. Perhaps languages are a sort of software on the brain, able to convert various processes (association learning, pattern recognition, episodic memory) into something more structured which allows for easier reasoning. This is getting very Sapir-Whorf-hypothesis-ey, and as someone who is neither a linguist nor an anthropologist I can't really say whether this is even reasonable.
As an aside, the second option reminds me of the experiments to teach chimpanzees human sign language (which were considered at the time to be a great success, but whose results were less than stellar). Chimpanzees in the wild have a very rudimentary form of sign language but have not developed it into something like a human language, despite the potential advantages (whether in social conflicts, in hunting and gathering food, etc.). This suggests to me that chimpanzees probably don't have the capacity for sign languages more complex than the ones they already have.
Thanks for the feedback!
You may be right there, and I would certainly be pleased to hear of any projects like this.
I believe the model could work without it, but AD seems to be an attractor state that many human brains fall into, with various genetic associations. The main evidence for this is that mutations in the Aβ precursor protein can have very high penetrance, i.e. everyone who has the mutation develops early-onset AD (https://link.springer.com/content/pdf/10.1007/s11920-000-0061-z.pdf). You are definitely right that I was too specific in my assessment of exactly how Aβ plaques cause a feedback mechanism; thanks for catching that. I have amended the post to fix it.
Lastly, what do you mean specifically by prion-like? Amyloid fibrils are prion-like in the sense that growth of existing fibres is much, much more favourable than formation of new ones (this leads to exponential growth, as long fibres break apart, leaving new open ends for protein molecules to add to). However, Aβ plaque formation was reversed in the mice given EET-A, which means that at some physiologically achievable concentrations of free Aβ the amyloids dissipate due to un-misfolding of Aβ (at least in mouse models). This would suggest that the cause of AD is various factors (which could be metabolic, or mutations which make Aβ more likely to accumulate) pushing the brain over a threshold where Aβ can accumulate. This is in contrast to "classical" prions, where the original misfolded protein is able to continuously cause the misfolding of normal protein under normal physiological conditions, and the only barrier to a prion disease occurring is that no misfolded protein is present.
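The threshold behaviour described above (fragmentation multiplying growing fibril ends, giving exponential growth above a critical free-monomer concentration, and net dissolution below it) can be sketched with a toy elongation-plus-fragmentation model. To be clear, everything numeric here is illustrative: the rate constants, seed sizes, and critical concentration are made-up values, not measured Aβ parameters.

```python
# Toy elongation-fragmentation model of amyloid fibril kinetics.
# All parameters are hypothetical illustrative values, not Abeta data.

def simulate(m0, steps=20000, dt=0.001,
             k_plus=5.0,     # elongation rate per fibril end (made up)
             k_frag=0.1,     # fragmentation rate per unit fibril mass (made up)
             m_crit=1.0,     # critical free-monomer concentration (made up)
             seed_mass=0.01, seed_ends=0.01):
    m, M, P = m0, seed_mass, seed_ends  # free monomer, fibril mass, fibril ends
    for _ in range(steps):
        # Net addition at fibril ends; negative below m_crit (dissolution)
        growth = k_plus * P * (m - m_crit) * dt
        growth = max(growth, -M)        # cannot lose more fibril mass than exists
        M += growth
        m -= growth                     # monomer is conserved
        # Each break of a long fibre exposes two new growing ends
        P += 2 * k_frag * M * dt
        P = min(P, M) if M > 0 else 0.0  # crude cap: ends cannot exceed mass
    return m, M

# Above the threshold, fragmentation keeps multiplying growing ends and
# fibril mass grows until free monomer falls to the critical concentration;
# below it, the same seed simply dissolves.
_, M_high = simulate(m0=3.0)
_, M_low = simulate(m0=0.5)
print(f"fibril mass above threshold: {M_high:.2f}, below threshold: {M_low:.2e}")
```

The point of the sketch is just that a single mechanism gives both behaviours: "classical" prion-like spreading and the EET-A-style reversal differ only in which side of the critical concentration the brain sits on.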
The paper you sent also postulates a feedback loop between Aβ and tau, which is interesting. I had considered the Aβ feedback into the earlier mechanism as an afterthought, but perhaps it is more important than my model suggested.
Perhaps I am being too confident in it. I didn't have time to cite sources, but the biology of AD seems to be a microcosm of the biology of ageing overall, and EET-A has shown a bunch of seemingly unconnected benefits in mouse models (regenerating blood vessels after a heart attack, etc.).
I do not know how I would obtain it; one would probably need free access to a chemical lab to synthesize it (judging from EET and other analogues, it seems relatively synthesizable). As for dosing, I would dose at ppm levels comparable to the rodent models.
I split it into three separate parts partly because they seemed rather unconnected, and mostly because I was concerned about posting something very long and cumbersome. Now that I look at it, it didn't really need to be three parts at all; it just felt a lot longer when I was writing it.
I more meant "keeping around cognitive machinery which is capable of this" without making use of it. Given that wild wolves use (relatively) simple hunting strategies which do not seem to rely on much communication, there doesn't seem to be much need for a brain capable of communicating relatively abstract thoughts. That doesn't seem to affect your core argument, though.
Good point about autistic humans who can't learn sign language, though; I hadn't considered that. I guess my model of autism was more like:
"Autism affects the brain in lots of different ways, which can knock out specific abilities (like speech) without knocking out other abilities (like the capability to have and communicate complex thoughts, which would not have evolved in an animal without speech)"
rather than each ability drawing on some amount of shared general-purpose computation. I haven't studied autism enough to know whether this is correct.