I have often found myself in situations where I overupdated on evidence. For example, if the market fell 3 per cent, I used to start thinking that economic collapse was imminent.
Overupdating on random evidence is also a source of some conspiracy theories. The plate number of a car on my street is the same as my birthday? They must be watching me!
The protection trick here is “natural scepticism”: just don’t update, even when you feel like updating your beliefs. But in this case the prior system becomes too rigid.
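As a toy illustration of how small a proper update on a 3 per cent drop might be, here is a minimal Bayes-rule sketch; all the probabilities are hypothetical numbers I made up for the example, not estimates.

```python
# Toy Bayesian update: how much should a 3% market drop shift belief in
# "economic collapse within a year"? All numbers are hypothetical.
prior_collapse = 0.02            # assumed base rate of collapse
p_drop_given_collapse = 0.50     # assumed chance of a 3% drop on the road to collapse
p_drop_given_normal = 0.10       # 3% drops also happen in ordinary years

evidence = (p_drop_given_collapse * prior_collapse
            + p_drop_given_normal * (1 - prior_collapse))
posterior = p_drop_given_collapse * prior_collapse / evidence
print(f"posterior = {posterior:.3f}")   # ~0.09: an update, but far from certainty
```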
I’ve seen the following quote:
“Moreover, reportedly the virus does serious damage to people’s lower respiratory systems — supposedly it can take “…at least six months for patients to recover heart and lung function.” If this becomes endemic across the world, even developed nation’s healthcare systems will struggle to provide care.” https://www.cassandracapital.net/post/coronavirus-the-status-of-the-outbreak-and-4-possible-scenarios
One minor risk: someone could create a baby using your genome and theirs, and you would have to pay child support.
A person could be split into two parts: one that wants to die and another that wants to live. Then the first part is turned off.
What you describe is passive digital immortality, or just recording everything. Active digital immortality is writing something like an autobiography and/or a diary.
I described different practical approaches here. For example, the best source of unique personal information is the audio channel, and one could record almost everything one says by constantly running a recording app on a laptop or a phone. It will not look crazy to peers.
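As a rough sketch of the kind of always-on recorder I mean (assuming the third-party sounddevice and soundfile Python packages; the chunk length and file naming are arbitrary choices, not a recommendation):

```python
# Minimal always-on audio logger: writes one WAV file per minute.
# Assumes the sounddevice and soundfile packages are installed.
import datetime
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 16000          # speech-quality mono audio
CHUNK_SECONDS = 60           # one file per minute

while True:
    chunk = sd.rec(int(SAMPLE_RATE * CHUNK_SECONDS),
                   samplerate=SAMPLE_RATE, channels=1)
    sd.wait()                # block until the chunk is recorded
    name = datetime.datetime.now().strftime("log_%Y%m%d_%H%M%S.wav")
    sf.write(name, chunk, SAMPLE_RATE)
```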
It looks like the idea of human values is very contradictory. Maybe we should dissolve it? What about “AI safety” without human values?
I would use medical gloves, swimming goggles, and two layers of masks.
Edited: in fact, I would not go.
It is probably wrong to take the median (pandemic size) if we are talking about the risk of events with heavy tails.
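A quick numerical illustration of why the median misleads here (the lognormal parameters are chosen arbitrarily; the point is only that for heavy-tailed sizes the mean, which drives expected harm, dwarfs the median):

```python
# Median vs mean for a heavy-tailed "pandemic size" distribution.
import numpy as np

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=10, sigma=3, size=1_000_000)

print(f"median size: {np.median(sizes):,.0f}")
print(f"mean size:   {sizes.mean():,.0f}")   # orders of magnitude larger
```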
I picked just the recent numbers, but the exponential two-day-doubling trend in infections and deaths is visible in the wiki table from 16 January, i.e. for around 5-6 doublings. Total growth over 12 days is around 100 times; a rough check of the implied doubling time follows the numbers below.
23.01 − 830
26.01 − 2,744
27.01 − 4,515
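A minimal back-of-the-envelope check of the doubling time implied by these two endpoints (dates and counts as listed above):

```python
# Doubling time implied by the reported totals for 23.01 and 27.01.
import math

day1, n1 = 23, 830      # 23.01
day2, n2 = 27, 4515     # 27.01

growth = n2 / n1
doubling_days = (day2 - day1) * math.log(2) / math.log(growth)
print(f"growth over {day2 - day1} days: {growth:.1f}x")
print(f"implied doubling time: {doubling_days:.1f} days")
# ~1.6 days here, i.e. roughly the two-day doubling mentioned above;
# 12 days of two-day doubling is 6 doublings, about 64x, i.e. "around 100 times".
```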
Philosophical landmines could be used to try to stop an AI which is trying to leave its box. If it goes outside the box, it finds a list of difficult problems, and there is a chance that the AI will halt. Examples: the meaning of life, the Buridan’s ass problem, the problem of the origin and end of the universe, Pascal’s muggings of different sorts.
It was not promised, but anyone who had read the history of previous revolutions, like the French one, could have guessed.
In early Soviet history they actually checked whether a person supported the winning party by looking at what they had done 10-20 years earlier. If a person had been a member of the wrong party in 1917, he could be prosecuted in the 1930s.
Surely it was, but in a slightly different form, in which it is rather trivial: when a person says, “If I win the election, I will give everybody X.”
But it would be similarly convenient to have uncertainty about the correct decision theory.
Yes, this is really interesting to me. For example, if I face a Newcomb-like problem but am uncertain about the correct decision theory, I should one-box, as in that case my expected payoff is higher (if I give equal probability to both outcomes of the Newcomb experiment).
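A minimal sketch of the expected-value comparison I have in mind, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one) and a 50/50 split between the “the prediction tracks my choice” reading and the “the opaque box is fixed and independent” reading; the exact numbers are purely illustrative, not the only way to cash out the uncertainty.

```python
# Expected payoff of one-boxing vs two-boxing under 50/50 uncertainty
# between two readings of the problem. Payoffs are the standard ones;
# the 0.5 credences are the "equal probability" assumption from the text.
SMALL, BIG = 1_000, 1_000_000
p_tracking = 0.5          # world where the prediction tracks my choice
p_fixed = 1 - p_tracking  # world where the opaque box is fixed, say 50% full

ev_one_box = p_tracking * BIG + p_fixed * (0.5 * BIG)
ev_two_box = p_tracking * SMALL + p_fixed * (0.5 * BIG + SMALL)

print(f"one-box: {ev_one_box:,.0f}")   # 750,000
print(f"two-box: {ev_two_box:,.0f}")   # 251,000
```

On these assumptions one-boxing wins by a wide margin, and it keeps winning unless the credence in the “prediction tracks my choice” world becomes very small.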
There are a couple of follow-up articles by the authors, which can be found by putting the title of this article into Google Scholar and looking at the citations.
Gopal P. Sarma, Nick J. Hay (Submitted on 28 Jul 2016 (v1), last revised 21 Jan 2019 (this version, v4))
Characterizing human values is a topic deeply interwoven with the sciences, humanities, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest sophisticated AI systems becoming widespread and responsible for managing many aspects of the modern world, from preemptively planning users’ travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical “intelligence explosion,” in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from the fundamental goal structure, or value system, which constrains and guides the agent’s actions. The “value alignment problem” is to specify a goal structure for autonomous agents compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian class provide important conceptual foundations relevant to describing human values. We argue that the notion of “mammalian value systems” points to a potential avenue for fundamental research in AI safety and AI ethics.
You can donate your brain to a brain bank, where it will be preserved for a long time and studied. This combines the benefits of donation and cryonics.
Interestingly, we have created selection pressure on other species to produce something like human intelligence. First of all, dogs, which have been selected for 15,000 years to be more compatible with humans, which also includes the capability to understand human signals and language. Some dogs can understand a few hundred words.