Hilary Putnam, one of the most famous philosophers of the twentieth century, has a blog
Panorama
The macro/micro validity tradeoff
Many economists insist that the realism of their assumptions is not important—the only important thing is that at the end of the day, the model fits the data of whatever phenomenon it’s supposed to be modeling. This is called an “as if” model. For example, maybe individuals don’t have rational expectations, but if the economy behaves as if they do, then it’s OK to use a rational expectations model.
So I realized that there’s a fundamental tradeoff here. The more you insist on fitting the micro data (plausibility), the less you will be able to fit the macro data (“as if” validity). I tried to write about this earlier, but I think this is a cleaner way of putting it: There is a tradeoff between macro validity and micro validity.
How severe is the tradeoff? It depends. For example, in physical chemistry, there’s barely any tradeoff at all. If you use more precise quantum mechanics to model a molecule (micro validity), it will only improve your modeling of chemical reactions involving that molecule (macro validity). That’s because, as a positivist might say, quantum mechanics really is the thing that is making the chemical reactions happen.
In econ, the tradeoff is often far more severe. For example, Smets-Wouters type macro models fit some aggregate time-series really well, but they rely on a bunch of pretty dodgy assumptions to do it. Another example is the micro/macro conflict over the Frisch elasticity of labor supply.
Famous neurologist and science popularizer Oliver Sacks has died. Which of his books are your favorites?
Inability and Obligation in Moral Judgment
It is often thought that judgments about what we ought to do are limited by judgments about what we can do, or that “ought implies can.” We conducted eight experiments to test the link between a range of moral requirements and abilities in ordinary moral evaluations. Moral obligations were repeatedly attributed in tandem with inability, regardless of the type (Experiments 1–3), temporal duration (Experiment 5), or scope (Experiment 6) of inability. This pattern was consistently observed using a variety of moral vocabulary to probe moral judgments and was insensitive to different levels of seriousness for the consequences of inaction (Experiment 4). Judgments about moral obligation were no different for individuals who can or cannot perform physical actions, and these judgments differed from evaluations of a non-moral obligation (Experiment 7). Together these results demonstrate that commonsense morality rejects the “ought implies can” principle for moral requirements, and that judgments about moral obligation are made independently of considerations about ability. By contrast, judgments of blame were highly sensitive to considerations about ability (Experiment 8), which suggests that commonsense morality might accept a “blame implies can” principle.
Julian Savulescu: The Philosopher Who Says We Should Play God
Australian bioethicist Julian Savulescu has a knack for provocation. Take human cloning. He says most of us would readily accept it if it benefited us. As for eugenics—creating smarter, stronger, more beautiful babies—he believes we have an ethical obligation to use advanced technology to select the best possible children.
A protégé of the philosopher Peter Singer, Savulescu is a prominent moral philosopher at the University of Oxford, where he directs the Uehiro Centre for Practical Ethics. He also edits the Journal of Medical Ethics. Savulescu isn’t shy about stepping onto ethical minefields. He sees nothing wrong with doping to help cyclists climb those steep mountains in the Tour de France. Some elite athletes will always cheat to boost their performance, so instead of trying to enforce rules that will be broken, he claims we’d be better off with a system that allows low-dose doping.
So does Savulescu just get off being outrageous? “I actually think of myself as the voice of common sense,” he says, though he admits to receiving his share of hate mail. He’s frustrated by how hard it is to have reasoned arguments about loaded issues without getting flamed on the Internet. Savulescu thinks we need to become far more adept at sorting out difficult moral issues. Otherwise, he says, the human species will face dire consequences in the coming decades.
A Defense of the Rights of Artificial Intelligences by Eric Schwitzgebel and Mara [official surname still to be decided]
There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.
As always, comments warmly welcomed—either by email or on this blog post. We’re submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that deadline would be especially useful.
Introducing JASP: A free and intuitive statistics software that might finally replace SPSS
Are you tired of SPSS’s confusing menus and of the ugly tables it generates? Are you annoyed by having statistical software only at university computers? Would you like to use advanced techniques such as Bayesian statistics, but you lack the time to learn a programming language (like R or Python) because you prefer to focus on your research?
While there was no real solution to this problem for a long time, there is now good news for you! A group of researchers at the University of Amsterdam are developing JASP, a free open-source statistics package that includes both standard and more advanced techniques and puts major emphasis on providing an intuitive user interface.
(no affiliation with creators of JASP)
Neural Networks, Types, and Functional Programming by Christopher Olah
If we think we’ll probably see deep learning very differently in 30 years, that suggests an interesting question: how are we going to see it? Of course, no one can actually know how we’ll come to understand the field. But it is interesting to speculate.
At present, three narratives are competing to be the way we understand deep learning. There’s the neuroscience narrative, drawing analogies to biology. There’s the representations narrative, centered on transformations of data and the manifold hypothesis. Finally, there’s a probabilistic narrative, which interprets neural networks as finding latent variables. These narratives aren’t mutually exclusive, but they do present very different ways of thinking about deep learning.
This essay extends the representations narrative to a new answer: deep learning studies a connection between optimization and functional programming.
In this view, the representations narrative in deep learning corresponds to type theory in functional programming. It sees deep learning as the junction of two fields we already know to be incredibly rich. What we find seems so beautiful to me, and feels so natural, that the mathematician in me could believe it to be something fundamental about reality.
This is an extremely speculative idea. I am not arguing that it is true. I wish to argue only that it is plausible, that one could imagine deep learning evolving in this direction. To be clear: I am primarily making an aesthetic argument, rather than an argument of fact. I wish to show that this is a natural and elegant idea, encompassing what we presently call deep learning.
A Neural Algorithm of Artistic Style
In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks [1, 2]. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision [3–7], our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
Last Wednesday, “A Neural Algorithm of Artistic Style” was posted to arXiv, featuring some of the most compelling imagery generated by deep convolutional neural networks since Google Research’s “DeepDream” post.
On Sunday, Kai Sheng Tai posted the first public implementation. I immediately stopped working on my implementation and started playing with his. Unfortunately, his results don’t quite match the paper, and it’s unclear why. I’m just getting started with this topic, so as I learn I want to share my understanding of the algorithm here, along with some results I got from testing his code.
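The key move in the paper is representing “style” as the correlations between a convolutional network’s feature maps at a given layer, captured by a Gram matrix, while “content” is the feature activations themselves. The sketch below shows just the Gram-matrix computation and a single-layer style loss in NumPy, assuming a feature tensor of shape (channels, height, width); the features themselves would come from a pretrained network (the paper uses VGG), and the normalization here is simplified relative to the paper’s exact loss.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature maps.
    `features` has shape (channels, height, width); G[i, j] is the
    inner product of flattened feature maps i and j, averaged over
    spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gram_generated, gram_style):
    """Mean squared difference between Gram matrices at one layer.
    (The paper sums a weighted version of this over several layers.)"""
    return np.mean((gram_generated - gram_style) ** 2)
```

In the full algorithm, the generated image is initialized from noise and optimized by gradient descent so that its Gram matrices match the style image’s while its deep-layer activations match the content image’s.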
Hummingbirds find protection building nests under hawks
(Phys.org)—An international team of researchers working in a part of Arizona has found evidence of a hummingbird species benefiting by building nests in trees beneath hawk hunting grounds. In their paper published in the journal Science Advances, the team describes the study they carried out and just how much safer the hummingbirds appeared to be when living in close proximity to hawks.
To learn more about black-chinned hummingbirds living in Arizona’s Chiricahua Mountains, the team walked among the trees looking up, as part of a three-year study of hummingbirds living beneath 12 hawk nests. In so doing, they discovered that hummingbird nests beneath hawks were approximately 80 percent safer from Mexican jays eating their eggs than unprotected nests were.
Two types of hawks were involved in the study, Cooper’s hawks and goshawks; both find food by looking down from a perch high in a tree—when they spot something, such as a jay, they swoop down between the branches and grab their meal. The hawks don’t generally target hummingbirds, however, the team noted, likely because hummingbirds are too small and fast. That led to what the researchers call cones of protection, where nests within a certain area under a hawk’s nest would be protected by the hawks. Jays, they noted, were more likely to fly higher in such areas, above the cone.
Scaling Laws and the Speed of Animals
In a recent issue of the American Journal of Physics, I read an interesting paper by Nicole Meyer-Vernet and Jean-Pierre Rospars examining the top speeds of organisms of varying sizes, from bacteria up to blue whales. They found that the time it takes for an animal to move its own body length is almost independent of mass, across 21 orders of magnitude. They derived a simple scaling argument and order-of-magnitude estimate for this remarkable fact. Before I elaborate further on their paper, I will give an overview of scaling arguments and their power.
There is a false dichotomy in physics, that concepts can either be explained in quasi-philosophical vague descriptive arguments, or in terms of rigorous formulae that take years of study to understand. For example, one could describe general relativity with the bowling-ball-on-a-bedsheet analogy, and when that fails, crack out the Einstein field equations. However, this sad dichotomy is actually a happy trichotomy: in between the two extremes is the powerful tool of scaling arguments.
A cautionary tale about perverse incentives: Why drivers in China intentionally kill the pedestrians they hit.
A redditor has created a .docx document that summarizes which studies have been replicated in the recent big psychology replication study.
The Fallacy of Placing Confidence in Confidence Intervals
Welcome to the web site for the upcoming paper “The Fallacy of Placing Confidence in Confidence Intervals.” Here you will find a number of resources connected to the paper, including the paper itself, the supplement, teaching resources, and, in the future, links to discussion of the content.
The paper is accepted for publication in Psychonomic Bulletin & Review.
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and can lead to unjustified or arbitrary inferences. For this reason, we caution against relying upon confidence interval theory to justify interval estimates, and suggest that other theories of interval estimation should be used instead.
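The long-run coverage property the abstract defines—and carefully distinguishes from claims about any single computed interval—can be checked by simulation. This is a generic illustration, not code from the paper: it draws repeated normal samples and counts how often a nominal 95% interval for the mean (using the z critical value for simplicity, so actual coverage for small n runs slightly below 95%) contains the true mean.

```python
import random
import statistics

def ci_coverage(n_reps=10_000, n=20, mu=0.0, sigma=1.0, z=1.96):
    """Fraction of repeated samples whose nominal 95% CI covers mu."""
    hits = 0
    for _ in range(n_reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        # Does this particular interval contain the true mean?
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / n_reps
```

The paper’s point is that this long-run frequency is a property of the *procedure*, not of any one interval it produces: once a specific interval is in hand, the 95% figure does not license a 95% plausibility claim about that interval.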
26 Things I Learned in the Deep Learning Summer School
At the beginning of August I got the chance to attend the Deep Learning Summer School in Montreal. It consisted of 10 days of talks from some of the most well-known neural network researchers. During this time I learned a lot, way more than I could ever fit into a blog post. Instead of trying to pass on 60 hours worth of neural network knowledge, I have made a list of small interesting nuggets of information that I was able to summarise in a paragraph.
At the moment of writing, the summer school website is still online, along with all the presentation slides. All of the information and most of the illustrations come from these slides and are the work of their original authors. The talks in the summer school were filmed as well; hopefully they will also find their way to the web.
How Soylent and Oculus Could Fix The Prison System
Here’s one way we could rebuild the prison system:
Step 1: Soylent
Step 2: Oculus Rift
Step 3: Health and hygiene
Step 4: A simulation that rewards good behavior
Step 5: Administration
Excerpt:
Prisoners have cellmates and gym time and free time in the prison yard because solitary confinement makes you go nuts. You need human contact if you don’t want to pop out of prison a crazy person. The problem is these places are where all the violence happens.
However, you could take the fear factor out of prisons by simply making all socialization happen through virtual reality. Bonus, you could deliver rich education through VR as well.
Virtual reality headsets are so good now (and getting better) that they can make your brain feel like you’re actually somewhere else. I get the same feeling in the pit of my stomach when I’m standing on a cliff in virtual reality as I do when I’m experiencing heights IRL.
By equipping every inmate with an Oculus Rift headset in his or her own cell, you could isolate prisoners from violence without isolating them from people. Put all the prisoners inside Second Life, Prison Edition, give them all a headset, and let them build virtual characters. You could design an awesome system for rehabilitation, give access to e-learning tools, Kindle books, Minecraft and other digital tools for creativity (prison is boring), psychologist sessions (the psychologist could log in remotely from anywhere in the world), and even handle all correspondence and prison visits from relatives and friends electronically.
What this eliminates: prison yards, prison libraries, packages and letters secretly containing drugs or shanks.
Blowing the whistle on the UC Berkeley mathematics department
This remark that I should align more with department standards has been the resounding theme of my time at Berkeley, and Arthur Ogus’s comment in the April 18th, 2014 memo was not an isolated slip. On September 22nd, 2013 he wrote in an email “But I do think it that it [sic] is very important that you not deviate too far from the department norms.” On November 12th, 2014 he wrote “I hope that, on the basis of our conversation, you can further adjust to the norms of our department.” This raises the question: What does it mean to adhere to department norms if one has the highest student evaluation scores in the department, students performing statistically significantly better in subsequent courses, and faculty observations universally reporting “extraordinary skills at lecturing, presentation, and engaging students”?
This question is one that I asked, and in response it was made very clear to me what is meant by the norms of the department. It means teach from the textbook. It means stop emailing students with encouragement, handwritten notes and homework problems, and instead assign problems from the textbook at the start of the semester. It means stop using evidence-based practices like formative assessment. It means micro-manage the Graduate Student Instructors rather than allowing them to use their own, considerable, talent and creativity. And most of all it means this: Stop motivating students to work hard and attend class by being engaging, encouraging and inspiring, by sharing with them a passion for the beauty and wonder of mathematics, but instead by forcing them into obedience with endless busywork in the form of GPA-affecting homework and quizzes and assessments, day after day, semester after semester.
In a nutshell: Stop making us look bad. If you don’t, we’ll fire you.
Final Kiss of Two Stars Heading for Catastrophe
Using ESO’s Very Large Telescope, an international team of astronomers have found the hottest and most massive double star with components so close that they touch each other. The two stars in the extreme system VFTS 352 could be heading for a dramatic end, during which the two stars either coalesce to create a single giant star, or form a binary black hole.
UN climate reports are increasingly unreadable
The climate summary findings of the Intergovernmental Panel on Climate Change (IPCC) are becoming increasingly unreadable, a linguistics analysis suggests.
IPCC summaries are intended for non-scientific audiences. Yet their readability has dropped over the past two decades, and reached a low point with the fifth and latest summary published in 2014, according to a study published in Nature Climate Change.
The study used the Flesch Reading Ease test, which assumes that texts with longer sentences and more complex words are harder to read. Reports from the IPCC’s Working Group III, which focuses on what can be done to mitigate climate change by cutting carbon dioxide emissions, received the lowest marks for readability.
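For readers unfamiliar with it, the Flesch Reading Ease score is a simple formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with higher scores meaning easier text. The sketch below is a rough illustration of that formula, not the study’s code; in particular, the syllable count here is a crude vowel-group heuristic rather than the dictionary-based count a careful analysis would use.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text.
    Syllables are approximated by counting vowel groups per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Long sentences and polysyllabic words both drag the score down, which is why dense, jargon-heavy summary documents fare so badly on it.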
Confusion created by the writing style of the summaries could hamper political progress on tackling greenhouse-gas emissions, thinks Ralf Barkemeyer, who led the analysis and works on sustainable business management at the KEDGE Business School in Bordeaux, France. The readability scores “are not just low but exceptionally low”, he says.
Solving a Non-Existent Unsolved Problem: The Critical Brachistochrone