6/23


Grab Bag

I read a thing? I can’t describe it more than that. There are very few words. Much happens. Strongly recommended.

I turned 29 this month. Apparently my chances of surviving to this age were about those of not rolling a 1 on a d20?

Politics and Policy

I reread An Essay on Crimes and Punishments, by Cesare Beccaria. Today, it is as powerful, though thankfully not quite as needed, as ever. If you’ve read Punishment Without Crime by Alexandra Natapoff, an excellent book looking at how misdemeanors are used to punish people who have done nothing worse than contempt of cop, this passage in particular may resonate.

No man can be judged a criminal until he be found guilty; nor can society take from him the public protection, until it have been proved that he has violated the conditions on which it was granted. What right, then, but that of power, can authorise the punishment of a citizen, so long as there remains any doubt of his guilt? The dilemma is frequent. Either he is guilty, or not guilty. If guilty, he should only suffer the punishment ordained by the laws, and torture becomes useless, as his confession is unnecessary. If he be not guilty, you torture the innocent; for, in the eye of the law, every man is innocent, whose crime has not been proved.

Can you understand the modern jail system, for anyone except the rich and powerful, as anything other than a form of torture that we justify? It may be that it is expedient, and it may be that some of the harm of torture is the infliction of suffering for its own sake, but Beccaria reminds us that what is contrary to a good society is not merely the inherent wrongness of harming another being. It is the wholesale institutionalization of extra-legal punishment of the accused, simply for being accused.

War on the Rocks has a summer reading list.

There was a brief period in which I didn’t realize that RFK Jr., the anti-vax presidential candidate who boasts about an assisted bench press of 115 lb, lived in LA. Of course he lives in LA. How could he be anywhere else?

The shared politics of left-wing activists has, in my view, constructed a common narrative around oppression and liberation that many sincerely believe. It’s a solid moral framework. There’s just a minor issue: not all of the people understood as “the oppressed” buy into it, and so you get conservative religious communities pushing against queer liberation regardless of race. There’s an old argument that the American left-wing coalition is structurally more varied in interests/goals than its right-wing counterpart, and so you should expect to see more fracturing of this sort.

Counterpoint: this seems like one of those arguments that people only ever make about their own side. I could generate a just-so story about how the left is ideologically unified while conservatives are split across a variety of goals and mental frameworks. If you ever want to be reassured that it’s not all doom and gloom, go see what elites in the other party say about what happens when their party is in control versus what happens when yours is.

A very useful public policy cheat sheet.

Odd Lots on government software development. The most surprising argument to me was that the hiring problem isn’t the 50% pay cut (or more), and it isn’t generic “you’re in a big bureaucracy.” One problem is that it takes nine months to hire people. The other is that you need to enable tech people and PMs to override compliance, some of the time, on some issues, in the service of user experience: explaining what the law means rather than reproducing the exact text of the statute, for example. Imagine if everything you did had to go through an IRB! They take ages to get back to you, they have no care for operational realities, and there’s nothing you can do. Good people with choices will think about what else they could be doing. Nobody is saying “let’s do a bunch of crimes.” But when compliance is in control, rather than being one voice of input into a decision-making process run by the project manager, you get a very different culture. If you don’t enjoy podcasts, this article is by one of the speakers and covers similar material, though with different anecdotes and pain points. I’ll leave you with my favorite quote from it.

promoting someone who operates outside of norms, even someone who operates legally and ethically, can tarnish reputations and make enemies.

But the culture that enforces those norms doesn’t spring from nowhere. We crafted a system of hierarchy in which those at the top are supposed to make meaningful decisions and every step down the ladder should operate with greater constraints. We create systems designed to drain the jobs of bureaucrats, especially low-level bureaucrats, of any opportunity to exercise judgment. When things go wrong, we find new ways to constrain, and we make the hierarchy more and more rigid.

AI

CSET has a call for proposals out on auditing.

Look me in the eye and try to tell me that a successor to DragGAN isn’t going to be used by every digital artist and photo editor with access and technical knowledge.

Relatedly, Clarkesworld has a statement on AI-assisted and AI-generated writing. I think it’s wrong in a few assumptions (AI-generated text detection is not technically possible without deliberate watermarking), but it’s useful to see where sensible artists are.

We’re mapping bigger brains! Still not *entirely* sure what a fly does, but we’re working on it.

Nifty Anthropic paper on what language models say about questions of values, what people around the world say, and how that can shift.

Very philosophical paper on what it means for something to be an agent in the world. I thought it was interesting.

Good 54-page survey from CAIS of catastrophic harms from AI.

Interesting argument against AI having a major impact on GDP. I don’t agree, largely because I think that by their standards “unlimited free electricity and matter replicators that run solely on electricity” wouldn’t have a major impact on GDP either. But it’s worth reading and thinking about. h/t to B Cavello!

This AI attack surface map is a good thing. Compare and contrast with MITRE’s. We’re starting to see checklists and frameworks for AI. Assuming you’re a fan of Atul Gawande (and willing to apply his ideas far beyond the area in which they were demonstrated to be very effective), this seems great. I think I’m supportive?

CSET on autonomous cyber defense.

Mixing generative AI models with tools that reference specific facts dramatically improves performance, much as it does for humans. This time, let’s empower people to make arbitrary chemicals! This has never caused problems before.
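
To make that pattern concrete, here’s a minimal sketch of tool-augmented generation in Python. Everything in it is a hypothetical stand-in I made up for illustration: the FACTS table, the lookup tool syntax, and the call_model stub standing in for whatever LLM API you actually use.

```python
# Minimal sketch of tool-augmented generation: instead of answering from its
# weights alone, the "model" can request a lookup against a table of facts.
# call_model is a hypothetical stub standing in for a real LLM API call.

FACTS = {
    "boiling point of ethanol": "78.37 C at 1 atm",
}

def call_model(prompt: str) -> str:
    # Placeholder behaviour: ask for the tool first, then answer once the
    # tool result has been appended to the prompt.
    if "TOOL RESULT" not in prompt:
        return "TOOL: lookup('boiling point of ethanol')"
    return "Ethanol boils at 78.37 C at standard atmospheric pressure."

def answer(question: str) -> str:
    prompt = (
        "Answer the question. You may reply TOOL: lookup('<query>') "
        f"to fetch a specific fact.\nQ: {question}"
    )
    reply = call_model(prompt)
    if reply.startswith("TOOL:"):
        query = reply.split("'")[1]                 # pull the query out of the tool call
        fact = FACTS.get(query, "no result found")  # ground the answer in stored facts
        reply = call_model(prompt + f"\nTOOL RESULT: {fact}\nFinal answer:")
    return reply

print(answer("What is the boiling point of ethanol?"))
```

The point is just the control flow: the generative model handles the language, and a tool that actually knows the facts handles the facts.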

If you want to keep up with work in AI ethics and fairness, you can do much worse than reading the papers at the FAccT conference. I’ve included links below to six that I thought were interesting, but sometimes people disagree with me about what’s interesting. If you’re not interested in reading them, don’t worry: this is the last item in this month’s newsletter. For each paper I include, very briefly, what I think of it, and, when the authors aren’t just academics I don’t recognize, a brief note on who they are. But before that, if you’ve enjoyed everything so far, please consider sharing this newsletter or subscribing.



Representation in AI Evaluations, by authors at DeepMind

What I like: What does it actually mean? A deeper investigation of a common phrase.

Calls for representation in artificial intelligence (AI) and machine learning (ML) are widespread, with “representation” or “representativeness” generally understood to be both an instrumentally and intrinsically beneficial quality of an AI system, and central to fairness concerns. But what does it mean for an AI system to be “representative”? Each element of the AI lifecycle is geared towards its own goals and effect on the system, therefore requiring its own analyses with regard to what kind of representation is best. In this work we untangle the benefits of representation in AI evaluations to develop a framework to guide an AI practitioner or auditor towards the creation of representative ML evaluations. Representation, however, is not a panacea. We further lay out the limitations and tensions of instrumentally representative datasets, such as the necessity of data existence and access, surveillance vs expectations of privacy, implications for foundation models and power. This work sets the stage for a research agenda on representation in AI, which extends beyond instrumentally valuable representation in evaluations towards refocusing on, and empowering, impacted communities.

Broadening AI Ethics Narratives: An Indic Art View

What I like: I like the approach of trying to uncover many different ethical perspectives. I think it’s very important work.

What I don’t like: I could have written their conclusions just by knowing the authors’ ideological commitments. It’s extremely hard to do this sort of research and come up with a conclusion that you disagree with, and I have tremendous respect for the people who say “X changed my mind about Y”. While I agree that the project is important, it is not clear to me that the authors have done it, nor how I’d be able to tell if they had (aside from at least one claim that seemed obviously against what I would guess of their politics). If a respondent said “competition enhances knowledge” and the authors took away “so maybe capitalism, and avoid state control”, that would be a credible signal that they were doing good work. Because they instead took away “so a broad understanding of art forms is great”, it’s not clear to me that they’re deriving something new from their work.

Now, maybe reality has a liberal bias; maybe they wrote the paper because they were already informed by Indian artistic traditions, and that led them to their current politics; and maybe they have faithfully communicated to the broader academic audience what their interview subjects meant. But without the ability to tell, my trust is unfortunately limited.

A very useful place for adversarial collaboration, I think. If two people who disagree about whether what their subjects are saying is good or bad agree that they’re saying it, that is trustworthy.

Incorporating interdisciplinary perspectives is seen as an essential step towards enhancing artificial intelligence (AI) ethics. In this regard, the field of arts is perceived to play a key role in elucidating diverse historical and cultural narratives, serving as a bridge across research communities. Most of the works that examine the interplay between the field of arts and AI ethics concern digital artworks, largely exploring the potential of computational tools in being able to surface biases in AI systems. In this paper, we investigate a complementary direction–that of uncovering the unique socio-cultural perspectives embedded in human-made art, which in turn, can be valuable in expanding the horizon of AI ethics. Through semi-structured interviews across sixteen artists, art scholars, and researchers of diverse Indian art forms like music, sculpture, painting, floor drawings, dance, etc., we explore how non-Western ethical abstractions, methods of learning, and participatory practices observed in Indian arts, one of the most ancient yet perpetual and influential art traditions, can shed light on aspects related to ethical AI systems. Through a case study concerning the Indian dance system (i.e. the ‘Natyashastra’), we analyze potential pathways towards enhancing ethics in AI systems. Insights from our study outline the need for
(1) incorporating empathy in ethical AI algorithms,
(2) integrating multimodal data formats for ethical AI system design and development,
(3) viewing AI ethics as a dynamic, diverse, cumulative, and shared process rather than as a static, self-contained framework to facilitate adaptability without annihilation of values
(4) consistent life-long learning to enhance AI accountability

Honor Ethics: The Challenge of Globalizing Value Alignment in AI

What I like: Repeatedly slams home that values vary tremendously across cultures, and that if you want some system to listen to everyone in the world equally, my American reader will not be uncomplicatedly delighted with the results. I complained that the previous paper investigated another culture but didn’t come up with anything a generic American liberal would dislike. This paper certainly avoids that problem.

What I don’t like: “Alignment” is a useful word, and we were already using it. The authors also seem to be under the impression that honor cultures are equally good and valuable, with important ethical contributions that people should respect as a matter of principle: some sort of multiculturalism that is more important than moral judgements about murdering women for failing to follow ritual purity requirements, or murdering queer men like myself for existing.

Some researchers have recognized that privileged communities dominate the discourse on AI Ethics, and other voices need to be heard. As such, we identify the current ethics milieu as arising from WEIRD (Western, Educated, Industrialized, Rich, Democratic) contexts, and aim to expand the discussion to non-WEIRD global communities, who are also stakeholders in global sociotechnical systems. We argue that accounting for honor, along with its values and related concepts, would better approximate a global ethical perspective. This complex concept already underlies some of the WEIRD discourse on AI ethics, but certain cultural forms of honor also bring overlooked issues and perspectives to light. We first describe honor according to recent empirical and philosophical scholarship. We then review “consensus” principles for AI ethics framed from an honor-based perspective, grounding comparisons and contrasts via example settings such as content moderation, job hiring, and genomics databases. A better appreciation of the marginalized concept of honor could, we hope, lead to more productive AI value alignment discussions, and to AI systems that better reflect the needs and values of users around the globe.

Harms from Increasingly Agentic Algorithmic Systems

What I like: Code-switching

Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed, typically without strong regulatory barriers, threatening the perpetuation of the same harms and the creation of novel ones. In response, the FATE community has emphasized the importance of anticipating harms, rather than just responding to them. Anticipation of harms is especially important given the rapid pace of developments in machine learning (ML). Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than providing a definition of agency as a binary property, we identify 4 key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms which arise from increasing agency – notably, these include systemic and/​or long-range impacts, often on marginalized or unconsidered stakeholders. We emphasize that recognizing agency of algorithmic systems does not absolve or shift the human responsibility for algorithmic harms. Rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on implications of our work for anticipating algorithmic harms from emerging systems

Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument

What I like: If I have one explanation of policy life, it’s that implementation is everything. Interviews with end users are always good to read. If AI rollout is much slower than anticipated, people will point to papers like this as foreshadowing it.

Recidivism risk assessment instruments are presented as an ‘evidence-based’ strategy for criminal justice reform – a way of increasing consistency in sentencing, replacing cash bail, and reducing mass incarceration. In practice, however, AI-centric reforms can simply add another layer to the sluggish, labyrinthine machinery of bureaucratic systems and are met with internal resistance. Through a community-informed interview-based study of 23 criminal judges and other criminal legal bureaucrats in Pennsylvania, I find that judges overwhelmingly ignore a recently-implemented sentence risk assessment instrument, which they disparage as “useless,” “worthless,” “boring,” “a waste of time,” “a non-thing,” and simply “not helpful.” I argue that this algorithm aversion cannot be accounted for by individuals’ distrust of the tools or automation anxieties, per the explanations given by existing scholarship. Rather, the instrument’s non-use is the result of an interplay between three organizational factors: county-level norms about pre-sentence investigation reports; alterations made to the instrument by the Pennsylvania Sentencing Commission in response to years of public and internal resistance; and problems with how information is disseminated to judges. These findings shed new light on the important role of organizational influences on professional resistance to algorithms, which helps explain why algorithm-centric reforms can fail to have their desired effect. This study also contributes to an empirically-informed argument against the use of risk assessment instruments: they are resource-intensive and have not demonstrated positive on-the-ground impacts.

The Gradient of Generative AI Release: Methods and Considerations, by Irene Solaiman

Why it’s good: This is a paper I’ve been looking forward to for a bit, because it’s a clear explanation of a useful framework that I can use in my own work.

As increasingly powerful generative AI systems are developed, the release method greatly varies. We propose a framework to assess six levels of access to generative AI systems: fully closed; gradual or staged access; hosted access; cloud-based or API access; downloadable access; and fully open. Each level, from fully closed to fully open, can be viewed as an option along a gradient. We outline key considerations across this gradient: release methods come with tradeoffs, especially around the tension between concentrating power and mitigating risks. Diverse and multidisciplinary perspectives are needed to examine and mitigate risk in generative AI systems from conception to deployment. We show trends in generative system release over time, noting closedness among large companies for powerful systems and openness among organizations founded on principles of openness. We also enumerate safety controls and guardrails for generative systems and necessary investments to improve future releases.
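
Because the framework is an ordered scale, it’s easy to encode for your own tracking. Here’s a trivial sketch; the enum names are my paraphrases of the paper’s six levels, not anything official from it.

```python
# Illustrative only: the six release-access levels from the paper's gradient,
# ordered from most closed to most open so systems can be compared.
from enum import IntEnum

class ReleaseAccess(IntEnum):
    FULLY_CLOSED = 0
    GRADUAL_OR_STAGED = 1
    HOSTED = 2
    CLOUD_OR_API = 3
    DOWNLOADABLE = 4
    FULLY_OPEN = 5

# Ordering lets you ask simple questions like "is this more open than hosted access?"
assert ReleaseAccess.DOWNLOADABLE > ReleaseAccess.HOSTED
```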

