In imperative programming languages, the main purpose of a program is to specify a computation, which we then run. But it seems a rather… *unimaginative* use of a computation, simply to *run* it.

Having specified a computation, what else might one want to do with it?

Some examples:

Differentiate numerical computations (e.g. backprop)
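
Differentiation can ride along with execution itself. Here's a minimal sketch of forward-mode automatic differentiation via dual numbers (backprop is the reverse-mode cousin); the class and function names are made up for illustration:

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative alongside it, so running the computation
# simultaneously differentiates it.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)   # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)  # f(4) = 57.0, f'(4) = 26.0
```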

Ask whether any possible inputs could yield a particular output (e.g. NP problems)
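
As a toy version of this: exhaustively search the input space of a small function for preimages of a target output. (Real instances are the NP-style searches that SAT solvers attack; the function and target here are arbitrary, made up for illustration.)

```python
# Toy "inversion" of a computation: search the input space for any
# input producing a target output.
def computation(x):
    return (x * x + 7) % 101  # an arbitrary small example function

target = 32
solutions = [x for x in range(101) if computation(x) == target]
print(solutions)  # [5, 96]: the two square roots of 25 mod 101
```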

Search for data within the computation’s output space (e.g. grep)

Given output from the computation, find a data structure which traces its execution (e.g. parsing with context-free grammars)

Parallelize the computation

Interventions & counterfactuals: ask what will happen/what would have happened if some internal variable at some step of the computation were assigned a different value, keeping the rest of the computation the same
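
One way to make interventions concrete: write the computation as explicit steps over named variables, then replay it while pinning an internal variable to a new value, do()-style. This is a sketch under assumed conventions (the step functions and variable names are invented):

```python
# A computation written as explicit steps over named variables, so we
# can intervene: pin an internal variable and replay the rest unchanged.
def step_a(env): env["a"] = env["x"] + 1
def step_b(env): env["b"] = env["a"] * 2
def step_c(env): env["out"] = env["a"] + env["b"]

STEPS = [step_a, step_b, step_c]

def run(x, intervene=None):
    env = {"x": x}
    for step in STEPS:
        step(env)
        if intervene:                # re-pin the variable after each step
            var, val = intervene
            if var in env:
                env[var] = val
    return env["out"]

print(run(3))                       # factual: a=4, b=8, out=12
print(run(3, intervene=("a", 10)))  # counterfactual: a=10, b=20, out=30
```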

Generate a new computation which optimizes the output of the original computation via dynamic programming
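
A small illustration of that pattern, using rod cutting as a stand-in (the prices are made up): the original computation scores one fixed plan, and dynamic programming builds a new computation that finds the best achievable score.

```python
# Dynamic programming that optimizes the output of a given computation:
# the "computation" scores a fixed cut plan; DP searches over all plans.
prices = {1: 1, 2: 5, 3: 8, 4: 9}   # price of a rod piece, by length

def revenue(cuts):
    return sum(prices[c] for c in cuts)   # the original computation

def best_revenue(n):
    best = [0] * (n + 1)
    for length in range(1, n + 1):
        best[length] = max(prices.get(k, 0) + best[length - k]
                           for k in range(1, length + 1))
    return best[n]

print(revenue([2, 2]), best_revenue(4))  # 10 10: two 2-pieces are optimal
```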

Find summary statistics describing the computation (e.g. profiler, maximum forces encountered in a physical simulation, …?)

Embed one computation within another (e.g. a compiler embeds a computation specified in Python into the computation performed by a CPU)

Use knowledge of the internal structure of the two computations to optimize the embedding (e.g. an optimizing compiler)

Ask what parameters/outputs of the computation need to be observed in order to reliably deduce other parameters/outputs (e.g. inference, identifiability, & experiment design in causal models)

Ask whether observed data is consistent with the computation (e.g. regexes & CFGs; causal model comparison)
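
In the regex case this is just language membership: could the process have emitted this string? A tiny sketch (the pattern is an arbitrary example):

```python
import re

# Consistency check: could the observed string have been produced by
# the process? Here the process's output language is given by a regex.
process_outputs = re.compile(r"(ab)+c")

def consistent(observed: str) -> bool:
    return process_outputs.fullmatch(observed) is not None

print(consistent("ababc"))  # True
print(consistent("abac"))   # False
```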

Dependency: is the output (or some set of intermediates) independent of some internal value, or independent of the distinction between two (or more) possible values at some point in the computation?

Can we keep around a small amount of data sufficient to quickly reconstruct anything we might want to know about the full computation (i.e. intermediate values and execution path)?

Was any computation repeated (i.e. something we could optimize out via memoization)?
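
Naive Fibonacci is the classic demonstration: the unmemoized version repeats subproblems exponentially often, while memoization runs each distinct call once. A quick sketch using the standard library's cache:

```python
from functools import lru_cache

# Removing repeated computation via memoization: with the cache, each
# distinct subproblem fib(0)..fib(20) is computed exactly once.
calls = 0

@lru_cache(maxsize=None)
def fib(n):
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20), calls)  # 6765, with only 21 distinct calls
```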

Are there any unused patterns/symmetries in the computational DAG (i.e. potential for refactoring the code)?

Big-O runtime analysis

I’m also interested in more general meta-use-cases for computations, which generate use-cases that aren’t just “run the computation”. For instance, some patterns in the examples above:

When considering physical processes as computations (i.e. simulations), we often want to query/visualize intermediates in the computation in various ways

When treating computations algebraically—i.e. solving computation(input) = output given the computation and the output—we usually want to reason about the internal structure of the computation.

More generally, given only partial information from the computation, deducing other information usually involves operations other than just running the computation. Again, this is especially relevant when the computation simulates some physical process about which we have partial information.

Anything involving runtime, profiling, optimization, debugging, etc. will typically involve looking at the internal structure of the computation.

Any semantic question about the program (semantic = about the input/output relation, as opposed to syntactic = about the source code itself). Note that nontrivial semantic properties are undecidable, by Rice’s theorem. “Output” includes whether the computation halts.

Find a semantically equivalent computation that fulfills/maximizes some criterion:

more obviously correct

shorter in length

faster to run (on hardware x or model of computation y)

doesn’t use randomness

doesn’t leak info through side-channels

compliant with design pattern x

any other task done by a source-to-source compiler
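A miniature source-to-source pass gives the flavor of the "shorter/faster" items above. This sketch constant-folds literal arithmetic using Python's `ast` module (assumes Python 3.9+ for `ast.unparse`), producing a semantically equivalent but simpler program:

```python
import ast

# A tiny source-to-source optimization: fold arithmetic on literals.
class Fold(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, bottom-up
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, (ast.Add, ast.Sub, ast.Mult))):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            return ast.Constant(ops[type(node.op)](node.left.value,
                                                   node.right.value))
        return node

tree = ast.parse("y = (2 + 3) * x + (10 - 4)")
folded = ast.fix_missing_locations(Fold().visit(tree))
print(ast.unparse(folded))  # y = 5 * x + 6
```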

Find a computation that is semantically equivalent after applying some mapping to the input and/or output:

runs on encrypted input/output pairs (homomorphic encryption)

computation is reversible (required before running on a quantum computer)

redundantly encoded, adds metadata to the output, etc. Example: run a program on untrusted hardware in such a way that the result can be trusted (hardware exposed to outer space, folding@home, etc.)
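
To show only the *shape* of the homomorphic-encryption idea, here is a toy stand-in: "encrypt" by adding a secret mask mod N, and note that adding a public constant commutes with that mapping. Real schemes (e.g. Paillier) support far richer operations; everything here is made up for illustration and is not cryptography.

```python
# Toy homomorphic-style mapping: E(x) = x + KEY (mod N). Adding a
# public constant commutes with E, so a server can compute
# f(x) = x + 7 on ciphertext without learning x.
N, KEY = 1_000_003, 424_242

def encrypt(x):  return (x + KEY) % N
def decrypt(c):  return (c - KEY) % N

def f_plain(x):       return (x + 7) % N   # the original computation
def f_encrypted(ct):  return (ct + 7) % N  # same code, run on ciphertext

x = 123_456
print(decrypt(f_encrypted(encrypt(x))) == f_plain(x))  # True
```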

Any question about the set of programs performing the same computation.

this computation must take at least x time

this computation cannot be done by any program of length less than x (Kolmogorov complexity)

Treat the program as an anthropological artifact.

deduce the state of mind of the person that wrote the program

deduce the social environment in which the program was written

deduce the technology level required to make running the program practical

etc.

(Thanks for reminding me why I love CS so much!)


Substituting parts of the computation (replace a slow, correct algorithm for part of the computation with a fast, approximate one)
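
For instance, swapping an exact scan for a sampled estimate trades accuracy for speed. A sketch with made-up data (the tolerance below is a statistical bound, not a guarantee):

```python
import random

# Substituting a slow exact subroutine with a fast approximate one:
# estimate the mean from a random sample instead of scanning everything.
random.seed(0)
data = [i % 1000 for i in range(1_000_000)]   # mean is exactly 499.5

def mean_exact(xs):
    return sum(xs) / len(xs)          # O(n): touches every element

def mean_approx(xs, k=10_000):
    sample = random.sample(xs, k)     # O(k): touches only a sample
    return sum(sample) / k

exact = mean_exact(data)
approx = mean_approx(data)
print(exact, approx)                  # close, but not identical
```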

Formally verifying properties of the computation

Informally checking properties of the computation (is this algorithm for making loan decisions fair?)

Debugging the computation, or more generally “modifying the computation to do what you actually want”

The process used to generate a valid computation could itself be a valuable compression/training process of some sort. Characteristics of the final computation might be useful as feedback. I guess generalizing is just taking your listed use cases and seeing the computation in question as either up- or downstream from the thing we really care about.