Mind/Face requires more compute than generating with a single model, since 1) one must keep both sets of model weights in memory, and 2) when the Face "takes over" generation, it must recompute the entire KV-cache up to that token.
However, Mind/Face takes no more than 2x the compute of standard generation: you can always fall back to a naive approach that runs forward passes of both models in parallel at each generation step and appends whichever model's predicted token is appropriate, depending on whether we are in thinking or output mode. I expect this upper bound to be loose. We haven't thought much about optimizations yet, but as Luke mentioned, using a smaller model for the Face (whose job is easier) seems reasonable.
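For concreteness, here is a minimal sketch of that naive upper-bound scheme, written against HuggingFace-style causal LMs. The `mind`/`face` handles, the shared tokenizer, and the `</think>` mode-switch check are all illustrative assumptions, not the actual implementation; the point is just that every step costs two forward passes instead of one.

```python
import torch

def naive_mind_face_generate(mind, face, tokenizer, prompt,
                             max_new_tokens=256, think_close="</think>"):
    """Naive <=2x scheme: run BOTH models every step, keep whichever
    model is 'in charge' for the current mode. Assumes both models
    share `tokenizer` and that thinking ends at `think_close`."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    thinking = True  # assume generation starts inside the thinking block
    for _ in range(max_new_tokens):
        with torch.no_grad():
            # Two forward passes per step -- this is the 2x overhead.
            # (A real implementation would keep a separate KV-cache per
            # model instead of re-running the full prefix each step.)
            mind_logits = mind(ids).logits[:, -1, :]
            face_logits = face(ids).logits[:, -1, :]
        logits = mind_logits if thinking else face_logits
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy, for simplicity
        ids = torch.cat([ids, next_id], dim=-1)
        # Crude mode switch: hand over to the Face once thinking closes.
        if thinking and tokenizer.decode(ids[0]).rstrip().endswith(think_close):
            thinking = False
        if tokenizer.eos_token_id is not None and next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0])
```

Because both models always see the full prefix, the Face never needs to recompute a KV-cache when it takes over, which is exactly why the 2x bound holds.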