How exactly do you implement using GPT-2 for autocompletion? This usage is new to me.
In theory? You just generate a few random samples with the current text as the prefix and display them. In practice, there are already tools that do this: Talk to Transformer does autocomplete. Even better, IMO, is Deep TabNine for programming languages, trained on GitHub.
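To make the "in theory" part concrete, here's a minimal sketch of prefix-conditioned sampling. It assumes the Hugging Face `transformers` library (not mentioned above, just one common way to run GPT-2 locally); the function name and sampling parameters are illustrative choices, not anything the original tools necessarily use.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def suggest_completions(prefix, n_samples=3, max_new_tokens=20):
    """Sample a few continuations of `prefix` and return them as strings."""
    input_ids = tokenizer.encode(prefix, return_tensors="pt")
    outputs = model.generate(
        input_ids,
        do_sample=True,                 # random sampling, as described above
        top_k=40,                       # keep only the 40 most likely tokens per step
        max_new_tokens=max_new_tokens,
        num_return_sequences=n_samples, # "a few" candidate suggestions
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prefix so only the suggested continuation is shown to the user.
    return [
        tokenizer.decode(out[input_ids.shape[1]:], skip_special_tokens=True)
        for out in outputs
    ]

for suggestion in suggest_completions("The quick brown fox"):
    print(repr(suggestion))
```

An autocomplete UI would just call something like this on the text left of the cursor and display the candidates; the main practical work is latency (small model, short `max_new_tokens`, caching), not the sampling itself.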