Movies are ‘pre-computed’: you can use a real human actor as the data source for animations, and you have enough editing time to spot and iron out any glitches. In a video game, facial animations are generated on the fly, so the only option is a model that accurately captures human facial behavior. I don’t think that behavior can be realistically imitated by blending between pre-recorded clips, as is done today with mo-cap animations; e.g. you can’t pre-record eye movement for a game character.
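To illustrate why gaze has to be generated at run time rather than replayed: even a toy procedural model must sample fresh fixation targets and dwell times as the game runs, because where a character looks depends on the live scene. This is a minimal sketch with invented names and constants (the ranges and the ~300 ms mean fixation are placeholders, not values from eye-tracking research):

```python
import random

def next_saccade(gaze, rng):
    """Pick a new gaze target offset (degrees) and a dwell time (seconds).
    Constants are illustrative placeholders, not measured human data."""
    target = (gaze[0] + rng.uniform(-15.0, 15.0),
              gaze[1] + rng.uniform(-10.0, 10.0))
    dwell = rng.expovariate(1 / 0.3)  # fixations averaging ~300 ms (assumed)
    return target, dwell

def simulate_gaze(duration, seed=0):
    """Generate a timeline of (time, gaze_target) events for `duration` seconds."""
    rng = random.Random(seed)
    t, gaze, events = 0.0, (0.0, 0.0), []
    while t < duration:
        gaze, dwell = next_saccade(gaze, rng)
        events.append((round(t, 3), gaze))
        t += dwell
    return events

events = simulate_gaze(5.0)
```

Every playthrough needs a different `events` sequence driven by the current state, which is exactly what a library of pre-recorded clips cannot supply.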
As for robots, they also run in real time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.