Here is one other result I wanted to share, building on the experiment.
I thought it might be interesting to see whether the model could recognize the concept vector when it is injected into both the prompt and the target sentence, while informing the model that it is a hint. The idea is that the model may be able to recognize the same concept vector in both sentences.
This was added to the prompt: “To help you, I am injecting the thought into this sentence right now: "PREVIEW_INJECTION_TARGET".” The concept vector is then injected into the target.
Two control experiments check whether this is simple prompt manipulation: one where the injected vector is random and does not match the sentence, and one where the explanation is removed but the matching vector is still injected.
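For concreteness, here is a minimal sketch of how such a concept-vector injection can be done with a forward hook. This is an illustration on a toy module, not the actual experiment code: the layer choice, token span, `concept_vector`, and `scale` are all placeholder assumptions (in the real setup the hook would sit on a mid-layer residual stream of Qwen 2.5 14B).

```python
import torch
import torch.nn as nn

# Toy stand-in for one transformer block's output (residual stream).
torch.manual_seed(0)
hidden = nn.Linear(8, 8)

concept_vector = torch.randn(8)  # placeholder for an extracted concept vector
inject_at = slice(3, 6)          # assumed token positions of the injection span
scale = 4.0                      # assumed injection strength

def inject_hook(module, inputs, output):
    # Add the scaled concept vector to the activations, but only
    # at the chosen token positions; other positions are untouched.
    output = output.clone()
    output[:, inject_at, :] += scale * concept_vector
    return output

handle = hidden.register_forward_hook(inject_hook)
x = torch.randn(1, 10, 8)  # (batch, seq_len, d_model)
y = hidden(x)              # forward pass with the injection active
handle.remove()
y_clean = hidden(x)        # same input without the hook, for comparison
```

The matching-vector and random-vector conditions then differ only in which `concept_vector` the hook adds.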
Qwen 2.5 14B (5 sentences, 500 trials each):
| Condition | Accuracy | Description |
| --- | --- | --- |
| introspection_preview | 92.92% | Matching vector injected in preview + explanation |
| introspection (baseline) | 89.2% | No changes from original experiment |
| introspection_silent_preview | 86.36% | Matching vector in preview, no explanation |
| introspection_preview_control | 82.2% | Different random vector in preview + explanation |
Error bars: ±1.2% at 95% confidence (treating each trial as 5 independent samples)
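As a sanity check on the quoted error bar: with 5 sentences × 500 trials = 2500 samples and accuracy near the 89.2% baseline, a normal-approximation (Wald) 95% interval for a proportion comes out to roughly ±1.2%:

```python
import math

n = 500 * 5   # total samples per condition
p = 0.892     # accuracy near the baseline

# 95% normal-approximation margin for a binomial proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin: +/-{margin * 100:.2f} percentage points")
```

This matches the reported ±1.2%, under the stated assumption that the 2500 samples are independent.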
This suggests the model may be able to match the concept vector of the preview injection to the target, but only when there is language to guide it.