Teaching models to express their uncertainty in words


We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language, without use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. "90% confidence" or "high confidence"). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words ("verbalized probability") to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift. We also show evidence that GPT-3's ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers.
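The abstract does not specify how calibration is measured; a standard check for confidences like these is expected calibration error (ECE). The sketch below is illustrative only: the function name, binning scheme, and example data are assumptions, not taken from the paper.

```python
# Minimal sketch of checking calibration of verbalized confidences.
# Hypothetical data: each record pairs a stated confidence (a
# probability in [0, 1]) with whether the answer was actually correct.
def expected_calibration_error(confidences, corrects, n_bins=10):
    """Bin predictions by stated confidence, then compare each bin's
    mean confidence to its empirical accuracy (standard ECE)."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, corrects):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Example: "90% confidence" stated 10 times, correct 9 of 10 times,
# so the stated confidence matches the empirical accuracy (ECE ~ 0).
confs = [0.9] * 10
hits = [1] * 9 + [0]
print(round(expected_calibration_error(confs, hits), 3))
```

Verbalized levels such as "high confidence" would first need a fixed mapping to numeric probabilities before a metric like this applies.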
