Hi - I posted the below in r/OpenAI and r/OpenAIDev over the weekend with no feedback. I've run another couple of tests today and am still finding hallucinations in the reasoning summaries.
An example from today: https://chatgpt.com/share/66e9c659-350c-800a-8b8e-4270d4f377b2 - at the end of that chat there is a reference to preferences for motorbikes and aeroplanes, neither of which appears anywhere in the source content; it is entirely spurious.
> Weighing options. Considering Alice's interest in motorbikes and Bob's preference for airplanes, I'm taking a closer look at the technical content's appeal to Bob. It seems too intricate and might confuse him more than just explaining BGP security and FCC regulations.
At the rate I'm seeing these, surely others reviewing the outputs must be seeing something similar? As mentioned, I don't think it necessarily affects performance, but it would be good to hear from at least one other person who has been testing just how isolated or common this is.
Original post here, and reproduced below: https://www.reddit.com/r/OpenAI/comments/1fha6a2/hallucinations_spurious_tokens_in_reasoning/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Hi all; I've been testing o1-preview this weekend to see how it performs, and while reviewing the reasoning summaries I've spotted some strange outputs.
I've run a content-scoring benchmark 10 times (I need to keep some messages in reserve for the rest of the week...), and around half of the reasoning summaries contain either strange tokens or hallucinations. An example is here: [o1 benchmark 6 - chatgpt link](https://chatgpt.com/share/66e6c151-3724-800a-87b2-0eaf9a484f50) (expand the reasoning and the word/token "iphy" appears).
Other ones include:
- The phrase `Gary's technical jargon` appearing at the end of a reasoning block. (There is no reference to anyone called Gary in any of the input data.)
- The words `iphy` and `cRipplekin FCC` appearing spuriously in the reasoning outputs.
- The score calculated at the end of the reasoning not matching the emitted score (see screenshot).
With the underlying reasoning hidden, I have no idea whether this is an error in the summarisation or something coming from the underlying chain of thought.
| Run Number | Reasoning Steps | Refers to OpenAI Policy | Hallucination / Spurious Token |
| --- | --- | --- | --- |
| 1 | 7 | No | No |
| 2 | 5 | Yes | Yes ("Zoom") |
| 3 | 8 | Yes | Yes ("Gary's technical jargon") |
| 4 | 10 | Yes | No |
| 5 | 9 | Yes | Yes ("cRipplekin FCC") |
| 6 | 6 | No | Yes ("iphy") |
| 7 | 8 | Yes | No |
| 8 | 4 | No | No |
| 9 | 10 | Yes | Yes (Scoring) |
| 10 | 8 | Yes | No |
These 10 runs were spread across Friday night and Saturday morning, so I don't know whether this was a temporary issue or not.
Has anyone else been reviewing the reasoning steps and spotted anything similar?
I've written up the results [here](https://llmindset.co.uk/posts/2024/09/openai-o1-first-impressions/) for anyone interested.