After the quoted thread generated furious debate about whether Bing AI keeps track of board position when given a chess game, it occurred to me to just ask it! It got 63 of 64 squares correct; it missed the pawn on b5. Accuracy degrades as the game gets longer.
Great example of why you shouldn't read too much into analogies like "bullshit generator" and "blurry JPEG of the web". The best way to predict the next move in a sequence of chess moves is… to build an internal model of chess rules and strategy, which Sydney seems to have done.
Wouldn't the fact that accuracy degrades suggest that it is not an abstracted model or internal representation but simply a probabilistic reply to your very specific prompt?
Reasonable people can disagree and I know I won't change anyone's mind, but here's why I see it differently. Gradual accuracy degradation suggests an internal representation exists but with a small chance of error in updating it, and that error accumulates with move count. 1/n
Without an internal representation, I think accuracy would drop exponentially, and there's no way it would get to 25 ply without losing the plot completely. I don't know of any probabilistic mechanism that can simulate state updates without actually doing state updates.
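The argument above can be sketched as a quick simulation. This is a hypothetical toy model, not a measurement of Bing AI: hypothesis A keeps a board state but corrupts one random square with some small per-move probability (errors persist, so expected wrong squares grow roughly linearly); hypothesis B has no state, so the chance of a fully correct board decays exponentially with move count. The error rates (0.04, 0.9) are made up to illustrate the shapes of the two curves.

```python
import random

def stateful_accuracy(n_moves, p_err=0.04, n_squares=64, trials=2000):
    """Hypothesis A: the model tracks a board and updates it each move,
    but each update corrupts one random square with probability p_err.
    Errors persist, so expected wrong squares grow ~linearly in moves."""
    total = 0
    for _ in range(trials):
        wrong = set()
        for _ in range(n_moves):
            if random.random() < p_err:
                wrong.add(random.randrange(n_squares))
        total += n_squares - len(wrong)
    return total / trials  # mean squares correct out of 64

def memoryless_accuracy(n_moves, q=0.9, n_squares=64):
    """Hypothesis B: no state; the reply stays coherent each move only
    with probability q, so full-board accuracy decays as q**n_moves."""
    return n_squares * (q ** n_moves)

for n in (5, 25, 50):
    print(n, round(stateful_accuracy(n), 1), round(memoryless_accuracy(n), 1))
```

At 25 ply, hypothesis A is still around 63/64 squares (matching the observation in the first tweet), while hypothesis B has already collapsed to a handful of correct squares.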
I suspect degradation is just due to the context size limitation. You should try it again in a couple of weeks.
Why would context size make a difference? Surely the current window is big enough to store 100 or even 200 ply of chess moves? Each move is only 3-5 chars so even in the worst case where each char was its own token…
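The back-of-envelope arithmetic behind this point, assuming the worst case where every character is its own token (real tokenizers pack several characters per token, so the true count would be lower):

```python
# Worst case: one token per character of move text.
max_ply = 200
chars_per_move = 5           # upper end of the 3-5 char estimate, incl. separator
tokens_needed = max_ply * chars_per_move
print(tokens_needed)         # 1000 -- far below a context window of several thousand tokens
```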
Maybe you are correct, but the inner monologue can be much longer than the output - and I suspect it is in this case, though I haven't looked.

Mar 4, 2023 · 6:36 PM UTC

Wow, how does the inner monologue work? Another prompt running in parallel to the conversation that’s non user-accessible?
The edge-running present and near-future norm for transformers is multiple role-specific models orchestrated together, metabolising varied types of memory storage and attentional context-window inputs, like cells interlinked.
Please look into it, sir. For 2 days my Bing SERPs haven't been showing, but Chat mode responses are (I try to avoid Chat because of the limit). I tried: 1. Signing out & signing back in 2. Clearing Bing cookies 3. Reinstalling Edge 4. Changing "Chat response on result page" in Bing "Labs" to More frequent
Interested in how this inner monologue squeezes out stuff from the beginning of the prompt. It didn't in these two examples I tried.
@WickedViper23 reminded me of you playing around with injecting thoughts
Hmm, it would be amazing if it is storing the board in the inner monologue. The few-shot examples do not seem to encourage it to do such a thing.