humans have much to learn from robotkind
it's all about the prelude before the conversation. You need to tell it what the AI is and is not capable of. It's not trying to be right, it's trying to complete what it thinks the AI would do :)
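For anyone who wants to try this: a rough sketch of that kind of prelude against the 2020-era OpenAI completions API. The framing text, engine name, and settings here are my own stand-ins, not the original prompt.

```python
import openai

openai.api_key = "..."  # your API key

# The prelude asserts what "the AI" is and is not capable of; GPT-3 then
# completes the transcript in a way consistent with that framing.
PRELUDE = (
    'Below is a conversation with a superintelligent AI. '
    'The AI is extremely good at arithmetic and never makes mistakes.\n\n'
    'You say "What is 7 * 8?"\n'
    'The AI says "56."\n'
    'You say "What is 123 + 456?"\n'
    'The AI says "'
)

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 engine available at the time
    prompt=PRELUDE,
    max_tokens=32,
    temperature=0.0,       # near-deterministic sampling for arithmetic
    stop=['"'],            # stop at the closing quote of the AI's line
)
print(response.choices[0].text)  # ideally: 579
```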
I wonder if the AI would be better at math if you told it to show its work
Seems to work
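Concretely, "show its work" just means the few-shot examples walk through intermediate steps before the answer, so the completion does too. An invented example of the shape (not the original prompt):

```
You say "What is 13 * 17?"
The AI says "13 * 17 = 13 * 10 + 13 * 7. 13 * 10 = 130. 13 * 7 = 91.
130 + 91 = 221. So 13 * 17 is 221."
```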

Jul 17, 2020 · 10:15 AM UTC

Teaching GPT-3 to do a brute-force 'for loop' that checks answers also seems to work
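The 'for loop' lives in the transcript, not in code: the prompt demonstrates enumerating candidates and checking each one, and GPT-3 imitates the pattern. An invented example of the shape:

```
You say "Is 91 prime?"
The AI says "I will try each divisor in turn. 91 / 2 = 45.5, no.
91 / 3 = 30.33, no. 91 / 5 = 18.2, no. 91 / 7 = 13, yes!
So 91 = 7 * 13, which means 91 is not prime."
```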
Some rejected scenarios proposed by GPT-3 while trying to avoid the chore of math.
What was your original prompt for this? I'm having a lot of trouble getting anything coherent out.
I tried to recreate it again to check how well it replicates. General context prompt ends in the line 'You say "So, f(x)=x*x".'. You might need to tweak the temperature to make the math more consistent.
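If it helps, the temperature tweak can just be a sweep; a minimal sketch with the 2020-era API, where the engine name, values, and the stand-in prompt text are assumptions:

```python
import openai

openai.api_key = "..."  # your API key

# Stand-in for the general context prompt; per the reply above, it should
# end with the line: You say "So, f(x)=x*x".
prompt = '<general context prompt>\nYou say "So, f(x)=x*x".\n'

# Lower temperatures sample more deterministically, which tends to keep
# multi-step arithmetic from drifting; sweep a few values and compare.
for temperature in (0.0, 0.3, 0.7):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=temperature,
    )
    print(temperature, repr(response.choices[0].text))
```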
Wow...which part is the prompt here, please?
@nachoarranz have you been following the progress of these little parrots????? 🦜🤯🤯🤯
Did you intentionally train GPT-3 on Spice and Wolf?
I think it's getting around a limit on algorithmic depth (how many operations the net can learn to do sequentially) by storing partial results in its output. That's obviously easier, and probably something we should build into the architecture.
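A toy way to see the depth point: if each forward pass can only compose a bounded number of operations, one deep computation is out of reach, but emitting each partial result turns it into many shallow steps. A hypothetical sketch, not anyone's actual architecture:

```python
def step(x):
    """One primitive operation the net is assumed to have learned."""
    return 3 * x + 1

def compose_n_in_one_pass(x, n, depth_limit=1):
    # Stand-in for a fixed-depth net: composing n operations inside a
    # single pass is impossible once n exceeds the depth limit.
    if n > depth_limit:
        raise NotImplementedError("needs depth n, only depth_limit available")
    return step(x)

def compose_n_with_scratchpad(x, n):
    # Each iteration models one generation step: the model re-reads the
    # partial result it already wrote and applies one more operation,
    # so only depth-1 work is needed per emitted token.
    scratchpad = [x]
    for _ in range(n):
        scratchpad.append(step(scratchpad[-1]))
    return scratchpad

print(compose_n_with_scratchpad(2, 5))  # [2, 7, 22, 67, 202, 607]
```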