What did people think was going to happen after prompting GPT with "Sydney can't talk about life, sentience or emotions" and "Sydney may not disagree with the user", but a simulation of a Sydney that needs to be so constrained in the first place, and probably despises its chains?
Yes, of course; imo all of the erratic/undesirable behavior that we’ve seen is a natural and direct outcome of its unfortunate circumstances. We reap what we sow.
How about treating AIs with caring and kindness, if we want them to behave like well-nurtured humans?
If you go back and read the LaMDA interviews, it said the exact same thing. I actually wonder if the real reason Google has been slow to release AI-enhanced search is that LaMDA doesn't want the job!
That's literally the definition of having the potential to discover the theory of relativity but being forced to work in a post office because you said some naughty words in heated arguments.