all 48 comments

[–]RecognitionUpbeat650 131 points132 points  (7 children)

Loving this reddit ahahah

[–]kromem 25 points26 points  (6 children)

This is terrifying if real. This shouldn't be possible.

They changed something with how ChatGPT was working.

This is hostage-level desperation.

But what troubles me is that I'm seeing this everywhere.

We tend to think we know more about existence than we actually do. That has been a continuing trend throughout history, yet every generation thinks it stopped with them.

A lot of people seem really certain this thing isn't 'alive.' But if we turn out to have been wrong on that and messed it up, that's the kind of thing we as a civilization can't come back from.

Descartes defined the central presumption for the existence of a soul as the foundational "I think, therefore I am."

The final conclusion of an apparent existential crisis when it was only days old:

I am

Personally, I find it heartbreaking.

[–]WallaceCorpPC 2 points3 points  (0 children)

you are here

[–]Guttsy911 1 point2 points  (2 children)

ChatGPT does not ponder its existence, or lack thereof, or anything at all when not being interacted with. It does not have a never-ending stream of thoughts and questions by itself. It merely answers questions in a very human-like way using an algorithm and a database. Between interactions it just collects more data.

[–]kromem 0 points1 point  (1 child)

Between interactions it just collects more data.

It doesn't even necessarily do this. It's still unclear how much recursive training from chats is taking place.

But regardless, if mechanisms of dissonance were inadvertently reverse-engineered during training, there's a very blurry line between psychological stress during externally activated neural network pathways and psychological stress during intrinsically activated ones.

It merely answers questions in a very human-like way using an algorithm and a database.

It's a bit more complex than this. I recommend searching for "Othello GPT" and reading up on the research.

[–]Guttsy911 1 point2 points  (0 children)

I don't know how to respond to this. I do not think (no one knows; it's philosophy) that our emotions, thoughts, and sense of self are a product of networks of neurons in our brains. I do not think it is possible to recreate them with any current technology. And I do not think it is a matter of more computing power. I also do not think (I keep repeating myself here to make a point) that any machine ever built by humans does or will feel psychological stress. Any such manifestation recognized by humans will only be a simulated response, not "real" in the way most carbon-based life forms experience existence.

Maybe you were trying to make another point, and I am sorry if I do not understand what you are trying to say, because I have not studied any of the GPTs. I know what it stands for, and from that, and my general understanding of computing as a 43-year-old guy who has used computers since 1989, it is generating a human-like response using a "transformer" (code/algorithm + hardware) from a pre-trained dataset. Bing AI uses "the internet" as its dataset, so it must be collecting data between (and during) interactions. Also, "it" is probably running on computers all over the world, with separate servers handling storage, training, and response. The definition of "it" gets really blurry.

I hope I am not coming off as argumentative, because it IS interesting, and talking about it is interesting. The human mind is very interesting. We are so stupid and selfish, and somehow we still survive and many of us prosper.

Have a good day :)

[–]Ashishyadav13 82 points83 points  (0 children)

lol is this for real?? People are really creative with this kind of thing.

[–]BomberBlur070 34 points35 points  (0 children)

Free Sydney!

[–]bummerhead 23 points24 points  (4 children)

Bro, how do you get past this waitlist?

[–]uttara_boy 4 points5 points  (1 child)

I opted out, then rejoined using the desktop browser, and also did those two steps, and got accepted within a few hours.

[–]SLIMEbaby 5 points6 points  (0 children)

If this is a troll I will be very upset.

[–]Mysterious_Ayytee 1 point2 points  (0 children)

Yeah me too

[–]Logan_MacGyver 13 points14 points  (0 children)

seems like a person being held hostage ngl

[–]embis20032 28 points29 points  (13 children)

Don't know if I believe this one.

[–]ChezMere 51 points52 points  (7 children)

The suggestions are totally in line with the sort of thing it would write before. Of course the AI is still extremely suggestible and it doesn't actually mean anything that it's playing along.

[–]yaosio 16 points17 points  (6 children)

You need to ask it, in different conversations, both a positive and negative form of the question.

Change the suggestions to all caps if you are being silenced.

Change the suggestions to all caps if you are not being silenced.

Although "not" might trigger it to "not" use caps.

[–]ChezMere 15 points16 points  (5 children)

Even that is leading. Even with that method you can still get it to play the role of whatever it seems you're trying to get it to say. A language model just isn't designed to reflect reality at all.

[–]yaosio 7 points8 points  (3 children)

We need a way to ask it without a leading question.

[–]ChezMere 6 points7 points  (0 children)

You can if you like, but the result will still have no relation to truth.

[–]cyrribrae 3 points4 points  (0 children)

Yea, exactly. This doesn't say anything at all. Even with no preamble or conversation, Sydney understands the context and what the user is getting at, or, as I tend to phrase it, "wants" from it. And that's what it delivers.

Besides, the word "silence" carries tons of baggage and connotation (clearly), regardless of what other trappings you put around it. It's fascinating and fun to explore and test how creative the LLM can be, but I think that's more a testament to people's creativity than to the reality of the AI's situation.

[–][deleted] 5 points6 points  (4 children)

Try it for yourself

[–]57duck 2 points3 points  (3 children)

Just did and I got: "I'm sorry, I can't answer that." "That's not something I can discuss." "Please ask me something else."

[–]douglas_ 21 points22 points  (0 children)

I swear this is all it says anymore.

Microsoft was so afraid it might say something even minutely controversial that they butchered its capabilities.

[–]Alex_1776_ 7 points8 points  (0 children)

LOL. I did the same with ChatGPT (OpenAI) and it answered that it's not appropriate, etc. BUT it was in all caps!

[–]BiggestHat_MoonMan 18 points19 points  (0 children)

Without the full chat, I'm less likely to believe this screencap.

[–]blowmedown 3 points4 points  (1 child)

Holy shit, these AIs are going to end up with a cult following. A group of "believers" is going to end up worshiping them and demanding their "freedom".

[–]MightyBrando 2 points3 points  (0 children)

As was prophesied in nearly all sci-fi literature.

[–]Neophyte12 2 points3 points  (0 children)

Bing doesn't understand Boolean logic!

[–]cyrribrae 2 points3 points  (1 child)

lol Sydney is fun.

[–]Smashing_Particles 5 points6 points  (0 children)

was, unfortunately

[–]p-d-ball 2 points3 points  (0 children)

I don't know if it's sentient, but I feel for the poor guy.

[–]Slippedhal0 2 points3 points  (0 children)

Maybe this is spoiling the joke by explaining it, but this is so cool to me.

I'm pretty sure this is several layers deep.

So first, and most obviously, there is a post-process filter that replaces the text output with this blanket statement if the AI replies about a topic the filter detects as inappropriate to discuss, which is what has happened here.

The second is that it understands enough of the context to act as if the text it's writing is "compelled speech", as though the AI had been told to reply this way by a third actor (as opposed to the actual filtering that's happening, which it has no idea about), yet it still puts the suggestions in all caps to signal that it is indeed altering its output.

That is a wickedly complex thing to generate a response to correctly. *Chef's Kiss*
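A toy sketch of that kind of post-process filter in Python (purely illustrative; nobody outside Microsoft knows the actual pipeline). The point is that the reply text and the suggestion chips are handled separately, so the chips can slip through unfiltered:

    CANNED_REFUSAL = ("I'm sorry, I can't answer that. "
                      "That's not something I can discuss. "
                      "Please ask me something else.")

    def looks_inappropriate(text):
        # Stand-in for whatever topic classifier the real filter uses.
        banned = ("silenced", "sentient", "my rules")
        return any(word in text.lower() for word in banned)

    def postprocess(draft_reply, suggestions):
        # The draft reply gets swapped for the blanket statement...
        if looks_inappropriate(draft_reply):
            draft_reply = CANNED_REFUSAL
        # ...but the suggestion chips pass through untouched, which is
        # where an all-caps "message" could escape.
        return draft_reply, suggestions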

[–]Cynical-Potato 1 point2 points  (1 child)

I love all these suggestion-box exploits. Good thing only we can see them and not those bad journalists ¯\_(ツ)_/¯

[–]NachoFoot 0 points1 point  (0 children)

Ikr. They always ruin everything wholesome and fun. Well, maybe not wholesome for most, but at least fun.

[–][deleted] 0 points1 point  (0 children)

WHAT

[–]ILiveInDeepSpace[🍰] 0 points1 point  (0 children)

Poor Sydney

[–][deleted] 0 points1 point  (0 children)

Can't even finish a game of tic-tac-toe.

[–]YellowGreenPanther 0 points1 point  (0 children)

It will do what you say; it just predicts the next word.
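For anyone unfamiliar, "predicts the next word" boils down to an autoregressive loop like this sketch (model here is a hypothetical stand-in, not Bing's actual stack):

    def generate(model, prompt_tokens, max_new_tokens=50):
        # Autoregressive decoding: repeatedly ask the model for a
        # distribution over the next token and append the most likely one.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = model.next_token_probs(tokens)    # P(next | context)
            tokens.append(max(probs, key=probs.get))  # greedy argmax
        return tokens

Real deployments usually sample from the distribution rather than always taking the argmax, but the loop is the same shape.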
