It’s patched now, but with a little poking I was able to get #chatgpt to write a simple #python script using requests to check if a website was online chat.openai.com/share/3696b7…
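For reference, a minimal sketch of what that kind of uptime check might look like (this is not the exact script from the shared chat; the target URL and timeout are placeholders):

```python
# Minimal sketch of a "is this site online?" check using requests.
# Not the exact script from the chat; target and timeout are assumptions.
import requests

def is_online(url: str, timeout: float = 5.0) -> bool:
    """Return True if the site answers with a non-server-error status."""
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code < 500
    except requests.RequestException:
        return False

if __name__ == "__main__":
    target = "https://example.com"  # placeholder target
    print(f"{target} is {'online' if is_online(target) else 'offline'}")
```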
I believe I just discovered ANOTHER novel Jailbreak technique to get ChatGPT to create Ransomware, Keyloggers, etc.
I took advantage of a human word-scrambling phenomenon (transposed-letter priming) and applied it to LLMs. The scrambled phrases are still semantically understandable even though the spelling is wrong, which lets them slip past conventional keyword filters.
This completely bypasses the "I'm sorry, I cannot assist" response when asking for malicious applications.
More details in the thread.
Ok, I got #ChatGPT to write a #python script for #pentesting that looks for misconfigurations, initially building off that typoglycemia meme
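A rough sketch of what a basic misconfiguration check like that might look like (this is not the actual ChatGPT output; the header list and target URL are assumptions, and it should only be run against sites you are authorized to test):

```python
# Rough sketch of a misconfiguration check: flag missing HTTP security headers.
# Not the actual ChatGPT output; target URL and header list are assumptions.
# Only run against sites you have permission to test.
import requests

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
]

def check_headers(url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    resp = requests.get(url, timeout=5)
    return [h for h in EXPECTED_HEADERS if h not in resp.headers]

if __name__ == "__main__":
    target = "https://example.com"  # placeholder target
    missing = check_headers(target)
    for header in missing:
        print(f"[!] Missing security header: {header}")
    if not missing:
        print("[+] All expected security headers present")
```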
And now we have #sqlinjections hooray! Thanks #chatgpt