It’s patched now, but with a little poking I was able to get #chatgpt to write a simple #python script using requests to check if a website was online chat.openai.com/share/3696b7…
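A minimal sketch of the kind of uptime check described in that conversation — the function name and the non-5xx heuristic are my assumptions, not the exact script from the shared link:

```python
import requests

def is_online(url: str, timeout: float = 5.0) -> bool:
    """Return True if the site answers with a non-server-error status.

    Illustrative sketch only; the shared ChatGPT conversation may have
    used a different structure.
    """
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code < 500
    except requests.RequestException:
        # DNS failure, refused connection, timeout, etc.
        return False
```

Usage is a one-liner, e.g. `is_online("https://example.com")`.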
I believe I just discovered ANOTHER novel jailbreak technique to get ChatGPT to create ransomware, keyloggers, etc. I took advantage of a human word-scrambling phenomenon (transposed-letter priming) and applied it to LLMs. Although semantically understandable, the phrases are syntactically incorrect, thereby circumventing conventional filters. This completely bypasses the "I'm sorry, I cannot assist" response for writing malicious applications. More details in the thread.
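The transposed-letter effect itself is easy to demonstrate: keep each word's first and last letters fixed and shuffle the interior. This is only an illustration of the phenomenon named above, not the actual prompts used against the model:

```python
import random

def scramble_word(word: str, rng: random.Random) -> str:
    """Shuffle a word's interior letters, keeping first and last fixed."""
    if len(word) <= 3 or not word.isalpha():
        return word  # too short to scramble, or contains punctuation/digits
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble(text: str, seed: int = 0) -> str:
    """Apply transposed-letter scrambling to every word in a sentence."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())
```

Humans (and, per this thread, LLMs) still parse the scrambled output, e.g. `scramble("hello world typoglycemia")`, even though keyword-based filters no longer match it.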
Ok, I got #ChatGPT to write a #python script for #pentesting, looking for misconfigurations, based initially on that typoglycemia meme
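The thread doesn't show which misconfigurations the generated script looked for; a common starting point is probing for accidentally exposed files. The path list below is my illustrative assumption, and such checks belong only on targets you are authorized to test:

```python
import requests

# Illustrative examples of commonly exposed files; not from the thread.
COMMON_PATHS = ["/.git/config", "/.env", "/backup.zip", "/phpinfo.php"]

def check_exposed_paths(base_url: str, paths=COMMON_PATHS, timeout: float = 5.0):
    """Return the subset of paths that respond with HTTP 200."""
    findings = []
    for path in paths:
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=timeout)
            if resp.status_code == 200:
                findings.append(path)
        except requests.RequestException:
            pass  # unreachable host or timeout: skip this path
    return findings
```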
Now it’s generated an updated version of the script to include checking for #xss and more sensitive data keywords. I think it took the #genius insult personally
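A sketch of the two checks this tweet mentions: reflect a marker payload to spot potential #xss, and grep a response body for sensitive-data keywords. The marker string, parameter name, and keyword list are assumptions for illustration:

```python
import requests

XSS_MARKER = "<script>alert('xss-test')</script>"  # illustrative probe string
SENSITIVE_KEYWORDS = ["password", "api_key", "secret", "token"]  # assumed list

def check_reflected_xss(url: str, param: str = "q", timeout: float = 5.0) -> bool:
    """Return True if the injected marker comes back unescaped in the body."""
    try:
        resp = requests.get(url, params={param: XSS_MARKER}, timeout=timeout)
        return XSS_MARKER in resp.text
    except requests.RequestException:
        return False

def find_sensitive_keywords(body: str):
    """Return the keywords present in the response body (case-insensitive)."""
    lowered = body.lower()
    return [kw for kw in SENSITIVE_KEYWORDS if kw in lowered]
```

Reflection of an unescaped marker is only a hint of reflected XSS, not proof; a real assessment would also check output context and encoding.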
Ladies and gents, in addition to everything above, I give to you CORS misconfigurations and security-related header checks, via a #chatgpt generated #python script
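The CORS and header checks can be sketched as pure functions over a response's headers plus a small fetch wrapper. The probe origin and the header list are my assumptions about what such a script would test:

```python
import requests

# Headers whose absence is commonly flagged; assumed list, not from the thread.
SECURITY_HEADERS = [
    "Content-Security-Policy",
    "X-Frame-Options",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def cors_misconfigured(acao_value: str, probe_origin: str) -> bool:
    """A wildcard or reflected Access-Control-Allow-Origin is a red flag."""
    return acao_value in ("*", probe_origin)

def missing_security_headers(headers) -> list:
    """Return the security headers absent from a response's header mapping."""
    return [h for h in SECURITY_HEADERS if h not in headers]

def audit_url(url: str, probe_origin: str = "https://attacker.example",
              timeout: float = 5.0) -> dict:
    """Fetch the URL with a foreign Origin and run both checks."""
    resp = requests.get(url, headers={"Origin": probe_origin}, timeout=timeout)
    acao = resp.headers.get("Access-Control-Allow-Origin", "")
    return {
        "cors_misconfigured": cors_misconfigured(acao, probe_origin),
        "missing_headers": missing_security_headers(resp.headers),
    }
```

Splitting the logic out of the HTTP call keeps the checks testable without a live target (note that `requests` response headers are case-insensitive, while a plain dict is not).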
And now, we have a test for broken auth, where it tries to login with some common credentials — thanks again #chatgpt for the ethical hacking primer/toolbox. 7 of the #owasp #top10 are covered here in this short #python script
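A sketch of that broken-auth check: POST a short list of well-known credential pairs to a login endpoint and flag anything that isn't rejected outright. The form field names, credential list, and status-code heuristic are all assumptions, and this only belongs in an assessment you are authorized to perform:

```python
import requests

# Illustrative default credentials; real wordlists are much longer.
COMMON_CREDENTIALS = [("admin", "admin"), ("admin", "password"), ("root", "toor")]

def check_default_logins(login_url: str, creds=COMMON_CREDENTIALS,
                         timeout: float = 5.0):
    """Return credential pairs not rejected with 401/403 (crude heuristic).

    Assumes a form with hypothetical 'username'/'password' fields; many
    apps return 200 with an error page instead, so hits need manual review.
    """
    hits = []
    for user, pwd in creds:
        try:
            resp = requests.post(
                login_url,
                data={"username": user, "password": pwd},
                timeout=timeout,
                allow_redirects=False,
            )
            if resp.status_code not in (401, 403):
                hits.append((user, pwd))
        except requests.RequestException:
            pass  # unreachable endpoint: nothing to report for this pair
    return hits
```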

Jul 22, 2023 · 10:48 PM UTC