Bibliography:

  1. ‘crypto’ tag

  2. ‘stylometry’ tag

  3. Spaced Repetition for Efficient Learning

  4. Float Self-Tagging

  5. Scalable Watermarking for Identifying Large Language Model Outputs

  6. Invisible Unicode Text That AI Chatbots Understand and Humans Can’t? Yep, It’s a Thing

  7. Let’s Think Dot by Dot: Hidden Computation in Transformer Language Models

  8. Excuse me, sir? Your language model is leaking (information)

  9. Preventing Language Models From Hiding Their Reasoning

  10. Let Models Speak Ciphers: Multiagent Debate through Embeddings

  11. LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

  12. The Hydra Effect: Emergent Self-repair in Language Model Computations

  13. Investigating the Existence of ‘Secret Language’ in Language Models

  14. Undetectable Watermarks for Language Models

  15. Perfectly Secure Steganography Using Minimum Entropy Coupling

  16. Hide Chopin in the Music: Efficient Information Steganography via Random Shuffling

  17. Analyzing and Improving the Image Quality of StyleGAN

  18. CycleGAN, a Master of Steganography

  19. Mark of Integrity

  20. Wikipedia Over DNS

  21. Notes on a Strange World: Houdini’s Impossible Demonstration

  22. An Information-Theoretic Model for Steganography

  23. The Numerical-Astrological Ciphers in the Third Book of Trithemius’s Steganographia

  24. A Mystery Unraveled, Twice

  25. Solved: The Ciphers in Book III of Trithemius’s Steganographia

  26. Schwarzweisse Magie: Der Schlüssel zum dritten Buch der Steganographia des Trithemius [Black-and-White Magic: The Key to the Third Book of Trithemius’s Steganographia]

  27. The Advent of Cryptology in the Game of Bridge

  28. I Made a Custom GPT That Incorporates Advertisement/Product Placement With Its...

  29. Steganography and the CycleGAN—Alignment Failure Case Study

  30. Steganography in Chain-of-Thought Reasoning

  31. http://underhanded-c.org/_page_id_17.html

  32. https://blog.trailofbits.com/2019/11/01/two-new-tools-that-tame-the-treachery-of-files/

  33. https://eprint.iacr.org/2021/686

  34. https://github.com/aaronjanse/dns-over-wikipedia

  35. https://hakaimagazine.com/news/the-military-wants-to-hide-covert-messages-in-marine-mammal-sounds/

  36. https://hforsten.com/identifying-stable-diffusion-xl-10-images-from-vae-artifacts.html

  37. https://logicmag.io/security/tracing-paper/

  38. https://meteorfrom.space/

  39. https://www.atlasobscura.com/articles/in-the-1970s-the-us-navy-tried-to-talk-like-whales

  40. https://www.cabinetmagazine.org/issues/40/sherman.php

  41. https://www.lesswrong.com/posts/bwyKCQD7PFWKhELMr/by-default-gpts-think-in-plain-sight#zfzHshctWZYo8JkLe

  42. https://www.nature.com/articles/s41587-019-0356-z

  43. https://x.com/goodside/status/1790294534670176336

  44. https://arxiv.org/abs/2404.15758 (“Let’s Think Dot by Dot”)

  45. https://arxiv.org/abs/2310.05736 (“LLMLingua”)

  46. https://www.nytimes.com/1998/04/14/science/a-mystery-unraveled-twice.html (“A Mystery Unraveled, Twice”)

  47. /doc/cs/cryptography/steganography/1998-reeds.pdf (“Solved: The Ciphers in Book III of Trithemius’s Steganographia”)