Bibliography (55):

  1. Interacting with GPT-2 to Generate Controlled and Believable Musical Sequences in ABC Notation

  2. Folk the Algorithms

  3. The Session

  4. ABC Standard v2.1

  5. Folk Music Style Modeling Using LSTMs

  6. Folk RNN – Generate Folk Tunes with a Recurrent Neural Network

  7. IraKorshunova/folk-rnn: Folk Music Modeling with LSTM

  8. folk-rnn Search Results

  9. Music Transformer: Generating Music with Long-Term Structure

  10. Music Transformer

  11. Encoding Musical Style with Transformer Autoencoders

  12. Generating Long Sequences with Sparse Transformers

  13. MuseNet: a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles

  14. GPT-2 Preference Learning for Music Generation

  15. GPT-2 Neural Network Poetry

  16. Guide to Advanced Features of abc2midi

  17. TiMidity++ Download

  18. Data Formatting Issue: Missing Unique IDs & Stray HTML · Issue #9 · IraKorshunova/folk-Rnn

  19. Emacs Manual: Keyboard Macros § The Keyboard Macro Counter

  20. GPT-2 Neural Network Poetry § Data: The Project Gutenberg Poetry Corpus

  21. Data Formatting Issue: Missing Unique IDs & Stray HTML · Issue #9 · IraKorshunova/folk-rnn

  22. Machine Folk from GPT-2!

  23. GPT-3 Creative Fiction § BPEs

  24. Shawn Presser (@theshawwn) on X

  25. GPT-2 Neural Network Poetry § GPT-2-1.5b

  26. BPE Blues

  27. Shawn Presser on SoundCloud

  28. RNN Metadata for Mimicking Author Style

  29. CTRL: A Conditional Transformer Language Model for Controllable Generation

  30. 2019-12-07-gwern-shawnpresser-gpt2-music-abccombined-nospaces.tar

  31. 2019-12-04-gpt2-abc-alldata.tar.xz

  32. Yn Bollan Bane (Jig) on The Session

  33. 2019-11-09-ynbollanbane.txt

  34. WaveNet: A Generative Model for Raw Audio

  35. Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

  36. The challenge of realistic music generation: modeling raw audio at scale

  37. Table 1 ("Attention Is All You Need", Vaswani et al 2017): Maximum Path Lengths, Per-Layer Complexity and Minimum Number of Sequential Operations for Different Layer Types. _n_ is the sequence length, _d_ the representation dimension, _k_ the kernel size of convolutions, and _r_ the neighborhood size in restricted self-attention

  38. Efficient Attention: Breaking the Quadratic Transformer Bottleneck

  39. Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions

  40. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

  41. TensorFlow Research Cloud (TRC): Accelerate your cutting-edge machine learning research with free Cloud TPUs

  42. Accidentally Quadratic

  43. Image GPT (iGPT): We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples

  44. iGPT: Generative Pretraining from Pixels

  45. Reformer: The Efficient Transformer

  46. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

  47. The Lakh MIDI Dataset V0.1

  48. https://composing.ai/dataset

  49. The Largest MIDI Collection on the Internet (r/WeAreTheMusicMakers, Reddit)

  50. midi2abc: Program to Convert MIDI Format Files to ABC Notation

  51. ABC Standard v2.1 § Commenting

  52. 2020-04-12-gwern-gpt-2-117m-midi-30588051.tar.xz

  53. AI Dungeon 2

  54. 2020-04-18-gpt2-117m-midi-samples.txt

  55. https://news.ycombinator.com/item?id=22981634