- Interacting with GPT-2 to Generate Controlled and Believable Musical Sequences in ABC Notation
- Folk the Algorithms
- The Session
- Abc:standard:v2.1
- Folk Music Style Modeling Using LSTMs
- Folk RNN – Generate Folk Tunes With a Recurrent Neural Network
- IraKorshunova/folk-rnn: Folk Music Modeling With LSTM
- folk-rnn Search Results
- Music Transformer: Generating Music with Long-Term Structure
- Music Transformer
- Encoding Musical Style with Transformer Autoencoders
- Generating Long Sequences with Sparse Transformers
- MuseNet: a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles
- GPT-2 Preference Learning for Music Generation
- GPT-2 Neural Network Poetry
- Guide to Advanced Features of Abc2midi
- TiMidity++ Download
- Data Formatting Issue: Missing Unique IDs & Stray HTML · Issue #9 · IraKorshunova/folk-rnn
- Emacs Manual: Keyboard Macros: 17.3. The Keyboard Macro Counter
- gpt-2#data-the-project-gutenberg-poetry-corpus
- Machine Folk from GPT-2!
- GPT-3 Creative Fiction § BPEs
- https://x.com/theshawwn
- gpt-2#gpt-2-1-5b
- BPE Blues
- Stream Shawn Presser
- RNN Metadata for Mimicking Author Style
- CTRL: A Conditional Transformer Language Model For Controllable Generation
- 2019-12-07-gwern-shawnpresser-gpt2-music-abccombined-nospaces.tar
- 2019-12-04-gpt2-abc-alldata.tar.xz
- Yn Bollan Bane (Jig) on The Session
- 2019-11-09-ynbollanbane.txt
- WaveNet: A Generative Model for Raw Audio
- Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders
- The challenge of realistic music generation: modeling raw audio at scale
- Table 1: Maximum Path Lengths, Per-Layer Complexity and Minimum Number of Sequential Operations for Different Layer Types. _n_ Is the Sequence Length, _d_ Is the Representation Dimension, _k_ Is the Kernel Size of Convolutions and _r_ the Size of the Neighborhood in Restricted Self-Attention
- Efficient Attention: Breaking The Quadratic Transformer Bottleneck
- Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- TensorFlow Research Cloud (TRC): Accelerate your cutting-edge machine learning research with free Cloud TPUs
- Accidentally Quadratic
- Image GPT (iGPT): We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples
- iGPT: Generative Pretraining from Pixels
- Reformer: The Efficient Transformer
- T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- The Lakh MIDI Dataset V0.1
- https://composing.ai/dataset
- https://www.reddit.com/r/WeAreTheMusicMakers/comments/3ajwe4/the_largest_midi_collection_on_the_internet/
- midi2abc: Program to Convert MIDI Format Files to ABC Notation
- Abc:standard:v2.1 § Commenting
- 2020-04-12-gwern-gpt-2-117m-midi-30588051.tar.xz
- AI Dungeon 2
- 2020-04-18-gpt2-117m-midi-samples.txt
- https://news.ycombinator.com/item?id=22981634