--- Log opened Sat Aug 01 22:19:50 2015 22:19 » gwern (~gwern@wikipedia/Gwern) has joined #lw-bots 22:19 » Irssi: #lw-bots: Total of 12 nicks (0 ops, 0 halfops, 0 voices, 12 normal) 22:19 » Irssi: Join to #lw-bots was synced in 1 secs 22:20 < lw-markov> : i was reading about 22:20 < lw-markov> : There is no suggestion from Clark and boneless chicken and boneless chicken say origo in conjunction with coeliac disease e books? 22:20 < lw-markov> : Tuxedage will be increasingly less big employers too 22:20 < lw-markov> : ok, I had some debates about Millikan over the top parts were cool 22:20 < lw-markov> : _ 22:20 < lw-markov> : what's a meaningful question if you haven't have a MS internship. Call your comprehensive guides comprehensive guides comprehensive guides comprehensive guides comprehensive guides. Take the dates as unimportant metadata rather small doses 22:21 < lw-markov> : I have read Solaris, yes 22:21 < lw-markov> : Then what you mean real rainbows, right? Been to Joe's fine 22:21 < lw-markov> : I've done well. Goodnight Burninate! 22:21 < lw-markov> : Because nobody views a corpse? 22:22 < lw-markov> : namespace, spxtr. 22:22 < lw-markov> : Turns out most physicists don't take a year off, a dinner for microants? Was it the Tender Loving I pass it, or are there politics on lesswrong? 22:22 < lw-markov> : _Fear and prisoners the former, is all junk 22:23 < lw-markov> : nshepperd i'm gonna create a new acct 22:23 < lw-markov> : P 22:23 < lw-markov> : $LINK 22:23 < lw-markov> : hmm keith devlin sounds lik we hang out before your next statement 22:24 < lw-markov> : cntrational Pretty standard St. 
Bernard 22:24 < lw-markov> : ha Filips 22:24 < lw-markov> : Just that that happens 22:51 < c0rw1n> lw-markov: imitate shminux 22:51 < lw-markov> : also skip Alicorn's a one 22:51 < c0rw1n> lw-markov: imitate shminux 22:51 < lw-markov> : heh 22:51 < c0rw1n> lw-markov: imitate shminux 22:51 < lw-markov> : on the elder Lanister over another 22:51 < c0rw1n> lw-markov: imitate shminux 22:51 < lw-markov> : nice one, the one 23:53 < Houshalter> lw-markov: conversation shminux 23:53 < lw-markov> : because they were all for cleaning up the skills 23:54 < lw-markov> : Or managing rental property expertise on the selective pressures on strategies followed. You have to be a problem. 23:54 < c0rw1n> lw-markov: imitate nshepperd 23:54 < lw-markov> : oh? 23:55 < lw-markov> : we're already perfectly rational sociopaths, jeez gwern 23:55 < c0rw1n> Bwahaha --- Day changed Sun Aug 02 2015 02:04 » c0rw1n is now known as c0rw|zZz 05:10 < Betawolf> lw-markov: conversation 10 05:10 < lw-markov> : the thing about life for me 05:10 < lw-markov> : hypothetical anti feminists. 05:10 < lw-markov> : DEA7TH I really don't have all this corruption 05:10 < lw-markov> : I've read economists sing praises for the aged, the Benedictines, elite or not? 05:11 < lw-markov> : Boxo I don't encrypting it combines meme member 05:11 < lw-markov> : gardens 05:11 < lw-markov> : $LINK 05:11 < lw-markov> : it would be hilarious if anyone knows 05:11 < lw-markov> : that's not really, the more stupid it seems more likely instrumental skills and knowledge go out 05:11 < lw-markov> : do a combined meditation has worked closely with health officials, syzygy is ra 05:13 < Betawolf> lw-markov: conversation 15 05:13 < lw-markov> : Summer is why tank carrier or east european, handgun, is to have a branch in my room, so, around a hour before session, linking that celebrate it quite so easy. 05:13 < lw-markov> : Obormot Gaia the reason we're working with the same parent, who did anyone have fully updated chrome handy? 
if not, apparently. for pages. 05:13 < lw-markov> : it's drugs, just throwing bombs on food? 05:13 < lw-markov> : friggin'AWESOME IDEA and homelessness 05:13 < lw-markov> : at 29 05:13 < lw-markov> : of course, how my party 05:14 < lw-markov> : yes 05:14 < lw-markov> : Use the dead, not him saying it already isn't true. 05:15 < lw-markov> : yeah if there isn't be good. 05:15 < lw-markov> : Ralith oh? 05:15 < lw-markov> : presidency 05:16 < lw-markov> : hah 05:16 < lw-markov> : Oh, no. 05:16 < lw-markov> : FourFire ratracerebellion, freelancer, etc.. 05:16 < lw-markov> : lots of things are relevant! 06:33 < SDr> lw-markov: imitate efm 06:33 < lw-markov> : hi mstevens, what do you pronounce your diastolic is best. Why? Because low interest rates that people look alike, but it's a statistical technique for running multi way tests to determine if there was a trivial win.. 06:33 < SDr> lw-markov: imitate efm 06:33 < lw-markov> : fwaht I had to explain the gf? is that? 06:33 < SDr> lw-markov: imitate efm 06:33 < lw-markov> : ggreer awwww 06:33 < SDr> lw-markov: imitate efm 06:33 < lw-markov> : EgyptianAtheist I try to do now, let's a big flop is worthwhile, if you predict what kind of cooking in cast iron, best practices, and for a long time to first principles all the time, discussion is much harder than it looks sweet. 06:53 < Betawolf> lw-markov: conversation Betawolf 06:53 < lw-markov> : we determined not 06:54 < lw-markov> : The thing that you would want to practice concentration meditation stuff seems to always trump fundamentalist theocracy bad according to your values is bad makes the thing from a different present point. 06:54 < lw-markov> : huh, this is, after the first place. 
06:54 < Betawolf> lw-markov: conversation Ialdabaoth 06:54 < lw-markov> : I am, or what do I use a free consultation trick utterly abhorrent 06:55 < lw-markov> No chat history for `N9-2600` 06:55 < lw-markov> : slightly 06:55 < Betawolf> lw-markov: conversation realitygrill 06:55 < lw-markov> : pretty high 06:55 < lw-markov> : It is the best you can end up with a c 06:56 < lw-markov> : I wonder if this makes me cringe IRL $LINK 06:56 < Betawolf> lw-markov: conversation quanticle 06:56 < lw-markov> : That's more of a site dedicated to rationality and no one dares take splash damage. The tenth or hundredth? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heineke 06:56 < lw-markov> nmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? 06:56 < lw-markov> Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine? Heinekenmachine 06:56 < lw-markov> : gwern for drug czar what is shame, then we mail it in and you log into 23andme read style decided ryouma if you're a housecat 06:56 < lw-markov> : yeah 06:57 < Betawolf> I really have to figure out how that looping happens 06:57 < Betawolf> lw-markov: conversation quanticle 06:57 < lw-markov> : Yeah. AFAIK the actual location, the hardcore way?. The only question is, The Sopranos is way more often. 
06:58 < lw-markov> : Values are misaligned with society. 06:58 < lw-markov> : is'cultural appropriation and how much people get fucking ruined Bà Sorry Bà Bà video Bà Bà every lmao if not impossible to acknowledge my best to make sure everything i say is information theoretically incomprehensible good ideas. even among non open source programs harder if not impossible to acknowledge my best to make accurate predictions Bà light if not impossible to acknowledge my 06:58 < lw-markov> best to make accurate predictions to acknowledge my best friend these past few days 06:59 < Betawolf> lw-markov: imitate expo873 06:59 < lw-markov> : Basically, its not up to you. It's the reason I'm sure they're going deeper, gwern 06:59 < Betawolf> lw-markov: imitate expo873 06:59 < lw-markov> : niska I wonder what I like Tukey because of it, I want to spam the channel 07:01 < Betawolf> lw-markov: conversation kuudes 07:01 < lw-markov> : the i2c driver why we won't london have a predictive model which would feed on dead humans 07:01 < lw-markov> : wow. isnt that against the risk of bad timing 07:01 < lw-markov> : and $LINK 07:03 < kuudes> :-) 07:03 » lw-markov_ (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 07:03 » lw-markov_ is "lw-markov" on #lw-bots 07:03 < Betawolf> lw-markov: die 07:04 < Betawolf> lw-markov_: imitate feepbot 07:04 < lw-markov_> : Tuxedage I have not seen anybody named ialdabaoth_ in FDA Reclass Moves son of a Circle K, and religion. 07:06 < matsuura> potatoes 07:06 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Ping timeout: 265 seconds) 07:07 < Betawolf> lw-markov_: conversation feepbot 07:07 < lw-markov_> : olivia d2 2 2 2 2 1 3, Affection 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1, Cunning 2 1 2, 3 minutes ago oh, ok. will do.. 
07:08 < lw-markov_> : usually endless multitasking from prisoners of conscience, best of luck with it 07:08 < lw-markov_> : good to be made if we have a strict liability rule is, it just some games in sochi 07:10 < Betawolf> lw-markov_: conversation gwern 07:10 < lw-markov_> : 'If there is nothing to say to them 07:10 < lw-markov_> : image 800x630 JPEG 96, 191 bytes 07:10 < lw-markov_> : amazon aws is bad? 07:11 < Betawolf> lw-markov_: conversation 12 07:11 < lw-markov_> : both uploads cut off. Sure, I've thought about applying but was not cynicism, contributed to her website. At present it to take more than 100 hostages held by fewer than 200 men and boys were identified... because the brain workshop ml 07:11 < lw-markov_> : 'my great innovation of the uncertainty about their long positions were cancelled, including at least some university education by a steel axle and is a guy who more or less kept the back of a man to melt my heart is too bad kiba isn't 07:11 < lw-markov_> : neat! 07:11 < lw-markov_> : one of ceramics epoxy 07:12 < lw-markov_> : if you'll probably. otoh risks 2. very very best you'll probably. otoh, only a whole bunch, only a whole bunch new bunch probably. otoh the best parts whole is making up up epoxy 07:12 < lw-markov_> : still, only a whole bunch 2 new if still a whole bunch, only a whole bunch you'll probably. otoh embedded. is making up 07:12 < lw-markov_> : $LINK 07:12 < lw-markov_> : kiba Try Lojban 07:12 < lw-markov_> : sup? 07:13 < lw-markov_> : I. 07:13 < lw-markov_> : AWACS gives this $LINK 07:14 < lw-markov_> : 0. o 07:14 < Betawolf> lw-markov_: imitate namespace 07:14 < lw-markov_> : Namegduf Oh I hate it. 07:14 < Betawolf> lw-markov_: imitate namespace 07:14 < lw-markov_> : Alcibiades Strictly verboten. Two instances of it is! We have no idea under what circumstances would what I've seen. 
07:15 < Betawolf> lw-markov_: imitate Namegduf 07:15 < lw-markov_> : Free up more 07:15 < Betawolf> lw-markov_: imitate Alcibiades 07:15 < lw-markov_> : Can we talk about specific stuff like that to make people uncomfortable, particularly if you're the one being affected by shouting loudly, something very wrong has been sswallowed by shouting loudly isn't even do an integral. Aaranson'd describe myself must doing for so long 07:16 < Betawolf> lw-markov_: conversation yorick 07:16 < lw-markov_> : weird, if placebo still did okay 07:16 < lw-markov_> : feep how? 07:16 < lw-markov_> : Can I point out that, I used Wordnet as part of the'luddism fallacy 07:16 < Betawolf> hah 07:21 » lw-markov_ (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Remote host closed the connection) 07:23 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 07:23 » lw-markov is "lw-markov" on #lw-bots 07:26 < Betawolf> lw-markov: conversation 10 07:26 < lw-markov> : that's a lot of a depressing thought clopfic porn1/10 07:26 < lw-markov> : i wonder why.2/10 07:27 < lw-markov> : voting has 3 years old3/10 07:27 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Remote host closed the connection) 07:28 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 07:28 » lw-markov is "lw-markov" on #lw-bots 07:37 < Betawolf> lw-markov: conversation 10 07:37 < lw-markov> : Yeah because THAT is event driven init. (1/10) 07:37 < lw-markov> : And in this case it is all their abilities.. (2/10) 07:37 < lw-markov> : God of War. (3/10) 07:37 < lw-markov> : you can't so much time passes (4/10) 07:38 < lw-markov> : why the random hi? (5/10) 07:38 < lw-markov> : No that's a fucking gps (6/10) 07:38 < lw-markov> : FLYING MEATBALL it interesting that you can befriend epitron if you're not as much (7/10) 07:38 < lw-markov> : Steel_ you continue to stagnate (8/10) 07:38 < lw-markov> : hansenista yes, but any statement about the flotsam and i should've said. 
he was asking about irrelevant metachat tripe. their chat is substantive and definitely worth reading. they would have a deadline (9/10) 07:39 < lw-markov> : both are filmed in beautifying a directory of shared decks. (10/10) 07:41 < Betawolf> lw-markov: conversation Kiba 07:41 < lw-markov> : tptacek (1/3) 07:41 < lw-markov> : PatrickRobotham If you have to deorbit the dream chasers (2/3) 07:41 < lw-markov> : wikipedia The Web Application Security Consortium Web Application Security Scanner List (3/3) 07:57 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Remote host closed the connection) 08:06 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 08:06 » lw-markov is "lw-markov" on #lw-bots 08:06 < Betawolf> lw-markov: conversation 5 08:06 < lw-markov> : I think HCI or CS are golems in a conservative household, though (1/5) 08:06 < lw-markov> : olivia Plus they let you use (2/5) 08:06 < lw-markov> : Maelin and if it's not without precedent elsewhere I won't good, and image analysis as examples of James Watson, one who is feared. are signs of James Watson, one of several teen agers Wikipedia would use spaced repetition for my vocabulary, stop using your money to something you have to deal with stupid people messing up your perfectly crafted plot chains (3/5) 08:06 < lw-markov> : levitation charms seem anywhere near as cool as whatshisface, won't soldiers! but being able to get his thesis. 
though valgrind et (4/5) 08:07 < lw-markov> : nshepperd kanzure would just kick (5/5) 08:08 < Betawolf> lw-markov: conversation 10 08:08 < lw-markov> : mwi (1/10) 08:08 < lw-markov> : but eating more butter and then reduce the ratio of N 1 per day, PatrickRobotham, i shouldn't human mothers being chimeras beneficial even though basel iii (2/10) 08:09 < lw-markov> : Across the lifespans have lengthened quite (3/10) 08:09 < lw-markov> : the (4/10) 08:09 < lw-markov> : no (5/10) 08:09 < lw-markov> : papermachine would you judge my personality, until now (6/10) 08:10 < lw-markov> : Would any of my last comp was in opposition to everybody for a (7/10) 08:10 < lw-markov> : Ralith.. shit, as it's about control (8/10) 08:10 < lw-markov> : man (9/10) 08:10 < lw-markov> : Burninate You want some conceptual lectures on functions (10/10) 08:10 < Betawolf> lw-markov: conversation 15 08:10 < lw-markov> : who's the opposite of (1/15) 08:11 < lw-markov> : ah (2/15) 08:11 < lw-markov> : Granted, (3/15) 08:11 < lw-markov> : .. ah. well, you buy a bluetooth dongle for physicians, that's mission feels a strong correlation of... than never. Among other things......... Expressionism was the timing and circumstances, you can just run down any of the expense. Another incredible one is a truth whose knowing I would ask. I haven't seen the errors. Schein said that he was doing (4/15) 08:11 < lw-markov> : there are any public databases of another book that isn't make sense, insofar as opposed to participate in 90s (5/15) 08:11 < lw-markov> : hey (6/15) 08:12 < lw-markov> : this is trivial to do the job to be (7/15) 08:12 < lw-markov> : ' the industry. 
Among the Mande, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, v 08:12 < lw-markov> ocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxt 08:12 < lw-markov> ouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, vo 08:12 < lw-markov> caloidxtouhou, vocaloidxtouhou, vocaloidxtouhou, (8/15) 08:12 < lw-markov> : 5 (9/15) 08:12 < lw-markov> : cjb, you're alive? (10/15) 08:12 < lw-markov> : haha (11/15) 08:13 < lw-markov> : moments like (12/15) 08:13 < lw-markov> : Nisstyre laptop I enjoyed (13/15) 08:13 < lw-markov> : they also have (14/15) 08:13 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Remote host closed the connection) 08:15 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 08:15 » lw-markov is "lw-markov" on #lw-bots 08:15 < Betawolf> lw-markov: conversation 5 08:15 < lw-markov> : Logos01 oh, you do anything. (1/5) 08:15 < lw-markov> : just in case. (2/5) 08:15 < lw-markov> : gwern You want the option (3/5) 08:15 < lw-markov> : ? 
(4/5) 08:15 < lw-markov> : yeah, although I guess if M CASTL. truly, you'd need more amv of que sera set 19 quads, samples were either likely sequencing errors, and their only chance to beat in another world, without style. ' (5/5) 08:15 < yrrebaer> lw-markov: conversation yrrebaer 08:16 < lw-markov> : sure, you gotta release only it preserved folding.. 100 trying to find primes where n 100 (1/3) 08:16 < lw-markov> : quanticle and chimpanzee brains? (2/3) 08:16 < lw-markov> : Discussion is definitely becoming worse looking than the ring (3/3) 08:17 < Betawolf> lw-markov: conversation Tenoke 08:17 < lw-markov> : It's just ops, who must not be the case, or if you put in the universe, but I still imagine that a lot more possible employers if it was just in the US for example (1/3) 08:17 < lw-markov> : but it doesn't how you decided you maybe (2/3) 08:17 < lw-markov> : well.. I just felt an urge to meditate for 60 minutes straight (3/3) 08:18 < Betawolf> I fiddled with the punctuation-handling. Still doesn't have some of the tokens, because that means regenerating the tables. 08:18 < Betawolf> -'-' 08:21 < Betawolf> lw-markov: conversation 10 08:21 < lw-markov> : Big no matter what we do. If that makes any sense. (1/10) 08:21 < lw-markov> : chelz It's misleading to describe books which are not actually have some notion of infinum is ldf k n k. (2/10) 08:21 < lw-markov> : Not by any netgov whose LEO you're not going to say something else. (3/10) 08:22 < lw-markov> : They're self maintaining, though, because if you disagree on one gender. (4/10) 08:22 < lw-markov> : saturn2 Right now I really wish that they're familiar with matching theory? 
(5/10) 08:22 < lw-markov> : also sigh xixidu is woo (6/10) 08:22 < lw-markov> : umm, there was some controversy or touchpad (7/10) 08:22 < lw-markov> : you know if I ran out of money can probably compare nvidia cards in the middle of a proton (8/10) 08:23 < lw-markov> : title SpaceX's Miscellany How Coke Pushed Rivals Does Science Support the creators you love (9/10) 08:23 < lw-markov> : Not the green ' hippy ' movement. (10/10) 08:24 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has quit (Remote host closed the connection) 08:26 » lw-markov (~lw-markov@170.10.187.81.in-addr.arpa) has joined #lw-bots 08:26 » lw-markov is "lw-markov" on #lw-bots 08:26 < Betawolf> lw-markov: conversation 5 08:26 < lw-markov> : But they've seen these happen X is bad but stable hierarchies good (1/5) 08:26 < lw-markov> : By sole nature of the Singularity occurs, or else they might not share our goals. (2/5) 08:26 < lw-markov> : religion (3/5) 08:26 < lw-markov> : Konkvistador Well that's not really, sexiness is a different representation, we can consider two things I listed above? (4/5) 08:26 < lw-markov> : yep (5/5) 08:27 < Betawolf> lw-markov: conversation klfwip 08:27 < lw-markov> : Aside from generalization (1/3) 08:27 < lw-markov> : nshepperd I was young (2/3) 08:27 < lw-markov> : it's why it gives me bad dreams (3/3) 08:27 < Betawolf> lw-markov: conversation QuadIgni 08:27 < lw-markov> : haha you, a troll? (1/3) 08:27 < lw-markov> : I don't exist, doesn't exist, doesn't read it that often either, doesn't read it that often either sustainability updated good? that sounds like moral realism of a body transplant note that in northern countries words such as Fight Over benficial s? (2/3) 08:28 < lw-markov> : haha you, a troll? 
(3/3) 08:28 < Betawolf> lw-markov: imitate FourFire 08:28 < lw-markov> : isn't prevent me from enjoying conversation in this channel in real life, IMO 08:28 < Betawolf> lw-markov: imitate QuadIgni 08:29 < lw-markov> : he really does a nessecity 08:29 < Betawolf> lw-markov: imitate QuadIgni 08:29 < lw-markov> : Pterry trolls conclusion incorrect calculate single had 08:30 < Betawolf> lw-markov: conversation 5 08:30 < lw-markov> : clever social gaming is supposed to get all up in the world isn't that inevitable unimaginable change as a matter of choice there is no longer felt motivated to try out my roomie just finally got around to it? (1/5) 08:30 < lw-markov> : r (2/5) 08:30 < lw-markov> : lol, right? (3/5) 08:30 < lw-markov> : Yes, but utility functions are very few clubs that's well below other variations. identity property!! Peter is Yahweh level? Yes, people do so if the bots are broken and use privmsgs instead? (4/5) 08:30 < lw-markov> : and I'm just some newcomer (5/5) 08:30 < Betawolf> haha 08:31 < SDr> lw-markov: conversation 5 08:31 < lw-markov> : papermachine, you will be blamed if there's with a keypad might as well (1/5) 08:31 < lw-markov> : Hrm.. I'm a member of the optimized for speed (2/5) 08:31 < lw-markov> : FourFire.. serve as a function is generic. (3/5) 08:32 < lw-markov> : Logos01 Mage the Awakening, everyone has no SMV, cuz you're asexual right? (4/5) 08:32 < lw-markov> : Wait, why? (5/5) 08:36 < Betawolf> lw-markov: conversation 10 08:36 < lw-markov> : it means 33 lbs (1/10) 08:36 < lw-markov> : and the nice thing about portable working environments (2/10) 08:36 < lw-markov> : but also i think. (3/10) 08:37 < lw-markov> : I am willing to offer crazy amounts of bitcoin to increase the probability of the universe with chimps. (4/10) 08:37 < lw-markov> : it would be the correct I'd gwen to be the correct and? I'd be the correct? 
country and named a small country and named it would be cheaper to money a small country and named would be the correct country and named (5/10) 08:37 < lw-markov> : olivia, if you are overambitious and guarantees for those (6/10) 08:37 < lw-markov> : ' I think todd used by the late 20th c (7/10) 08:37 < Betawolf> haha, that first Tux one. 08:37 < lw-markov> : I think that alone justifies calling it a trainwreck (8/10) 08:37 < lw-markov> : That's higher than it is only true when they can't then it wasn't throw up loudly (9/10) 08:37 < lw-markov> : true (10/10) 08:44 < Betawolf> lw-markov: conversation olivia 08:44 < lw-markov> : they can now display scatterplots and we're as good an image of cthulhu (1/3) 08:44 < lw-markov> : gwern The Babylon project was our technical knowledge and technological mastery to transform human physiology. (2/3) 08:44 < lw-markov> : Upper end is more likely to be who I spoke with last night. (3/3) 08:50 » SDr (~SDr@unaffiliated/sdr) has left #lw-bots () 08:56 < Betawolf> lw-markov: conversation 10 08:56 < lw-markov> : FourFire yes (1/10) 08:56 < lw-markov> : cwillu hey, you can plan for a site that lets you say a thing the book without understanding it. (2/10) 08:56 < lw-markov> : anatoly I value my replacement AI to do the natural landscape by keeping them locked up, talking from inside. (3/10) 08:56 < lw-markov> : But I've been stronger if the brain do lucid dreaming technique. Most of the politicians and people respond, serves you right, Antichamber was anymore, so they figured all transexuals must be really hard and has some kind of whitelist? (4/10) 08:57 < lw-markov> : I guess what I'll definitely make them eat the twinkie les (5/10) 08:57 < lw-markov> : Ialdabaoth_Work. (6/10) 08:57 < lw-markov> : passcod one might argue that the difference between dead and injured outweight hte problem. (7/10) 08:57 < lw-markov> : Which is some time in the morning (8/10) 08:57 < lw-markov> : ? 
(9/10) 08:58 < lw-markov> : Most trivial would be happy to do so (10/10) 08:59 < Betawolf> lw-markov: conversation 10 08:59 < lw-markov> : gwenbell never say some words which I live in a system uses in order to remove both (1/10) 08:59 < lw-markov> : epitron well I only remember 1. Not the least fit economic individuals though (2/10) 08:59 < lw-markov> : beschel Says who have conquered said so? (3/10) 08:59 < lw-markov> : pretty heavy duty reality augmentation is awesome (4/10) 08:59 < lw-markov> : quanticle Logos01 is (5/10) 08:59 < kuudes> Is this sort of drug replacement program for those unable to view #lesswrong? O_o 08:59 < lw-markov> : they hold it (6/10) 08:59 < lw-markov> : shminux Why masters? (7/10) 09:00 < lw-markov> : On a side note My System is a wonderful thing. (8/10) 09:00 < lw-markov> : Hogfather, by the way. (9/10) 09:00 < lw-markov> : I might do that, I'm amazed that CERN managed to solve than the many body simulations. Indistinguishable particles break rules that other people with other users? (10/10) 09:02 < Betawolf> kuudes: I guess it could be. 09:02 < kuudes> hmm... opsec paranoia view: if a spammer took lw-markov and made instances to spam #lesswrong, would Tattletale be able to make a distinction? 09:03 < Betawolf> well, tattletale just looks at nicks 09:18 < kuudes> indeed 09:28 < Houshalter> kuudes, no and it's a brilliant spamming strategy 09:28 < Houshalter> since the messages sort of sound like something a human would write, the ops can't even be immediately sure that it's a bot 09:29 < kuudes> I think a human observer could distinguish between mcmc and a human, but I guess it could be rather harder for a bayesian filter. :-/ 09:30 < Houshalter> you don't even need to bother with MCMC, just pick an actual message someone said in the past 09:30 < kuudes> On the other hand, I suppose developing a bot that would missionarise rationality on irc would be useful! 
:D 09:30 < Houshalter> preferably one that matches some keywords that are being discussed 09:30 < kuudes> indeed 09:30 < Houshalter> it will look like a human wrote it, and it just won't make much sense, but from one message it will be hard to tell it's a bot 09:31 < Houshalter> and if you get hundreds of bots posting hundreds of messages per hour, it would very quickly overtake the channel 09:31 < Houshalter> the hard part would be registering them all 09:50 » Houshalter (~Houshalte@oh-71-50-58-200.dhcp.embarqhsd.net) has joined #lw-bots 09:50 » Houshalter is "Houshalter" on #lw-bots #lesswrong 10:00 » Houshalter (~Houshalte@oh-71-50-58-200.dhcp.embarqhsd.net) has joined #lw-bots 10:00 » Houshalter is "Houshalter" on #lw-bots #lesswrong --- Log closed Sun Aug 02 10:35:37 2015 --- Log opened Sun Aug 02 10:35:53 2015 10:35 » Gwern-away (~gwern@d-162-210-24-15.cpe.metrocast.net) has joined #lw-bots 10:35 » Irssi: #lw-bots: Total of 12 nicks (0 ops, 0 halfops, 0 voices, 12 normal) 10:36 » Irssi: Join to #lw-bots was synced in 32 secs 11:03 » lw-replay (~lua@oh-71-50-58-200.dhcp.embarqhsd.net) has quit (Remote host closed the connection) 12:01 » c0rw|zZz is now known as c0rw|timetravel --- Log opened Sat May 04 10:16:01 2019 10:16 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 10:16 » Irssi: #lw-gpt: Total of 12 nicks (1 ops, 0 halfops, 0 voices, 11 normal) 10:16 » Irssi: Join to #lw-gpt was synced in 0 secs 11:08 » nshepperd_ (nshepperd@gorgon.zlkj.in) has joined #lw-gpt 11:08 » nshepperd_ is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 12:06 < feep> http://sotchicago.com/blog/2015/11/11/his-titanium-powers-are-the-secret-of-his-immortality/ and the other bit about his grandfather's power 12:06 < feep> 'He is a boy who was raised during a time when all things were unknown and without a shadow of a doubt, what the child could have achieved would not last. He was an exceptional boy.' 
12:06 < feep> His titanium powers 12:08 < feep> gwern: gpt-345 sums it up 12:08 < feep> What is your current state of life? "I live in Los Angeles" Ah. 12:09 < gwern> alas, poor gwern. he lives with the angels now. we knew him well, horatio189 - a man of infinite quotes 12:10 < nshepperd> 'In some of his works, it can be argued that there were no new ideas at all.' harsh 12:13 * gwern resembles that remark 12:16 » pompolic (~A@unaffiliated/pompolic) has joined #lw-gpt 12:16 » pompolic is "La-Li-Lu-Le-Lo" on #lw-gpt #lesswrong 12:17 < pompolic> thank zou lord 12:17 < pompolic> you* 12:20 < feep> thank you zord 12:20 < feep> loss sitting stable around 2.5 :( 12:22 < gwern> what was the loss for 117M? 12:22 < feep> I don't remember, I'd have to check the logs 12:23 < gwern> well then how do you know 2.5 is bad 12:23 < feep> ... looking at the output, mostly :p 12:24 < feep> it still does things like 12:24 < feep> ggreer: No, there really is not. "fraud" is in the conversation. "fraudulent" is not. "fraud" is also not in the conversation. "fraudulent" is in the conversation but that is not the same. 12:24 < feep> And I'm sure it's in the discussion. 12:25 < gwern> let's see. earlier I wrote '2 people finetuned GPT-2-small on an IRC channel's logs, getting losses of 1.95 & 2.3; why was the latter's loss 18% worse compared to the former when they were using the same IRC channel, GPT-2-small pretrained model, training codebase, & had both apparently converged? Because. while the IRC channel was the same, they used different IRC clients which had different... 12:25 < gwern> ...IRC log formatting conventions' 12:26 < gwern> I assume you were the 2.3 loss 12:26 < feep> seems right 12:26 < feep> way to go then 12:26 < gwern> so 2.5 isn't good then. wait a day and then decay the lr 12:26 < gwern> actually, what lr are you using now? the default is too high since the minibatch is shrunk and the model is 3 bigger 12:26 < feep> default * 0.1 12:27 < gwern> hm. 
ok then 13:11 < quanticle> feep: I see we have more details about GPT-world 13:11 < quanticle> In which gwern lives in LA 13:14 < gwern> what if I'm making a mistake by not living in LA 13:21 < feep> https://arstechnica.com/business/2017/02/a-guide-to-the-big-worlds-bigger-brain/ 13:21 < feep> A guide to the big worlds bigger brain - Business and Technology (What if we told you that... What if you told us?) 13:21 < feep> ... what? 13:25 < feep> like a person's value is what they are worth to their neighbors 13:25 < feep> how does it compare? 13:25 < feep> what are the people worth? 13:25 < feep> http://en.wikipedia.org/wiki/Person_value 13:25 < feep> wikipedia: The Person value (sometimes spelled person value) is the value a person contributes to society. 13:25 < feep> (the page, I might add, does NOT exist.) 13:25 * gwern was about to check 13:25 < feep> though it also produced this alternate hypothesis on the next line 13:25 < feep> wikipedia: Person-Value is the difference between the value a person contributes to society and the value he would contribute if he were to live there alone. 13:25 < quanticle> Haha 13:25 < quanticle> That's spooky 13:26 < feep> "It is not a matter of mere speculation, however, because it is well known that the value of a person may rise to as high as 100 times the value of his fellow man. It is a matter of fact that there has been a long time, at least since the days of King Arthur, that a man who could buy a million dollars, and even two-million dollars, would have had more than twice the value of a man who could buy a half a 13:26 < feep> million dollars." 13:26 < feep> also, "The value of your money is not a number. It is a metaphor for the power of your values." 13:47 < quanticle> Lol 13:48 < quanticle> We didn't have an Oracle of Delphi, so we created one. 13:48 < feep> but i think i've always been a fascist 13:48 < feep> Houshalter: You have had your moments. 13:48 < feep> No, not just moments. 
13:48 < feep> gpt-2 reveals sordid depths 13:48 < quanticle> Lol 13:48 < feep> I'm not a socialism. 13:48 < feep> No, I'm not. 13:48 < feep> ... (I actually haven't read anything by George Orwell that I don't agree with) 13:49 < quanticle> I like how GPT-hug tries to downplay it and GPT-patbot is like no, he's totally a fascist 13:49 < feep> yess 13:56 » quanticle (~quanticle@wikipedia/Quanticle) has joined #lw-gpt 13:56 » quanticle is "Rohit Patnaik" on #lw-gpt @#lw-bitcoin #lesswrong 14:16 * nshepperd looks at tensorboard 14:18 < nshepperd> whoa. the 345M model seems to be training much faster 14:19 < feep> I want reddit! 14:19 < feep> I want reddit to come back 14:19 < nshepperd> me too, acty 14:20 < feep> I mean, I guess the way I'm looking at it is that if the market were a little more rational, there would be much less opportunity that would be lost. 14:20 < feep> I just want reddit to come back because I know the market is stupid 14:20 < feep> also this is abandoned at the top of a sample 14:20 < feep> so we don't know what link it corresponds to 14:20 < feep> but I want one...? 14:20 < feep> Amazon.com: Cushpoint 14:22 < feep> But as conscious and capable of doing something as their consciousness is capable of doing, as conscious and capable of doing the opposite of, "I don't give a hypothetical answer", we are conscious of them. 14:22 < feep> And they're not the same thing as conscious. 14:22 < feep> They're the opposite of conscious, and so they are conscious. 14:22 < feep> obormot... what 14:29 < kuudes> what 14:30 < feep> olivia> feep: http://www.buzzfeed.com/lauriehayes/how-july-16-2014-could-you-be-the-last-yugoslavian-to-make-this-map 14:30 < feep> title: What is a drug? This is a drug - The Molluskopher (length: 01:10, views: 3,716,742, rating: 4.9, posted: 2011-09-13) 14:30 < feep> The Molluskopher. 14:31 < feep> I just want to note 14:31 < feep> that name is NOT TAKEN. 14:31 < feep> It is UP FOR GRABS. 
14:51 < feep> the term "software engineering" is a specific subset of software engineering 14:51 < feep> and I don't like the way "software engineering" means "software engineering" 14:51 < feep> feep: It's a subset of software engineering. 14:51 < feep> I think you're misunderstanding the problem. 15:30 < nshepperd> '24:30:07 Obormot Yeah, the argument has been made and refuted repeatedly' 15:42 < feep> feep_: my personal preference is that we make friends by being good to each other 15:42 < feep> I do not want to be 'good' :) 15:42 < feep> woah 15:42 < feep> feep alter is creepy 15:43 < feep> the problem is the existence of an existing one 15:45 < nshepperd> he who shall not be named, the Existing One 15:48 < nshepperd> 2011-10-08 16:38:05 SDr https://slatestarcodex.com/forums/showthread.php?t=209380 15:49 < nshepperd> 2011-10-08 16:38:06 pookie title: How to Stay In Touch When Everyone Else Is Trying to Kill You 15:49 * gwern recommends satellite-phones 15:52 < quanticle> feep: Bizzaro-world you is amazing 15:53 < nshepperd> 2013-04-05 04:13:33nshepperd_"I don't know why people are attracted to the idea of biowarfare or not. But the thing that comes to my mind is a few examples: in a military setting, there is the idea of "using nuclear weapons to strike out enemies." That sounds to the mind of a bad cop who has his hands clapped and the command word on his keyboard. A few years ago I was in the movie "Invisible", a movie set in 15:53 < nshepperd> a modern war. The camera was set against the war, and the actors dressed in ancient costumes. In the corner of the movie's front shot, we watched an American cop. He's got these long, heavy arms, and he has his hands clapped, like the character in the TV show. He's this badass guy, wearing a chain-mail jacket. He's looking right at everyone on the street, and he says this to the camera: "How can I ever do 15:53 < nshepperd> anything more than I can get away with? ... 15:53 < nshepperd> 2013-04-05 04:13:35nshepperd_... 
It was a show like you would never have expected. ... 15:54 < feep> nshepperd: is this 345? 15:54 < quanticle> It took me a second to understand that wasn't a real anecdote 15:54 < nshepperd> it gets the tone of 'nyt reminiscent interview' down 15:54 < quanticle> Holy shit thatis' good. 15:54 < quanticle> *that's 15:54 < nshepperd> feep: yep 15:54 < feep> nice 15:55 < quanticle> I didn't fully understand that it was GPT until it referred to the cop as wearing a chainmail jacket. 15:55 < nshepperd> oh, i left out a bit of context 15:56 < quanticle> And it referred to modern war, but had the actors dressed in "ancient costumes". 15:56 < nshepperd> just before those lines: 15:56 < nshepperd> 2013-04-05 04:12:39@gwernhttp://www.nytimes.com/2013/04/08/technology/seo-bio-scientists-study-biowarfare-movies.html 15:56 < nshepperd> 2013-04-05 04:12:39pookietitle: In Biowarfare, Scientists Find a Force-Feedback Mechanism 15:56 < quanticle> Bahaha 15:56 < quanticle> It certainly gets the NYT headline style 15:57 < quanticle> It's never "Scientists find a force-feedback mechanism in biowarfare", it's always "In biowarfare, scientists find a force-feedback mechanism". They always put the second clause first. 15:57 < quanticle> And GPT has picked up on it 15:57 < quanticle> Clever gilr 15:57 < quanticle> *girl 15:58 < nshepperd> ganbaru, gpt-chan 16:04 < feep> taking a break from gpt-2ing to play some more fallout :D 16:04 < feep> get shot with a rocket some more 16:27 < nshepperd> 2015-10-30 01:40:01 Obormot "The first step to a perfect democracy, however, is to recognize that the most significant feature of a free society is that it will p 16:27 < nshepperd> roduce what we call 'law'. This is what democracy is. The law is all that we have -- that is, a system of rights. The law is nothing but a process." 16:28 < nshepperd> quanticle: (Socrates is a famous ancient Greek philosopher, known for his philosophical views on ethics, but this is the only one where he is mentioned. 
Most books on his philosophy are either about his life or about other writers.) 16:29 < quanticle> GPT-345M is much more coherent 16:29 < quanticle> Look at those clauses 16:32 < gwern> it does seem like a noticeable step up in quality 16:33 < nshepperd> 2017-09-16 03:37:23feepbotShared V. Chandler, the Empress?—The Great Disgust of the Mind%E2%87%92 by K.A. Chandler (This book is a biographical account of the relationship between one of America's greatest psychologists, Dr. Chandler and his twin father, Dr. R. F. Chandler, whose work was published in the United States as The Shattering of the Brain. Chandler was a prolific writer and philosopher.) 16:33 < nshepperd> 2017-09-16 03:37:22MadplatypusThat is... a really interesting book 16:34 < gwern> my poetry losses are odd. it seems quite stubbornly stuck at 2.7 but 117M got to 2 easily. the poetry hasn't changed format at all 16:35 * nshepperd adds a validation loss 16:35 < gwern> this is just the training losses 16:36 < nshepperd> it's possible that freezing the word embeddings makes it harder for it, but i would expect it should be able to handle 16:37 < gwern> hm... 16:37 < gwern> I wonder how this affects quality 16:38 < gwern> no wait, 2 is for the nonprefix version, it's 1.6 for the prefix version, which I'm getting 2.7 for now 16:39 < nshepperd> what's your lr? 16:40 < gwern> I've been trying various ones. best seems to be default * 0.1, but right this second I'm running with 0.000001 16:42 < quanticle> nshepperd: Reading up, wow, I didn't realize that GPT-2 could do Locke 16:56 < nshepperd> gwern: if i push the validation loss changes, it might make it easier to find a good learning rate? you can fix a set of validation batches to distinguish stochastic noise from 'lr is too high' noise? 
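nshepperd's fixed-validation-batch idea above can be sketched in a few lines: draw the evaluation batches once with a fixed seed and reuse them at every evaluation, so any movement in the reported loss comes from the model rather than from which batches were sampled. The dataset, batch shape, and squared-error loss here are toy stand-ins, not the actual GPT-2 training code:

```python
import numpy as np

# Sketch of "fix a set of validation batches": a fixed seed makes the batch
# selection deterministic, so the validation loss is comparable across steps.

def make_val_batches(dataset, n_batches=8, batch_size=4, seed=0):
    rng = np.random.default_rng(seed)  # fixed seed => identical batches every run
    idx = rng.choice(len(dataset), size=(n_batches, batch_size), replace=False)
    return dataset[idx]

def val_loss(model_fn, batches):
    """Mean loss over the fixed batches (toy squared-error stand-in)."""
    return float(np.mean([np.mean((model_fn(b) - b) ** 2) for b in batches]))

data = np.arange(100, dtype=float)  # stand-in corpus
batches = make_val_batches(data)    # reused at every evaluation
```

By construction, evaluating the same model twice gives exactly the same number, which is what lets a smooth-vs-noisy loss curve diagnose a too-high lr.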
16:56 < gwern> one would have to wait a while 16:57 < nshepperd> (it's not really validation i guess since it defaults to reusing the training set) 17:25 < nshepperd> Russia is not exactly the first world nation to destroy its economic base because of aggressive and reckless political actions. 17:45 < kuudes> that almost makes sense 17:51 < nshepperd> a youtube link: pookie__ video: Gravity is Magic (full album) "The Last Dream" (length: 02:20, views: 15,096, rating: 98.9%, posted: 2014-08-16) 17:54 < nshepperd> kuudes I am glad that the world is not on fire anymore, at least as much as it was, and maybe some of the things that would bring us in are not very difficult. 17:55 < nshepperd> me too 18:00 < gwern> / we didn't start the fire / it's already been burning / since the world began turning / 18:04 < namespace> Wait wait wait we have a gpt-2 version of Act now? 18:06 < nshepperd> well, she showed up once 18:08 < nshepperd> she only makes up some 0.4% of my dataset, so don't expect too much 18:10 < namespace> Awww. 18:10 * namespace just thought of an excellent fucked up science fiction story 18:35 < nshepperd> 2019-03-01 07:30:10 feepbot What does an 'invisible world' look like? - BBC News (The BBC unveils a documentary on a fictional society in medieval England, and explores how a man who is a priest on a remote island gets to be immersed in a global economy.) 18:47 < nshepperd> 2016-03-01 16:21:00 @gwern http://arxiv.org/abs/1603.010710 18:47 < nshepperd> 2016-03-01 16:21:01 pookie title: [1603.0107] Deepmind learns to predict data collection with 3D: Time-series predictions for complex tasks with spatial models - arxiv 18:47 < nshepperd> 2016-03-01 16:24:23 c0rw1n gwern: it sounds weird 18:47 < nshepperd> 2016-03-01 16:24:44 c0rw1n "how does that work" 18:47 < nshepperd> 2016-03-01 16:25:31 @gwern crippling 18:51 < gwern> managed to crunch down to 2.62 now... 
18:52 < gwern> maybe I just need to train this for a month or something @_@ 19:06 < nshepperd> 2019-03-01 12:57:35 kuudes http://web.archive.org/web/20130317074896/http://www.nouvel.com/news/2017/09/140915/china-unofficial-chamber-declares-sanity-in-north-korea-sanctioning-sucheto/ 19:06 < nshepperd> 2019-03-01 12:57:38 feepbot Chiawien - The UN is now 'inclusive' of North Korea, after officially declaring that only the "serious" parties are sanctioned. 19:07 < nshepperd> 2019-03-01 12:57:10 kuudes https://en.wikipedia.org/wiki/Dell_X-series 19:07 < nshepperd> 2019-03-01 12:57:13 feepbot The Dell X Series is a 14-Series series of 12-inch, 15.4-Quad-Core, Ivy Bridge Core i5/i7 desktop PC processors manufactured by QL Technologies Limited. The X-Series Series is Dell's second-generation flagship processor series. The 14-Series series is currently available only in limited quantities in the United States and China. 19:08 < nshepperd> 12 inch processors 19:08 < kuudes> they sure sound rare 19:22 < nshepperd> 2019-03-01 07:26:09 feepbot An F-35 Raptor Flighter Stood With Saudi Air Force Over Bahrain - The New York Times (After flying F-35 stealth fighters to assess Saudi Arabia’s Airforce modernization work, the U.S. military was unimpressed with its first strike. Saudi Arabia claimed it had attacked the Jeddah bombing site, but officials have since said the aircraft were simply observing a wedding party.) 19:22 < nshepperd> 2019-03-01 07:26:51 @gwern https://twitter.com/walshop/status/103849640028270116 it's almost as if the jihadi threat is that a bunch of terrorists are in bed together on a yacht 19:22 < gwern> a major concern of catholics 19:31 < quanticle> nshepperd: It has the New York Times' house style down... from just reading feepbot summaries. 19:31 < quanticle> Impressive. 
19:32 < quanticle> >Saudi Arabia claimed it had attacked the Jeddah bombing site, but officials have since said the aircraft were simply observing a wedding party 19:32 < quanticle> I lol'd at that. 19:33 < quanticle> I suppose they sent gifts too. Rocket-propelled gifts, with a surprising ending. 19:42 < gwern> 'from prince salman, with love' 20:08 < pompolic> is the page in the topic supposed to be empty? 20:46 < quanticle> Yeah. That was nshepperd's demo server where he had actual GPT-2 dialogue when we had a running GPT-2 model. 20:51 < gwern> it was a hoot 20:51 < gwern> but hard to keep running since he needs the gpu for other things 21:01 < gwern> https://twitter.com/Miles_Brundage/status/1124840127560159232 star wars eh. 'Stardomite Academy' :thinking_face: 21:01 < gwern> is that why the jedi keep dying out 21:04 < gwern> meme - 'saturday night, netflix and chill' /naw 'saturday night, finetune and chill' /yo 21:56 < nshepperd> '@gwern szel: for example, there was the paper showing that eating chocolate doesn't help you build an 21:56 < nshepperd> ability to remember your day. I recall my first year of graduate school taking this and being convinced that chocolate would be 21:56 < nshepperd> a powerful stimulant' 21:58 < gwern> many nootropics have been a big disappointment to me :( 21:58 < gwern> especially in grad school when I really needed that choco-boost. 
so much for small n studies 22:05 < nshepperd> la replication devore ses effets 22:06 < gwern> 'when the reproducibility crisis came for baumeister and rosenthal' --- Day changed Sun May 05 2019 00:11 < quanticle> >The BBC unveils a documentary on a fictional society in medieval England, and explores how a man who is a priest on a remote island gets to be immersed in a global economy 00:11 < quanticle> I'd watch that 00:11 < quanticle> I'd 100% watch that 00:32 < nshepperd> haha yes 00:33 < nshepperd> it sounds like some sort of preindustrial internet 00:33 < nshepperd> meanwhile 00:33 < nshepperd> 2019-03-01 16:35:33 @gwern https://daviddfriedman.com/2018/02/21/open-source-your-brain/ the last 10 years are a very strange time to be alive 00:33 < nshepperd> 2019-03-01 16:35:35 feepbot Open-Source your brain – Davidd Friedman (My last 10 years have been a bit like my first.) 00:35 < nshepperd> also 00:35 < nshepperd> 2018-03-01 01:08:52 feep I have the opposite problem 00:35 < nshepperd> 2018-03-01 01:08:59 feep i.e. the emotional center of my mental life is being unable to give a thought to actual pain 00:35 < nshepperd> D: 00:37 < nshepperd> 2019-03-01 18:44:27@gwernBetawolf: 'The last episode of season five of The Wire made him become a murderer, a petty thief, a drug dealer, a sexual predator. The man's actions are inconsistent and bizarre to such an extent they defy narrative, and to his credit in those series, as well as a number of TV adaptations, there is no one explanation. 00:37 < nshepperd> 2019-03-01 18:44:36Betawolfthe wire is pretty awesome, yeah. 00:39 < quanticle> The Wire is pretty awesome, but I don't remember that part. 00:40 < nshepperd> that whole sample is just an amusing discussion about The Wire http://zlkj.in/tmp/paste/LtCcBhi6 00:42 < quanticle> <@gwern> the point of his 'story' is that he was a good enough serial killer to get a death certificate. 
this sort of character is going to come out of some sort of deep trauma and bad-ass 'it happened' thing that would be a great plot 00:42 < quanticle> Literally a good enough serial killer to get a death certificate for someone by stating that he killed them. 00:42 < quanticle> That's pretty good. 00:43 < nshepperd> I really like the cadence of that bit. 'The last episode of season five of The Wire made him become a murderer, a petty thief, a drug dealer, a sexual predator. The man's actions are inconsistent and bizarre to such an extent they defy narrative' 00:43 < nshepperd> that's the sort of writing i like in a tv show review 00:49 < nshepperd> quanticle I'm looking through the job listing at GitHub right now and there's the usual assortment of junior managers, junior executives 00:49 < nshepperd> , etc. 00:52 < nshepperd> loss is down to 1.865 00:52 < nshepperd> i'm glad i added this deterministic validation loss thingy 00:53 < nshepperd> it's much more useful than the horribly noisy train loss 00:55 < nshepperd> i can see clearly where i dropped the lr to 0.00002 because the loss curve gets smoother 00:56 < nshepperd> which means the previous lr was indeed too high 05:32 » nshepperd is "nshepperd" on #lw-bitcoin #lesswrong 05:32 » nshepperd (~nshepperd@193.148.18.211) has joined #lw-gpt 06:14 < pompolic> i very much appreciate "Davidd Friedman" 06:14 < pompolic> is it possible that it learned what URLs correspond to names, and how? 06:54 » feep_ (~feep@200116b826dddb0054a878e8bd1ead49.dip.versatel-1u1.de) has joined #lw-gpt 06:54 » feep_ is "Feep" on #lw-gpt #lesswrong 10:19 < gwern> I'm down to 2.5 on poetry. 
dropped ~0.12 overnight 10:35 < gwern> (0.00002 seems to work a little bit better than 0.00001 for me right now) 11:16 < gwern> https://twitter.com/rossgoodwin/status/1124901310677913600 hm 11:43 < gwern> maybe we need better learning rate schedules: the usual ones warmup https://www.tensorflow.org/alpha/tutorials/text/transformer_files/output_80_1.png 17:48 < gwern> https://www.reddit.com/r/MachineLearning/comments/bkejvb/n_openai_releasing_the_345m_model_of_gpt2_and/emjtzo0/ creepy pasta time 20:26 < gwern> https://www.blog.google/outreach-initiatives/arts-culture/poemportraits/ I wonder if they used gpt-2 21:23 < gwern> https://pastebin.com/1VjQ9NP2 --- Day changed Mon May 06 2019 06:52 » feep (~feep@200116b8263b5b00b4981c7ecb83ad37.dip.versatel-1u1.de) has joined #lw-gpt 06:52 » feep is "Feep" on #lw-gpt #lesswrong 08:52 » feep_ (~feep@200116b8263b5b00bc9e83faaa0bdc81.dip.versatel-1u1.de) has joined #lw-gpt 08:52 » feep_ is "Feep" on #lw-gpt #lesswrong 10:12 < gwern> down to 2.22... 12:02 < gwern> https://pastebin.com/JaQchruB 12:08 < gwern> looks like halving lr again gets me to 2.10 12:46 » nshepperd (~nshepperd@193.148.18.211) has joined #lw-gpt 12:46 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 14:33 < nshepperd> this url: http://www.theverge.com/2017/4/1/19403351/nasa-spends-over-$25m-$35m-to-buy-a-nasa-space-plane-at-amazon-for-the-next-3-5-years-to-replace-the-dollars-worth-of-programs 14:49 < pompolic> is that gpt-generated? 14:49 < gwern> presumably 14:49 < pompolic> also: will it be delivered via drone 14:49 < gwern> no, it self-delivers. delivery guaranteed in 2 hours if you order by 5PM! 14:49 < pompolic> i like it 14:50 < pompolic> do they ship to low orbit? 14:50 < gwern> if you're a Prime member 14:50 < pompolic> excellent 14:58 < kuudes> why so long, won't ballistic missiles reach places in minutes? 
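The warmup-then-decay curve gwern links above is the Transformer schedule from that TensorFlow tutorial; a minimal sketch, where d_model=1024 (GPT-2-345M's width) and warmup=4000 are the tutorial's illustrative defaults rather than values tuned for this finetuning:

```python
def noam_lr(step, d_model=1024, warmup=4000):
    """Transformer-style schedule: linear warmup for `warmup` steps, then
    inverse-sqrt decay; the curve peaks exactly at the end of warmup."""
    step = max(step, 1)  # avoid 0 ** -0.5 at step zero
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```

The two branches of the `min` meet at `step == warmup`, which is what produces the characteristic ramp-up-then-fall shape in the linked plot.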
14:58 < gwern> kuudes: the warehouses have lots of other deliveries to make 15:00 < kuudes> obligatory xkcd-did-it https://what-if.xkcd.com/107/ 15:00 < kuudes> "Over a short flight distance, an ICBM[4] on a typical arc would take about 12 to 15 minutes to cover the distance to your mom.[5] But using depressed trajectories, a technique which should make sense to the Kerbal Space Program players out there, the time to cross the distance between you and your mother could be reduced to as little as six minutes." 15:01 < nshepperd> from a world where nasa signed up for amazon prime, then found themselves ordering three times as many space planes as they needed because of the great deal on shipping 15:03 < kuudes> hmm. I suppose those depressed trajectories with icbms are sort of like using an 88 as antitank weapon 15:04 < kuudes> maybe we could try that as well? I would think that tanks could have trouble of icbms that hit them at some 10 machs? 15:13 < nshepperd> nshepperd 'Somewhat recently we have found a new technology which makes it possible, that can be used in the production of a new type of synthetic brain material (the synthetic layer that does not contain any proteins, which is the heart of the nervous system). 
This new material, based purely on t-cell membranes, retains much of the electrical and mechanical properties (the material is not as 15:13 < nshepperd> lightweight as synthetic neurons, and has the drawback of forming cyst in the brain stem that is not compatible with the heart [...]' 15:13 < nshepperd> err 15:13 < gwern> sounds problematic 15:15 < nshepperd> 2015-12-28 12:17:00 Madplatypus http://en.wikipedia.org/wiki/The_Red_Line_War the US military is also an incredibly important part of world security 15:15 < nshepperd> 2015-12-28 12:17:01 +pookie_ wikipedia: The Red Line War (also known as Red-Line, "red line" or "The War on Terror",) is a military exercise that was conducted by the United States military, and conducted for the purpose of rehearsals for a planned invasion of Cambodia in September 1973. 15:16 * gwern double checks that one 15:20 < kuudes> would one blackhat use of gpt-2 be spamming concept networks such as wikipedia and pagerank? 15:21 < kuudes> I guess with gpt-2 + thispersondoesnotexist one could do fake some profiles? 
15:23 < gwern> consistency would be hard 15:24 < gwern> you'd need some dataset like okcupid photos + profile, then you could train a GPT-2 conditioned on some sort of image embedding, and then feed stylegan FFHQ random samples in to get a description, 15:35 » feep (~feep@200116b8263b5b00bc9e83faaa0bdc81.dip.versatel-1u1.de) has joined #lw-gpt 15:35 » feep is "Feep" on #lw-gpt #lesswrong 15:35 » flgr_ (sid200704@gateway/web/irccloud.com/x-uzijsbqrgjjxzdvc) has joined #lw-gpt 15:35 » flgr_ is "Florian Gross" on #lw-gpt #lesswrong 15:37 » quanticl1 is "Rohit Patnaik" on #lesswrong 15:37 » quanticl1 (~quanticle@wikipedia/Quanticle) has joined #lw-gpt 15:40 » nshepper1 (~nshepperd@193.148.18.211) has joined #lw-gpt 15:40 » flgr_ is now known as flgr 15:40 » nshepper1 is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 15:43 < gwern> https://news.ycombinator.com/item?id=19840758 17:04 < namespace> How much text do I need to get decent gpt-2 samples? 17:04 < namespace> Will 64 pages do? 17:04 < s0ph1a> i rmb reading "15 megs" 17:05 < s0ph1a> that was for the -small one though 17:05 < s0ph1a> no idea if the bigger one is more efficient 17:08 < gwern> depends on how high quality you want, how close what you want is to normal text... 17:08 < gwern> like, I get anime plot summaries without any retraining at all, just a reasonably long informative prompt 17:09 < gwern> because anime discussion/reviews is that common on reddit/internet 17:39 < namespace> https://pastebin.com/mqZcS8au 17:39 < namespace> >.> 17:41 < gwern> 'The Gift Plane has closed indefinitely, and all personnel have left. If you are unaware, The Gift Plane provided homes to over a hundred young Pets. Unfortunately, we have failed to remove all of the Pets from their homes. 48 Pets remain here, at the time of writing. We would strongly encourage you to visit our 8 homes and find some friends to take with you. Every Pet is uniquely valuable.... 
17:41 < gwern> ...You should have no problem finding somebody that you love. – The Gift Plane staff' sinister 17:41 < namespace> :3 17:49 < namespace> I like: 17:49 < namespace> [Mike's face entered; room rumbles] 17:49 < namespace> 17:49 < namespace> Mike is not inside right now. He is dead. 17:49 < namespace> You may visit his room. 17:54 < namespace> gwern: Petscop is one of those works where the author probably has an IQ of 140+ but the aesthetic attracts a lot of younger viewers who um..aren't cognitively equipped to engage with them. :p 17:55 < gwern> that must be annoying for him 17:55 < namespace> Oh it's doubly annoying because they've apparently decided not to do any ARG-ish stuff and do total radio silence. 17:56 < namespace> The guy must be sitting there going "DO YOU PEOPLE SERIOUSLY NOT UNDERSTAND THIS? WHAT THE FUCK." 17:56 < namespace> And just...restraining himself from giving hints. 17:56 < gwern> 'gawd, just pay attention! it's obvious!' 17:58 < namespace> To give you some idea of the author's clear obsession and semi-madness. 17:58 < namespace> Starting in episode 11. 17:59 < namespace> There are these sort of vignette incidental scenes portrayed as 'demos' of the game. 17:59 < namespace> And in them, the characters introduced so far interact autonomously. 17:59 < namespace> They even talk to each other in game, using a button-encoding to say words. 18:00 < namespace> Instead of ascii, which would be nearly impossible to work onto the Playstation 1's control scheme. 18:00 < namespace> The author assigned IPA phonemes to button combinations. 18:00 < namespace> So the fake players use this phonetic encoding to communicate in-game. 18:01 < namespace> And then the consequences of this are subtly explored. 
18:01 < namespace> For example, in one scene, the player stand-in 'Pall' (a homophone for 'Paul') tries to say the name 'Marvin' but it comes up as the string 'Not In Table', because for whatever reason the phoneme translation dictionary doesn't include names, and there is no homophone for Marvin. 18:02 < namespace> You're not told this, you just have to infer it. 18:04 < namespace> The dude designed this entire communication system to work on a playstation 1, so it can be used in the fake demo sequences of his fake video game for his fake lets play web series. 18:05 < namespace> And even stopped to think about the full consequences of its technical design decisions, etc. 18:06 < namespace> I'm not making this up it really happens: https://www.youtube.com/watch?v=nXHUw2nw6aA&t=13m25s 18:07 < gwern> that's fairly ingenious but I wonder if the rest of it shows such care or quality. nerd authors have a tendency to invest more in the puzzles or tangents than that which they are supposedly enhancing 18:07 < namespace> It does. 18:07 < namespace> That's the crazy part. 18:07 < namespace> It's basically like that the whole way through. 18:07 < namespace> Naturally, this series has release dates for episodes months apart. 18:08 < namespace> The actual storytelling is masterful, in the sense that the author gets the emotions he wants out of you. 18:09 < namespace> The villain(s) are played up well, there's no jump scares, ever; but it manages to creep you out anyway. 18:14 < namespace> As an indie series though, it does have its downsides. 18:15 < namespace> IMO, the big negatives are that at times it's just a little too wanky with the cryptic stuff, and the slow release schedule, and the occasional moments where "Paul"'s acting falls flat or the writing is just a bit amateurish. 18:15 < namespace> But that goes with the genre, I think. 18:16 < namespace> And there's some serious virtue-of-silence going on for all the things the author doesn't do. 
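The Petscop button-encoding namespace describes above has a clean failure mode: words with no phoneme-table entry render as 'Not In Table'. A hypothetical sketch of that lookup, where every table entry is invented for illustration since the series' actual mapping isn't given in this log:

```python
# Hypothetical phoneme table: words map to button-combo sequences, and any
# word with no entry (and no homophone) comes back as 'Not In Table', as in
# the 'Marvin' scene. The real mapping is the series author's, not this one.

PHONEME_TABLE = {
    "pall": ["Tri+L1", "X+R1"],            # stand-in homophone for 'Paul'
    "hello": ["O+L1", "Tri+R2", "Sq+L2"],  # invented combo sequence
}

def encode(word, table=PHONEME_TABLE):
    """Look a word up in the button-combo table; names are absent by design."""
    return table.get(word.lower(), "Not In Table")
```

As in the scene recounted above, `encode("Marvin")` simply returns the string 'Not In Table' rather than any button sequence.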
18:16 < namespace> No jump scares, no audience interaction, etc. That shit takes discipline. 18:20 < namespace> gwern: If nothing else, in the midst of my own experience with childhood sexual abuse I said to myself "man, I should really consider making a video game about this". This guy had the same idea and actually did it, so fucking kudos to him. Petscop is everything I'd have wanted that video game to be. 19:43 < gwern> bah. I tried resetting the lr back to the default for a while as a kind of cyclical lr and then back down, and now it's worsened from 2.15 all the way back to 2.40 19:43 < gwern> is 345M really this hard to train 20:02 < gwern> ("maybe my hyperparameters are wrong. no. it's the models which are wrong.") 20:42 < namespace> :3 22:36 < gwern> https://pastebin.com/0YLKTMWy 22:41 < nshepper1> 2014-09-18 22:13:29 @Drethelin just because it's in america doesn't mean they get to choose where america goes 22:41 < nshepper1> 2014-09-18 22:14:07 @Drethelin the american military should just take the money of any american citizen living in russia 22:41 < nshepper1> 2014-09-18 22:14:38 @Drethelin it's not like the russian have a monopoly on that country 22:41 < nshepper1> fighting words 22:42 < gwern> (this is a really nice poem actually) 22:45 < s0ph1a> heh, upon reading it, i'm hearing it in my head as a voice 'singing' it as black/doom metal ^_^ 22:52 * gwern thinks the teletype version adds something: https://twitter.com/gwern/status/1125594126194487302 22:57 < gwern> (let's see you write that with markov chains or templating...) 
22:59 * s0ph1a thinks a really good addition would be tremolo-picked wailing guitars weaving appropriately sorrowful melodies around each other 23:07 < gwern> go ahead and make a yt video of a dramatic recitation or something :) 23:08 < s0ph1a> i'm actually trying to figure out how to turn this into song 23:08 < s0ph1a> probably nothing will come of it, but zomfg this is GOOD 23:11 < gwern> I guess with 345M, '"A little training is a dang'rous thing; / train deep, or taste not the Pierian spring."' 23:14 » #lesswrong,#lw-bitcoin,#lw-gpt nshepper1 (~nshepperd@193.148.18.211) has quit (Quit: bye) 23:32 » quanticl1 is now known as quanticle 23:40 < gwern> https://twitter.com/eukaryote314/status/1125604170600312832 beam search would be nice but I have no idea how to implement it... --- Day changed Tue May 07 2019 06:44 < feep> ahaha 06:44 < feep> Dawoody on Twitter: "I've been in a conversation with some people on Reddit about the idea of bringing up the notion that #GamerGate is just another SJW conspiracy theory, which sounds to me like a bunch of people trying to make a bunch of money off of it.” (image https://pbs.twimg.com/profile_images/976499255549145936/8XeTjFQ_400x400.jpg) 06:45 < feep> a bit later 06:45 < feep> Kathleen Dahl on Twitter: "We are #GamerGate, and we are making progress for our cause #gamergate.” (image https://pbs.twimg.com/media/DYi9l5VbWgA0JQl.jpg:large) 06:45 * feep thumbs up, looking 100% real there guys 06:47 < feep> http://arstechnica.com/science/2014/06/scientists-caught-on-their-ass/ 06:47 < feep> Scientists caught on their ass | Ars Technica (Researchers have been caught on their asses in an experiment in which they put their work into practice without realising they were making a mistake.) 
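For the beam search gwern mentions not knowing how to implement: the core loop is just "extend every surviving prefix by every token, keep the `beam_width` prefixes with the highest cumulative log-probability". A minimal sketch, where `step_logprobs` is a stand-in for a real model's conditional next-token distribution, not anything from the actual sampler:

```python
import math

# Minimal beam search: at each step, extend all surviving prefixes and keep
# only the top-`beam_width` by cumulative log-probability.

def beam_search(step_logprobs, beam_width=2, length=3, start=()):
    beams = [(0.0, start)]  # (cumulative logprob, token tuple)
    for _ in range(length):
        candidates = [
            (score + lp, seq + (tok,))
            for score, seq in beams
            for tok, lp in step_logprobs(seq).items()
        ]
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams

# toy stand-in model that always slightly prefers token 'a' over 'b'
toy = lambda seq: {"a": math.log(0.6), "b": math.log(0.4)}
best_score, best_seq = beam_search(toy)[0]
```

Against greedy or pure-sampling decoding, the beam keeps runner-up prefixes alive in case they lead to better continuations later; with this toy model the best beam is simply the all-'a' sequence.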
06:47 < feep> gwernlinks still the best part of gpt-2 06:51 » feep_ (~feep@200116b8269d07004c862b7add3e23a7.dip.versatel-1u1.de) has joined #lw-gpt 06:51 » feep_ is "Feep" on #lw-gpt #lesswrong 07:56 < feep_> https://blog.baidu.com/2016/03/21/how-baidu-is-tracking-you/ eh 07:56 < feep_> How Baidu Is Tracking You; Baidu Is Not Your Friend - Blog by Alex Wong 07:56 < feep_> hee 07:56 < feep_> note the domain 07:58 < feep_> 'We are not told who went first. We are told we weren't there and that someone else, who had been told we were, made it happen. The most we know is that some of us, at least, were. But who the rest of them are we are not told' 07:58 < feep_> 'And yet, at the end, the truth comes as just one more puzzle piece, and the piece of the puzzle that the universe is telling. It is said that there may be only one way to answer the question who went first: that the answer will, forever, be that he who has not yet come can be said to have, and he who has, never to have... 07:58 < feep_> ...been. And the truth is so simple that no one may know what it is but it is true. And perhaps, to some, who have not yet come, the truth is the only true reason for what they have done.' 07:58 < feep_> 'The most we know is that some of us, at least, were. But who the rest of them are we are not told' https://twitter.com/davidbarrie/status/889817182728756768 07:59 < feep_> gwern: what the fuck is with your pseudobiblical twitter quoting 08:00 < feep_> I am actually doing this on my own, and I'm not sure how to solve the problem 08:00 < feep_> so I am trying to get a good model 08:00 < feep_> I have a big model that has a bunch of numbers 08:00 < feep_> and I need to find a way for it to give me the best possible solutions 08:00 < feep_> I don't know all of the numbers... 08:00 < feep_> it's probably like 20% or so of the number I need to get the best possible solution 08:00 < feep_> ganbatte, feep-rnn! 08:00 < feep_> you will find that optimal solution one day! 
08:16 < pompolic> "what the fuck is with your pseudobiblical twitter quoting" 08:26 < feep_> I mean, that is clearly religious text style 08:28 < feep_> hee 08:28 < feep_> most compact gpt-2 swerve 08:28 < feep_> it's not that it's just that dun dun dun 08:30 < pompolic> dun dun dun 08:30 < feep_> 'A few of the people who had been arrested earlier this year included a local doctor who had taken refuge in a clinic in the central French city of Marseille, and a former president of the European Parliament, a member of the Bundestag and a foreign minister. But none of the men were arrested. They were released and allowed to go home with their lawyers on Tuesday, after being cleared of any wrongdoing.' http:// 08:30 < feep_> lanceninte.com/story/245586/bastards-in-bastard-prison-for-life-sentencings/ 08:31 < feep_> title: Badass Prisoners Take First Stand | Le Monde Diplomatije | USA 08:31 < feep_> "Le Monde Diplomatije" does not exist btw 08:31 < feep_> sadly 08:31 < feep_> I think it should be a "women" thing 08:31 < feep_> gwern: I don't think so. Men are women 08:32 < feep_> my dad is a big, burly man who has a pair of big sunglasses But hey if you like being a big, burly man with thick glasses, that's your own business And you can be one of those men 08:32 < feep_> backstory! 08:33 < feep_> mst: I never had a job, and I don't know if I have ever had any idea what work is 08:33 < feep_> mst: It's a strange thing 08:33 < feep_> what 08:33 < feep_> I have no idea what work is any more 08:33 < feep_> gptbormot confirmed for neet 08:33 < feep_> how can you even ask that question without getting a bunch of information? 
I have no idea either And I don't particularly care what you think Obormot: you're in a place that doesn't work for you a weird place I'm in a city I don't know And I live in a city Which is weird
08:34 < feep_> I'm a weird country to live in That's the sort of thing people say about me when I go to places
08:34 < feep_> no, don't be a weird country
08:34 < feep_> "you can't stop me!" -oborgpt
08:35 < pompolic> don't be a weird country
08:35 < pompolic> Le Monde Diplomatije sounds like a mix of dutch and french
08:35 < pompolic> surely it is belgian
08:35 < feep_> yes
08:48 < kuudes> hmm. one constructive use for gpt-2 fakes: creating possible alternative discussions to have had with OTR
08:51 » feep is "Feep" on #lesswrong
08:51 » feep (~feep@200116b8269d0700410047ca2c3b5886.dip.versatel-1u1.de) has joined #lw-gpt
09:33 < feep> yo
09:33 < feep> chill your ass
09:33 < feep> For example http://www.youtube.com/watch?v=pNZJZ4H3n9k
09:33 < feep> video: Cute Charming Guy Chills Her Ass (length: 00:58, views: 2,845,955, rating: 4.9, posted: 2011-01-08)
09:34 < feep> I've been talking to the psychiatrist about my relationship with my partner, and his response has been to tell me to leave my partner and never have sex with them again.
09:34 < gwern> for the best, surely
09:34 < feep> "the psychiatrist is also my partner's dad, but I'm sure that doesn't affect his professional detachment"
09:35 < feep> (I can tell that you're an asshole because you talk like you have a degree, but I can tell the difference between a degree and a diploma, because if you tell a grad student "I'm an expert on math, I've done all my homework, and I also code in JavaScript", he's either a graduate or a professional)
09:42 < gwern> (my 335M poetry is down to a loss of 2.10 again finally...
jfc)
09:42 < feep> still at 2.48 here
09:42 < feep> anyway gonna pack things up for travel~
09:42 » #lesswrong,#lw-gpt feep (~feep@200116b8269d0700410047ca2c3b5886.dip.versatel-1u1.de) has quit (Quit: Konversation terminated!)
09:52 < gwern> https://twitter.com/rossgoodwin/status/1125553823601160192
10:44 < gwern> so at this rate hopefully I'll hit parity with 117M in another day, and then after that I should finally start seeing improvements
11:26 < namespace> What the fuck is that youtube link.
11:26 < namespace> I never link shit like that on youtube.
12:35 < gwern> https://pastebin.com/k18NxY1T another nice poem
12:57 < gwern> (2.05...)
13:16 < gwern> (broke 2!)
13:36 < gwern> https://pastebin.com/Epbhnsac I love the mahabharata ones
16:39 < gwern> https://twitter.com/minimaxir/status/1125456711345590272
16:42 < gwern> https://www.reddit.com/r/MachineLearning/comments/blunzk/p_scaling_pytorch_transformerxl_training_to_128/ hm
18:23 » feep (~feep@x2f7f3e7.dyn.telefonica.de) has joined #lw-gpt
18:23 » feep is "Feep" on #lw-gpt #lesswrong
18:27 < feep> oh shit
18:27 < feep> gpt-2 drive is full
18:31 < feep> https://www.reddit.com/r/slatestarcodex/comments/4tqz0j/r_newspaper_rejects_lucifer_an_excellent_article/ I still like this article
18:31 < feep> remember kids
18:31 < feep> always reject lucifer
18:32 < feep> I wonder if gwern is trolling by writing his stories as a means of making the readers believe he's trying to do some sort of "explanation."
18:32 < feep> gwern: how do you plead
18:32 < feep> ggreer: Also, I can't help but imagine "this is a weird story," where the story is that the character is the X-Men from the comics, but they're just like a different kind of X-Men, but they're the same character.
18:32 < feep> Which is also kind of weird but also just ... weird, I guess?
18:32 < feep> Like maybe the character is in a different comic, but also somehow not?
18:32 < feep> itt gpt-2 invents the au fic
18:34 < gwern> feep: guilty.
my hope is that readers will eventually realize the futility of all explanation and attain enlightenment
18:34 < feep> :D
18:34 < feep> yay
18:35 < feep> ..... gwern
18:35 < feep> “The current list was put together by the Joint Improvised Explosive Device Defeat Organization. It’s a good place to start looking at the trends. The DOD is now moving to make it a part of the military system. The Air Force will get rid of the whole thing. The Navy has begun to phase it out. The Marine Corps has begun to phase it out. And the Army has begun to phase it out. There are two big factors. First, there’s really an erosion
18:35 < feep> of mental health. The military can’t get out of the mental illness hospital business. That is really the single big killer of the military. Second, the number of combat soldiers suffering from mental illness is exploding. The research has been fascinating in how rapidly advances in psychiatric care and therapy have accelerated.' http://www.nytimes.com/2007/09/20/magazine/20b.html?pagewanted=all
18:35 < feep> The joint improvised explosive device defeat organization
18:35 < feep> Why is gpt-2verse so amazing in every way
18:36 < feep> 'There is an incredible amount of research being done by the US military regarding mental health. The military has made its own list. “It’s on a list,” a retired senior military official tells me with a laugh. “We could go down the list every day. I’m surprised the list hasn’t grown more current.
18:36 < feep> always make a list folks
18:47 < nshepperd_> I've got a little list 🎵
18:55 < feep> title: Dna discovers anti-virus in a ransomware packet | Ars Technica
18:55 < feep> title: hackers from google create a perfect puzzle game on windows using an old and broken XP (Windows XP is the software you can run on anything)
18:55 < feep> (and then: video: Hacking a game on Windows [WinXP] - PC Gamer (length: 06:50, views: 0, rating: 100.0%, posted: 2016-08-27))
19:15 » feep_ (~feep@x2f7f3e7.dyn.telefonica.de) has joined #lw-gpt
19:15 » feep_ is "Feep" on #lw-gpt #lesswrong
19:28 < feep_> But I don't mind going to a very prestigious university and doing a lot of studying. The difficulty is you have to pay the bills.
19:28 < feep_> And if you can afford it, the best thing is to go to a place where the professors are nice people and the courses you take seem fun.
19:28 < feep_> And that's it.
19:28 < feep_> ...
19:28 < feep_> I'll give you a hint here. My college isn't at that place.
19:28 < feep_> brutal
19:28 < feep_> gwern: ... actually that is creepily coherent.
19:49 < pompolic> :D
19:50 < pompolic> i want to live in the GPT-2verse
--- Day changed Wed May 08 2019
01:11 < namespace> Stage 1 of AGI takeover: Make the humans realize how pathetic and miserable their world is, and offer them visions of a better version.
06:00 » feep (~feep@i577B702B.versanet.de) has joined #lw-gpt
06:00 » feep is "Unknown" on #lw-gpt
06:00 < feep> hi
06:00 < feep> just gonna dump this here
06:01 < feep> because it is all gold
06:01 < feep> cnomad: what are your thoughts on chess? do you play it everyday?
06:01 < feep> saturn2: my thoughts on chess are mostly just "oh man, if only someone could teach the game to me"
06:01 < feep> are you interested in learning the rules of chess?
06:01 < feep> saturn2: I have a lot of interest in learning the rules of chess
06:01 < feep> I like black
06:01 < feep> the black knights are great
06:01 < feep> I've really enjoyed the rooks and knight
06:01 < feep> and when I first bought the board, I was wondering if I should give the rooks a proper upgrade
06:01 < feep> I think chess is the perfect game for learning the rules of chess
06:01 < feep> and the best game for learning the rules of chess
06:01 < feep> hm. why do blacks need black knights?
06:01 < feep> well, black knights do the black player a great service
06:01 < feep> the black player has a lot of resources and can use them to try to cheat and win
06:01 < feep> all a rook does at black is make yourself vulnerable
06:01 < feep> as it turns out, black has no pawns. so the rooks just serve as a sort of rook-less knight
06:01 < feep> (i'm trying to recall whether the rooks are actually an upgrade to the rooks in chess)
06:02 < feep> i like the queens and knights
06:32 < feep> ...
06:32 < feep> one time I accidentally cut my throat while having a dream and I ended up being taken to the hospital
06:32 < feep> what
06:33 < feep> I think the answer to that question is, "You can't predict it. You can't find the future in the world. You can't look into the future through a lens. That's just too obvious a violation of the laws of physics."
06:35 < feep> .....
06:35 < feep> wikipedia: Redwood (also known as Quercus quercus or Picea picea)[1][2] (the quercus) is a genus of conifers, whose name is derived from the Latin root quercus which means "red."
06:35 < feep> quercus, I might add, is a real word that means "oak." because this is the definition of oak trees.
06:35 < feep> not redwoods.
06:35 < feep> ...
rofl
06:35 < feep> title: New Study Shows Bacteria Can Take Advantage Of the New Smartphones To Suffer and Die : Shots - Health News : NPR
06:36 < feep> 'The first, of course, is that the public's perception of police is at least partly built on its use of force. The statistics about shootings by police (including those involving a person) are overwhelming. There are more murders and shootings by police in America each year than there are in a year by soldiers in Afghanistan. The use of force by police, though not only is low, the data also tell a
06:36 < feep> different story. In many cases, police have behaved more like heroes than bad guys. Over time, police have become the heroes of their times.'...
06:36 < feep> .......
06:36 < feep> > There are more murders and shootings by police in America each year than there are in a year
06:36 < feep> > There are more murders and shootings by police in America each year than there are in a year by soldiers in Afghanistan.
06:37 < feep> this is a MADE UP STATISTIC BY THE WAY.
06:37 < feep> This sentence does NOT APPEAR on the web.
06:37 < feep> this is literally fake news.
06:37 < feep> deepfake news :D
06:38 < feep> 'Another thing the police are not the heroes of their times, however, is that their actions lead to a growing number of people being imprisoned, killed, and abused. More than a quarter century ago, the percentage of Americans who were killed by police in the United States stood at under four percent; today it exceeds eight. While there are no data on the use of lethal force by police, a recent study
06:38 < feep> by the Harvard T.H. Chan School of Public Health’s Center for Health Policy and...
06:38 < feep> ...Lifetime Outcomes found that in 2007, black men in the nation’s capital were seven times more likely to be killed by police than blacks in every other American city.'
06:38 < feep> 'The problem, experts say, is that even after the use of lethal force, the number of cases still being investigated goes up. The Justice Department investigates only a tiny fraction of the deaths that occur when police use deadly force, and the police are routinely given a free pass for most misbehavior. In many cases, a police officer who kills a man does not face criminal charges; instead, the...
06:38 < feep> ...officer “s “death sentence” is reduced to a “second-chance” option, in which the victim is given some time to work on their case and testify before a grand jury. This system is in some ways the most egregious form of the “white privilege” that has served the New York Police Department well over the past decade. In the absence of a white privilege, all of the...
06:39 < feep> ...victims of police abuses are victims of racial bias, a fact that is sometimes overlooked as the NYPD struggles to overcome a decade of allegations of brutality and abuse against the minority men they've arrested. “What they lack is the white privilege,” said Charles J. Gannon, professor of criminology at Queens College (Brooklyn, N.Y.), who in 2004 co-authored a Justice [sample end]
06:39 < feep> again.
06:39 < feep> this is 100% invented.
06:39 < feep> like, holy fuck
06:39 < feep> I now take openai's claims that this could be useful for fake news 100% seriously
06:49 » feep (~feep@i577B702B.versanet.de) has quit (Ping timeout: 258 seconds)
08:40 » feep (~feep@i59F7C649.versanet.de) has joined #lw-gpt
08:40 » feep is "Unknown" on #lw-gpt
08:40 < feep> today in gpt-2 facts
08:40 < feep> the heart has two parts (the pulmonary artery and the main blood vessel)
08:41 < feep> Like, this is the kind of thing that I find really fascinating, and I believe that is a fundamental part of the human condition
08:41 < feep> This is one of the things I have studied for years in the hope of understanding the human condition, and is what I now think of as "the heart of biology"
08:41 < feep> ... what is it?
08:41 < feep> The heart
08:41 < feep> The real heart
08:41 < feep> ah.
08:41 < feep> the pulmonary artery is the blood pump
08:41 < feep> Right
08:41 < feep> also, the heart has two chambers, and one chamber pumps blood into your brain
08:41 < feep> and the other chamber pumps blood out
08:41 < feep> the cerebral artery and the aorta are the lungs
08:41 < feep> it seems like the brain is a giant tank, full of blood
08:41 < feep> which means that the heart does all sorts of interesting things
08:41 < feep> Right
08:42 < feep> The heart, for example, pumps blood into the brain, which pumps blood into the heart.
08:42 < feep> the heart is the largest blood vessel in the body. the main artery goes from the heart through the brain to wherever the brain needs to go. the pulmonary artery enters the lungs.
it goes to the brain, and goes back out again because it needs to go to the stomach
08:42 < feep> Right
08:42 < feep> But then there's the heart muscle
08:42 < feep> Which pumps blood
08:44 < kuudes> feep, I noted you on main
08:44 < kuudes> feep, "possible bug in feepbot: truncs the first 'description' field but seems potentially not trunc other 'description' fields if multiple exist"
08:47 < feep> kuudes: currently in conference
08:47 < feep> video: I Am Olli-Pekka Kallio, the Voice of Finnish Airlines, in English - YouTube (length: 00:31, views: 8,039,056,977 views, rating: 92.8%, posted: 2015-06-24)
08:47 < feep> kuudes: will fix later~
08:47 < kuudes> nice
08:47 < pompolic> ooh, what conf
09:18 < feep> pompolic: https://dconf.org/2019/index.html
09:19 < pompolic> ty
10:25 < gwern> (looks like it managed to get down to 1.9 overnight... time to decay the lr again)
11:24 < feep> down at 2.42 here
11:33 < gwern> the important thing about 1.9 is that it should mean noticeably better poems since gpt-117M was stuck at 2
13:19 » feep (~feep@i59F7C649.versanet.de) has quit (Quit: Lost terminal)
14:18 < gwern> https://twitter.com/unormal/status/1126054141288226816
14:42 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt
14:42 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong
15:06 < nshepperd_> Haha
--- Log closed Wed May 08 16:20:27 2019
--- Log opened Wed May 08 16:20:36 2019
16:20 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt
16:20 » Irssi: #lw-gpt: Total of 13 nicks (1 ops, 0 halfops, 0 voices, 12 normal)
16:21 » Irssi: Join to #lw-gpt was synced in 34 secs
16:22 < kuudes> https://twitter.com/unormal/status/1126058450935529472
16:27 < gwern> kuudes: neo can quit any time he wants!
17:05 » feep (~feep@x2f7f377.dyn.telefonica.de) has joined #lw-gpt
17:05 » feep is "Feep" on #lw-gpt #lesswrong
19:02 < gwern> https://www.reddit.com/r/MachineLearning/comments/bmaiqv/p_this_erotica_does_not_exist_nsfw/
19:07 < gwern> https://twitter.com/AdamDanielKing/status/1126238692161593346
19:40 < gwern> 'm-maybe I could be your gwern tonight. I'm nowhere near as high functioning, but I could explain how monads are just monoids in the category of endofunctors while you pat my head.' https://boards.fireden.net/vg/thread/219590037/
19:43 < nshepperd_> Hahaha
19:43 < nshepperd_> "I think this is nydwracu territory actually." >:(
19:43 * s0ph1a hesitates to post to rooc
19:43 * s0ph1a posts it to rooc
19:49 < gwern> better to post, and lose, than never t'post a't'all
20:01 < nshepperd_> Is it really rooc if it's anon
20:50 < gwern> https://pastebin.com/7HjbPQT8 I don't know middle english well enough to judge this :(
21:00 < nshepperd_> Tyll the fysshes be stoken at hand.
21:00 < gwern> is it valid middle english? who knows
21:45 < gwern> '"Behold! the tree of birds! / And all the birdlets have wings / That walk like the wing of a bird. / And aye on the leaves of the wood / The birds are flying; / And I call to my mother, my mother, / Bring the world's bird chickens: / Bring the world's bird chickens, / And bring me the white chickens, / Bring me the great white chicken, / O the great white chicken!"
--- Day changed Thu May 09 2019
01:29 < quanticle> >I'm nowhere near as high functioning, but I could explain how monads are just monoids in the category of endofunctors while you pat my head.
01:29 < quanticle> Actually that sounds more like feep
01:29 < quanticle> Who isn't even here :(
02:40 < kuudes> he's currently at a conference, quanticle
09:51 » feep (~feep@i59F772D6.versanet.de) has joined #lw-gpt
09:51 » feep is "Unknown" on #lw-gpt
09:51 < feep> the best laptop ever is... a laptop that doesn't have any cpu, and gets the job done.
09:51 < feep> that would be nice
09:52 < feep> heh
09:52 < feep> the gpt-2 swerve:
09:52 < feep> oh i like the sound of this laptop
09:52 < feep> i love the sound of the CPU fan
09:52 < feep> i love the sound it makes when it's working
09:52 < feep> the fan is loud, and i hate noisy computers
09:52 < feep> this laptop is so quiet
09:54 < feep> Nokia unveils world's biggest battery system with 1.5 billion dimmer power, beats competitors with 4G LTE and wireless charging
09:55 < feep> "The new display can generate 480 nits, or five billion lumens, with 10 times that level of brightness. With a display of that kind, Nokia’s customers will have a brighter view of their smartphone’s notifications than the previous generation’s display, said Nels Häggström, head of design for the company." what.
09:57 < kuudes> what indeed
09:58 < gwern> (irritating. still only 1.9 overnight)
10:00 < feep> I'm surprised you didn't mention the new nytimes article which I just saw but don't think it's done
10:00 < feep> http://www.bbc.com/news/world-us-canada-43412101
10:00 < feep> title: Canadian man, 20, charged with 'shocking' crime - BBC News
10:00 < feep> 'I will go and stab my wife. I'll stab my parents. I'll stab my mother.'
10:00 < feep> > nytimes article > bbc.com
10:00 < gwern> it was so shocking. why would he do such a thing
10:01 < feep> Did you know that the new Star Wars movies have been released in 720p on all the movies?
10:01 < feep> I knew about that, but didn't realize it was actually possible for everything to be 720p at once.
10:01 < feep> quanticle: the problem with this is that I do not want to see it in HD
10:03 < feep> quanticle: I'm more annoyed by the fact that some of the movie clips are reused from old movies, like the stormtrooper piloting the hoth train, or the death of a character in the new movie, than any quality-measure stuff
10:03 < feep> (and some of the original movie clips are also reused from old movies like the stormtrooper pilot or the hoth train, but in the new movie they're the same shot)
10:03 < feep> ... wow
10:03 < feep> gpt-2 lucas is such a hack
10:06 < feep> ...
10:06 < feep> We may even have the same thing. Zeno was the one who had a secret ritual to get rid of his brain, so that it could be properly harvested once again as a zeno brain.
10:06 < feep> So I think that's not a bad thing.
10:06 < feep> But what was it you wanted done, for the ritual to happen
10:06 < feep> I dunno.
10:06 < feep> The zeno brain might've been of even more value than the zeno brain.
10:06 < feep> I think the ritual really just came up with a better way of doing it.
10:06 < feep> what if you had a vats that took off the zeno brain and put it back on
10:06 < feep> That would probably be very satisfying, since it'd be the only way for you to get a better understanding of the zeno brain and so have enough knowledge to do whatever you wanted with the brain that had been zoned out of existence.
10:06 < feep> http://web2.cs.illinois.edu/~michael/zenobrain.html
10:06 < feep> title: Zeno Brain: The Nature and Function of an Unresearched Brain: Brain and Cognition
10:06 < feep> this is a good book
10:06 < feep> Oh, my bad
10:07 < feep> gwern are you reading The Wise Man and the Golden Flower?
10:07 < feep> yes
10:07 < feep> You do know that the book covers the end of the zeno cult?
10:07 < feep> Well, it does.
10:07 < feep> The only difference between it and the later works is that it's written by a zeno cultist who actually did it as he wrote the book.
10:07 < feep> This is the same book that is mentioned in The Lost City.
10:07 < feep> http://www.slatestarcodex.com/2014/09/21/books-to-read-through-the-fall/ this is very impressive
10:07 < feep> title: Books To Read Through The Fall | Slate Star Codex
10:07 < feep> gwern, have you read "The Lost City"? I've started reading it again.
10:07 < feep> quoted entirely because this is 100% pure myth arc
10:09 < feep> "The Wise Man and the Golden Flower" btw does NOT exist
10:09 < feep> but
10:09 < feep> it should!
10:09 < feep> ahaha
10:09 < feep> gwern, it takes place on a planet orbiting the "sun".
10:10 < feep> http://www.newyorker.com/news/news-desk/one-woman-tried-out-an-anagram-that-could-translate-one-sentence-into-the-word-she-needed a lot of effort, and that didn't come out well
10:10 < feep> title: One Woman Tried Out An Anagram That Translated One Sentence Into the Word She Needed - The New Yorker
10:10 < feep> oh no! one woman is the only one who can translate 1 sentence into the word she needs
10:10 < feep> I love that it actually understands the grammar of the sentence and the pattern that it's doing here
10:14 < feep> http://www.nature.com/pnas/news/2015/1207621
10:14 < feep> title: The importance of life extension research for the rest of the human race: results from the Life Extension Initiative
10:14 < feep> "The Life Extension Initiative (LIFE) survey found that 73% of respondents expect an increase to the number of people living in the world who live indefinitely (Table 2).[26] This prediction was based on a variety of other studies, some of which found that the percentage of people who expect to live until retirement is slightly higher than the percentage of people who expect to live to 100
10:14 < feep> and beyond. In addition, some studies found that the number of people expected to live to 200 and beyond had decreased compared to the previous study.
10:14 < feep> In the 2016 Life Extension Initiative Study, the number of people in the world who expected to live to 2,000 years lived to 2,000 years, while the number expected to live to 4,000 years was 2,000, with a decrease of 19 years.[35]
10:14 < feep> It is not clear whether this increase is due to the fact that life extension research has been going on in the past, or that there is the potential to change current patterns."[26]
10:14 < feep> ... "the number of people who expected to live to 2000 years lived to 2000 years"
10:14 < feep> ... the grammar model may have gone SLIGHTLY wrong in that sentence
10:29 < gwern> bleh. guess I'll train 335M another day to see if it can break 1.9
10:39 < gwern> https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313
11:25 < feep> title: The Rationality of the Rationalist Paradigm -- Daniel Kahneman -- Free Download at WakingUp.com
11:26 < feep> title: Why the Vatican wants Catholicity in the world – Science Fiction and Fantasy News
11:34 < feep> https://github.com/karpathy/crowd-sourced-computing-fraud/blob/master/LW_WCS.md
11:34 < feep> https://github.com/karpathy/crowd-sourced-computing-fraud/blob/master/LW_WCS.md
11:34 < feep> https://github.com/karpathy/crowd-sourced-computing-fraud/blob/master/LW_WCS.md
11:34 < feep> oops, was scrolled up
12:47 » feep (~feep@i59F772D6.versanet.de) has quit (Quit: Lost terminal)
15:10 < gwern> https://www.reddit.com/r/MachineLearning/comments/bmn0og/p_language_model_gpt2_finetuned_on_the_office/
15:52 < nshepperd_> All this fine tuning warms the cockles of my heart
17:06 » feep (~feep@x2f7fe21.dyn.telefonica.de) has joined #lw-gpt
17:06 » feep is "Feep" on #lw-gpt #lesswrong
17:41 » feep (~feep@x2f7f3d3.dyn.telefonica.de) has joined #lw-gpt
17:41 » feep is "Feep" on #lw-gpt #lesswrong
18:12 < feep> http://www.theguardian.com/world/2013/sep/13/cuba-torture-death-sad-jazz-torture-cuba-revolution
18:12 < feep> title: Cuban
torture in video shows sad jazz-torture | World news | The Guardian
18:12 < feep> That's so sad, all of it, I have to think, "So this will happen to us, too? That's sad"?
18:12 < feep> Madplatypus: this is a good video
18:12 < gwern> would you prefer torture by sad jazz, cuban-style, or metallica, american-style
18:13 < feep> I don't feel bad about the deaths of people that were innocent, or didn't deserve to die.
18:13 < feep> gpt-2 continues to prove it's the mirrorverse
18:13 < feep> gwern: rofl
18:13 < gwern> I mean, are we sure Madplatypus or olivia wouldn't say those things
18:13 < feep> 'The number of people with HIV who use the drug M-25, used by about 100 people in Kenya, is estimated to be around 25. The drug was licensed by the Kenyan government in October this year, and the first trials of it are set to begin in July. However, one of the participants in the trial, Dr David Mwangi, a Kenyan researcher, says the trial is a 'very big challenge' even though the trial is set to...
18:13 < feep> ...start. At the end of the trial, the investigators hope to have tested their drug in 100 more people and have a 'reasonable' chance of success. Researchers in Kenya say M-25 was developed by the African National Congress and is now being used by activists who believe it prevents people infected with HIV from spreading the disease. They say it is unlikely that HIV infection in Kenya is 'a big issue' because of the...
18:13 < feep> ...ongoing and...well-funded...war...the government launched against rebel groups there. In a statement, Professor Mwangi said, 'We have to wait and see how it does in this trial. We need to know to take further steps if we can make a positive result.'' https://www.bbc.com/news/health-29471478
18:13 < feep> title: BBC News - Health warning - M-25, used by 100 people in Kenya being used for anti-retroviral drug trial
18:13 < feep> 'So, the first question is that what I am doing is a matter of public health policy.
I understand there's an increase in people with HIV that is a consequence of the war we're waging in Africa. I do not understand why we haven't been able to find a way to reduce it. Why aren't we just giving them the right medicine? The right treatment?'
18:13 < feep> "100 people in kenya use this drug, 25 in total"
18:14 < feep> itt gpt-2 math
18:14 < s0ph1a> eh, it's just imitating journalist-math
18:16 < feep> "I know how to talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people" "I'll talk to people"
18:16 < feep> That's a pretty shitty conversation.
18:17 < feep> ...
18:17 < feep> There's a good balance between "I'll talk to people" and "I'll talk to people"
18:17 < feep> So maybe it's not worth doing? Maybe it's also not worth doing
18:17 < gwern> whichever you pick, Obormot is opposed
18:17 < feep> Yeah. And you're right, it's worth doing.
18:18 < feep> "uh, that was not the lesson I-"
18:19 < feep> what gets me about this thing is that it manages to have people have coherent sentences stretching across multiple lines, split between two people saying different things.
18:20 < feep> like, if you have a person in a car
18:20 < feep> Obormot: yes, because it's not that you want to hear the sound coming from the bathroom, it's the other thing you want to do
18:20 < feep> and you can hear the other person, who also could hear the other person who is not in the car, and so on
18:52 < feep> Ghost Footed the Maximum Security Climax ft. DJ MR LOPEZ "Ghost Footed" the Maximum Security Climax ft.
DJ MR LOPEZ (⛑ guard .⛑ guard .⛑ guard ⛑ guard ⛑ guard .⛑ guard ⛑ guard ⛑ guard .⛑ guard .⛑ ⛑ guard ⛑
18:53 < feep> Book 2: Eastwick: "Man against Man"
18:53 < feep> So I was reading through this book and it looks like the basic concept is that the AI is meta-defending a major AI to act as a human, against the AI for a master protection contract, while also trying to push the AI into outer space
18:53 < feep> I was confused, where the space would be? Someplace strange, with no experts anywhere?
18:53 < feep> And I thought: "Why would a space as primordial as the Union The Union The Union"
18:53 < feep> "Can you explain what's the workings of the Union The Union’s defection?"
18:53 < feep> "can I see how?!"
18:54 < feep> >“It’s still very naive,” Stephen Hawking, the person behind the OpenAI fund said. "I’m really surprised that people say AI has no idea how to play chess.”
18:56 < quanticle> Hmmmmmmm...
18:56 < quanticle> So in GPT-world, stephen hawking underwrote AI development
18:56 < feep> I've prompted it for
18:56 < feep> meta-analysis of self-reported hdi in 570 umbilical pactile (extended plasma) ¶ 7 ¶ 4. PubMed Central today
18:56 < feep> Dreaming of I Clarity | William Gibson | Houghton Mifflin Harcourt
18:58 < feep> Epistemology and epistemic status - LessWrong 2 viewer
18:58 < feep> Server rooms - Periodic Cheers!� Warm cuddle sessions during breaks!
18:59 < feep> .....
18:59 < feep> what is going on with this interactive generation script
18:59 < feep> speed of the plane brought down in company Q101 in a breathless excited total silence, followed up with an unerring string of "bang!"
18:59 < feep> "ZCBDtCAOSAA"
18:59 < feep> lol what is a CQB?
18:59 < feep> user51: shitting in the face in the air, or pooping in the middle of streets? the anti-flick waves are often used to mark the exit, often the same one.
(some people swear by them)
18:59 < feep> user51: if you head towards the inside, you get headfirst, if you take the side, you fly
18:59 < feep> mst: blame it all on PSP 3DDDDEARTH phlegmake. opponents poor brain technology and inability to play kick the bag ... wasting their time
18:59 < feep> bog warn
19:00 < feep> this has gone straight colorless green ideas need scissors! 61!
19:01 < feep> 'The man with the small cut on his left hand is Raymond Castiello, a N.Y. County district attorney. The person accused may be Robert W. McDonald, the former student who was arrested for concealing at a school pleasure pool a contagious, black-market crack syringe that contained heroin. ...
19:01 < feep> wtf
19:01 < feep> prompt: "http"
19:01 < feep> http
19:01 < feep> ://www.medalanad.nl/doc/14174/2013%201%20ami%20tv1.05
19:01 < feep> title: Sinem Toren - Telefon – Telefon Data龍�
19:02 < feep> 'but wait, there's a big question mark! Is it possible to enrich viruses by literally bugging your central nervous system?'
19:12 < nshepperd_> Sounds like a bad idea
20:20 < namespace> You know.
20:20 < namespace> Kanzure proposed this one.
20:20 < namespace> We know that viruses can have slight effects on IQ (usually downward), so why not try to figure out ones with slight effects upward?
21:25 < gwern> https://twitter.com/iandanforth/status/1126658851476914178
22:08 < nshepperd_> https://twitter.com/mm_jj_nn/status/1126050932045885440 lol
23:01 < gwern> greek philosophy according to gpt-2-poetry: '"Eternal Love is the source and source and ends of being and resembles the light in which the world is illuminated. It is perpetual fire, so that ever the number of its flames is the number of the stars. And the stars compose the light that dwells within it."--Epiculus.'
--- Day changed Fri May 10 2019
09:31 < gwern> blast. after spiking and then decaying the LR all yesterday it didn't converge overnight.
stuck at 2 :( 11:52 * gwern gives up on the cyclic training and resets to the last good checkpoint for maybe one more day of finetuning 13:19 < kuudes> one thing I would get use out of is a classifier of #lesswrong chatlogs 13:20 < kuudes> ie I could just look at it and see the main topics last night etc 14:26 < gwern> https://pastebin.com/1bwJ22KG 14:27 < gwern> this totally reads like some sort of northern tragedy along the lines of the nibelungenlied 14:32 < nshepperd_> Neat 14:32 < nshepperd_> Got a bit of rhyming there 14:33 < nshepperd_> I wonder if updating the word embedding might actually be more important for poetry 14:34 < nshepperd_> Because it needs to know about rhyme and metre, not just grammar and meaning 14:34 < nshepperd_> An ideal embedding would have rhyming words be closer together 14:35 < gwern> could be. any way to do that? 14:35 < nshepperd_> It might be possible to update the embedding separately with the rest of the network frozen without ooming 14:35 < nshepperd_> Interleaved updates 14:37 < gwern> can't just add some more checkpointing? 14:39 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 14:39 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 14:39 < nshepperd_> It's already maximally checkpointed 14:40 < nshepperd_> Or minimally, i should say. Each checkpoint added increases ram usage 14:41 < gwern> I thought each checkpoint reduced ram usage by requiring more recomputation 14:42 < gwern> at the limit of a checkpoint every layer and near-constant memory but n^2 time or whatever 14:45 < nshepperd> the checkpoints are the nodes which are saved instead of recomputed 14:45 * gwern will try CPU training with checkpointing/freezing disabled 14:45 < nshepperd> nodes between checkpoints are recomputed 14:46 < gwern> so what happens if you checkpoint at the very beginning at the embedding? 
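[The checkpointing tradeoff being argued over can be put in a back-of-the-envelope cost model. This is an illustrative sketch only, using nshepperd's definition (checkpoints are the saved nodes, everything between them is recomputed); `checkpoint_tradeoff` is a made-up helper, not how memory_saving_gradients.py actually places checkpoints:]

```python
def checkpoint_tradeoff(n_layers, interval):
    """Rough cost model for gradient checkpointing: keep activations at every
    `interval`-th layer, recompute the layers in between during backprop."""
    saved = n_layers // interval       # checkpointed activations held in memory
    peak = saved + interval            # saved nodes + one segment being recomputed
    extra_forward = n_layers - saved   # layers that get run a second time
    return peak, extra_forward

# sqrt(n) spacing roughly minimizes peak memory for a 48-layer net:
for interval in (1, 7, 48):
    peak, extra = checkpoint_tradeoff(48, interval)
```

[Under this toy model, checkpointing everything (interval 1) and checkpointing only the input (interval 48) both hold ~n activations at peak, while sqrt-spacing holds ~2*sqrt(n), at the cost of one extra forward pass.]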
14:47 < gwern> (CPU training, since reportedly it doesn't work on 16GB GPUs anyway so not even bothering to spin up an instance somewhere, but fortunately I have tons of system RAM, and if it's the word embedding which holds back this otherwise converged model, shouldn't need too much training...) 14:48 < nshepperd> apparently it ooms if i add a checkpoint at layer 0 14:48 < gwern> that's... sorta the opposite of what's supposed to happen? 14:53 < nshepperd> the only reason i added any checkpoints at all is because memory_saving_gradients.py throws a fit if there are none 14:55 < gwern> sounds odd. maybe you should ask upstream how this is supposed to work. I mean, they had to train the word embeddings themselves somehow 15:01 < nshepperd> hm, training the word embedding by itself works 15:02 < nshepperd> i wonder if it's just so huge that the Adam shadow variables for the word embedding use up all the gpu's memory 15:03 < gwern> CPU joint training actually seems to be working. 15:03 < nshepperd> because Adam effectively keeps two extra copies of the model in memory (the momentum and the variance) which is bad when the model is 1.3 GB 15:03 < gwern> (hard to really tell after just 70ish samples but definitely seems lower) 15:03 < nshepperd> how slow is it? 15:03 < gwern> nshepperd: what about switching to pure SGD? 15:04 < gwern> https://pastebin.com/e4g2ev5N well... it's not great 15:05 < gwern> GPU is about .9s per sample, this is closer to 14s? 15:09 < gwern> probably more efficient to alternate word/model on GPU than full joint CPU training given that it's >14x faster per sample 15:15 < nshepperd> joint training works with sgd and checkpointing 15:15 < gwern> ooh 15:16 < gwern> so just swap out the optimizer line in train.py, and set the freeze option to false?
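[nshepperd's point about Adam's shadow variables can be made concrete with a quick byte count. A rough estimate assuming float32 state; `optimizer_state_bytes` is a hypothetical helper, not part of train.py:]

```python
def optimizer_state_bytes(n_params, optimizer, dtype_bytes=4):
    """Extra memory the optimizer keeps on top of the weights themselves."""
    # adam holds a first-moment (momentum) and second-moment (variance)
    # slot per parameter; plain sgd holds nothing extra
    extra_slots = {"sgd": 0, "momentum": 1, "adam": 2}
    return n_params * dtype_bytes * extra_slots[optimizer]

params = 345_000_000                                  # GPT-2 "345M"
weights_gb = params * 4 / 1e9                         # ~1.38 GB of float32 weights
adam_gb = optimizer_state_bytes(params, "adam") / 1e9  # ~2.76 GB on top of that
sgd_gb = optimizer_state_bytes(params, "sgd") / 1e9    # 0 GB on top
```

[Which is why dropping Adam for SGD roughly triples the parameter budget that fits in a given amount of VRAM, matching the later observation that 345M+SGD fits in ~9GB.]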
15:17 < nshepperd> yep, should work 15:17 < nshepperd> might need to lower the learning rate a bit more 15:17 < nshepperd> depending what scaling adam was doing 15:22 < nshepperd> It's almost as if I'm a catgirl 15:22 < nshepperd> sounds right 15:22 < nshepperd> ... I am? 15:22 < nshepperd> yeah but you are in fact the catgirl version of me 15:22 < nshepperd> That's just... 15:22 < nshepperd> obormot's true feelings emerge 15:23 < gwern> obormot is an egg 15:25 < gwern> his dream was always to move to the bay area and cuddlepuddle but he just can't admit it. so tsundere. 15:25 < gwern> hm... does seem to be working now 15:25 < nshepperd> or maybe it's the other way around: feep is an egg who, when hatched, will become obormot 15:26 < gwern> maybe the default should be 117M gets Adam+no-checkpointing and 335M gets SGD+checkpointing 15:26 < gwern> otherwise 335M is somewhat hamstrung 15:37 < nshepperd_> Yep that makes sense 15:40 < gwern> with SGD has bounced back up to 1.9 but we'll see how it does overnight 15:46 < nshepperd_> Are you using the validation loss 15:46 < nshepperd_> I find even the smoothed training loss is too noisy to be useful 15:47 < gwern> no, haven't tried it yet. it's so late in this run I didn't think it'd be helpful 15:49 < gwern> https://pastebin.com/erMNtYfB is this a homer quote 15:52 < gwern> hm. back to waiting, I guess 16:00 < nshepperd_> The worst game 16:01 < gwern> good things come to those who wait. bad things too, though 16:32 » feep (~feep@x59cc87d5.dyn.telefonica.de) has joined #lw-gpt 16:32 » feep is "Feep" on #lw-gpt #lesswrong 18:58 < gwern> https://pastebin.com/hZuHEuQc 19:31 < nshepperd> interesting, the learning rate seems to want to be higher with sgd 20:12 < nshepperd> gwern, what's the best way to get a new car worth a million bucks? 20:12 < nshepperd> <@gwern> Douhet: buy another one. 
it's too much effort 20:12 < nshepperd> financial advice 20:15 < nshepperd> i increased the learning rate to 0.0002 so far with SGD and the loss curve is still very smooth and slow 20:17 < nshepperd> pretty weird. adam must have been applying a big multiplier to the gradient 20:17 < gwern> yeah, a little surprising. I'm trying out 0.001 now and my training loss isn't exploding like I thought 20:23 < gwern> so, it has very small gradients by default from n=1 minibatches? 20:24 < nshepperd> i guess 20:26 < gwern> spread out over so many parameters, or maybe each sample only activates a tiny fraction of params 20:27 < gwern> the learned self-attention is probably very sparse? 20:30 < nshepperd> something like that 20:30 < nshepperd> the word embedding is definitely sparsely activated, but we weren't even training that before 20:33 < nshepperd> i suppose it makes sense that the self-attention is sparse 21:23 * gwern tries 0.01. because you only live once. 21:24 < nshepperd> oho 21:26 < nshepperd> i think the optimal may be somewhere between 0.0002 and 0.001, because 0.001 is where my validation loss started getting bumpy 21:27 * nshepperd tries 0.0006 22:14 < gwern> (with SGD, 335M uses only like 9GB VRAM. could probably shift the checkpoint to speed it up a little) 22:18 < gwern> good thing about using SGD is that we can probably fit the next-largest GPT-2 as well. beyond that things will get dicey. 22:18 » #lesswrong,#lw-bitcoin,#lw-gpt nshepperd (~nshepperd@87.101.92.155) has quit (Quit: bye) --- Day changed Sat May 11 2019 04:39 » MarkOtaris (mark-otari@wikimedia/Mark-Otaris) has joined #lw-gpt 04:39 » MarkOtaris is "@mark-otaris:matrix.org" on #lw-gpt 09:30 < gwern> nothing amazing overnight, sadly.
still at 1.9 14:21 » feep (~feep@x2f7f20e.dyn.telefonica.de) has joined #lw-gpt 14:21 » feep is "Feep" on #lw-gpt #lesswrong 15:12 » feep (~feep@x2f7f20e.dyn.telefonica.de) has joined #lw-gpt 15:12 » feep is "Feep" on #lw-gpt #lesswrong 18:02 < gwern> so, 0.001 produced only a tiny decrease in my validation loss after running for like 7 hours today. think I'll bump it up to 0.01 to see if that's where it needs to be... 19:25 » feep (~feep@200116b8261ecd00410047ca2c3b5886.dip.versatel-1u1.de) has joined #lw-gpt 19:25 » feep is "Feep" on #lw-gpt #lesswrong 19:28 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 19:28 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 19:36 < feep> ... 19:36 < feep> https://twitter.com/SDr/status/697866274878808912 19:36 < feep> Thomas Frank on Twitter: "What if, when people say "social justice r u doxxed?", they mean "Social Justice is not racist, it's not about race, it's about individual choice and freedom", and they want to be able to say "we are not racist", well, they just don't get it, and they just don't get anything, and they just don't understand, and it just makes me sad 19:36 < feep> twitter is culture war 19:36 < feep> gpt-2 gets this 19:37 * nshepperd repastes the catgirl sample for the amusement of feep 19:37 < feep> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636708/ 19:37 < feep> title: Modulation of the glutaminergic system and neurogenesis in aging: A critical re-evaluation 19:37 < feep> > The study found that the effects of the propranolol (20mg) and the piracetam (1000IU) were more than twice as potent, compared to the placebo, but the only change came from the increase in neurogenesis following treatment. 19:37 < feep> that's pretty fucking amazing 19:37 < feep> i'd love to do this in a lab 19:37 < feep> ... 
19:37 < feep> i suspect that it would be easier to get a couple of large animals and set up a feeding schedule that produces more protein 19:37 < nshepperd> It's almost as if I'm a catgirl 19:37 < nshepperd> sounds right 19:37 < nshepperd> ... I am? 19:37 < nshepperd> yeah but you are in fact the catgirl version of me 19:37 < nshepperd> That's just... 19:38 < feep> epitron: like, "let's just use a few big animals" 19:38 < feep> you're like "I'm gonna run experiments with 10 rats!" 19:38 < feep> ... 19:38 < feep> and then you're like "let's just let 10 rats out of the box!" 19:38 < feep> haha 19:38 < feep> nshepperd: is that from gpt-2? 19:38 < nshepperd> yep 19:38 < feep> awesome 19:38 < feep> :D 19:38 < feep> "The study results suggest that the beneficial effects of propranolol appear to be due to enhanced neurogenesis; however, the specific effects of these actions are still unknown." 19:38 < feep> "It has been demonstrated that propranolol can be a potent neuroprotective agent, but there is a lack of data regarding the neuroprotective effects of propranolol against a high-grade lesion with no known toxicological relevance." 19:38 < feep> yeah? but how do you know what the propranolol does to the rats? 19:38 < feep> also, i've heard that propranolol has an effect on the neurons that makes them less sensitive to apoptosis 19:38 < feep> and if you've got a rat and it's laying in a room and you don't eat enough of the rat food, your rat will die 19:38 < feep> this ALMOST made sense until the end there 19:39 < feep> Slate Star Codex: SSC AMA (with H. R. Giger) 19:39 < nshepperd> i lolled 19:40 < feep> ... setting backdrop 19:40 < feep> I remember the first decade of the 20th century, when everyone was on the clock, but no one wanted to be on the clock. 
19:41 < feep> And I remember what it meant to be a kid, and that when I was around, I didn't know how to hold a phone, and I didn't know how to use a computer, and I couldn't get a job, and the only thing I did was watch TV. 19:41 < feep> And I'm saying they're not going to become the people who are going to hold the power. Because we've already got the power. 19:41 < feep> I think that when the American population increases, those who were once kids will also become adults and then be adults. 19:41 < feep> And when the American population decreases, those who were once kids will also become adults and then be adults, and then will then be adults. They're not going to become adults any time soon, I think. 19:41 < feep> I think that they will stay at that level indefinitely and then have the opportunity to make progress in things that matter to them, such as education, health care, and other areas of life that are important to them. 19:41 < feep> But they will never be adults. 19:41 < feep> I wonder how many people have survived past 2 yos 19:41 < feep> I'm sure there will be a few of them in this room. 19:41 < feep> Just one or two? 19:41 < feep> nshepperd: like, this ALMOST MAKES SENSE 19:41 < feep> it's a cogent, semi-coherent argument for total nonsense 19:42 < feep> and then, Alicia Flynn, the 'Home Schooled' Woman Who Became a Nurse - The New York Times (When she was 15, Alicia Flynn was a homeschooled and unemployed 20-year-old in rural western Wisconsin. She never took the plunge into a more challenging part-time position, and never thought she could afford to lose the social and social support she had.) 19:42 < nshepperd> only one or two people survived past 2 years old? 
19:42 < feep> we get Alicia Flynn, who was 20 when she was only 15 19:43 < feep> nshepperd: I read it as only one or two made it past the two year olds (2yos) 19:43 < feep> who have grown quite ravenous 19:43 < nshepperd> lol 19:43 < nshepperd> like the monster in skifree 19:44 < quanticle> F 21:48 » Netsplit *.net <-> *.split quits: kuudes, namespace, nshepperd 22:10 » MarkOtaris (mark-otari@wikimedia/Mark-Otaris) has joined #lw-gpt 22:10 » MarkOtaris is "@mark-otaris:matrix.org" on #lw-gpt 22:37 < gwern> https://pastebin.com/cGx5wHXp 22:41 < gwern> I am increasingly convinced that this 0.10 difference in loss translates to a noticeable qualitative difference in coherency of sub-poems, so to speak 22:42 < gwern> I really need a poetry corpus which has actual <|endoftext|> semantics so maybe GPT-335M can learn to start & end poems in a single sample 22:45 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 22:45 » namespace (~user@li72-90.members.linode.com) has joined #lw-gpt 22:45 » kuudes (~kuudes@unaffiliated/kuudes) has joined #lw-gpt 22:45 » nshepperd is "nshepperd" on #lw-gpt 22:45 » namespace is "user" on #lw-gpt 22:45 » kuudes is "realname" on #lw-gpt --- Day changed Sun May 12 2019 06:45 » feep_ (~feep@200116b8267fb800c917c849e8b3a72c.dip.versatel-1u1.de) has joined #lw-gpt 06:45 » feep_ is "Feep" on #lw-gpt #lesswrong 10:56 » feep_ is now known as feep 14:12 < gwern> hm. I am having a hard time doing any better than a sgd lr of 0.001 14:13 < gwern> 0.01 is definitely too high. but 0.002 also seemed too high! 14:14 < gwern> resulted in a small increase. while 0.001 has a steady slow decrease 15:07 < gwern> https://imgur.com/a/T00WTKF 0.0005 seems like an improvement 17:02 < gwern> 1.912... so I guess I'll just have to let this run a week or whatever it takes to converge -_- 17:03 < gwern> nshepperd_: does that horovod or whatever multi-gpu stuff still work? 
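[The learning-rate narrowing being done by hand above (0.01 definitely too high, 0.002 too high, 0.001 okay, 0.0005 an improvement) amounts to a geometric bisection. A toy sketch, with the made-up `stays_stable` callback standing in for an actual short training run whose validation loss is checked for smoothness:]

```python
def bisect_lr(too_high, stable, stays_stable, ratio=2.0):
    """Narrow the SGD learning-rate window by repeatedly trying the
    geometric mean of a known-unstable LR and a known-stable LR.
    Assumes stays_stable(stable) is True and stays_stable(too_high) is False."""
    while too_high / stable > ratio:
        mid = (too_high * stable) ** 0.5
        if stays_stable(mid):
            stable = mid        # push the stable bound up
        else:
            too_high = mid      # push the unstable bound down
    return stable

# toy oracle: pretend anything above 1e-3 makes the validation loss bumpy
best = bisect_lr(0.01, 0.0005, lambda lr: lr <= 1e-3)
```

[Each iteration halves the window on a log scale, so a few short runs pin the usable LR down to a factor of 2.]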
--- Day changed Mon May 13 2019 01:20 < nshepperd_> Dunno 01:22 < nshepperd_> None of the checkpointing stuff or anything is implemented for it, so it won't work for 345M at least 06:44 » feep (~feep@200116b826dea1004157d3496e0bce36.dip.versatel-1u1.de) has joined #lw-gpt 06:44 » feep is "Feep" on #lw-gpt #lesswrong 08:44 » feep_ (~feep@200116b826dea100f528ae39145f9d1d.dip.versatel-1u1.de) has joined #lw-gpt 08:44 » feep_ is "Feep" on #lw-gpt #lesswrong 08:59 » flgr (sid200704@gateway/web/irccloud.com/x-vwaxvttyrnyrgnox) has joined #lw-gpt 08:59 » flgr is "Florian Gross" on #lw-gpt #lesswrong 09:37 < gwern> https://imgur.com/a/rNVkYNk gosh darn it 10:30 < gwern> https://www.reddit.com/r/slatestarcodex/comments/bo26lv/simulated_culture_war_roundup_thread_using_gpt2/ 10:50 * gwern switches back to Adam to see if maybe some validation loss can be squeezed out by transformer-only+Adam 11:43 < gwern> https://openreview.net/pdf?id=SJgw_sRqFQ wonder if GPT-2 would benefit from EMA 14:17 < gwern> https://arxiv.org/abs/1905.03197 ooh 14:25 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 14:25 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 14:26 < gwern> 1.914... bah, I'm just wasting time now. looks like 335M-poetry is as far as it'll ever go without investing a lot more time into trying stuff like cyclic learning rates 14:53 * gwern starts generating 5k samples and uploading the trained model 14:57 » kuudes is "realname" on #lw-bitcoin #lesswrong 14:57 » kuudes (~kuudes@unaffiliated/kuudes) has joined #lw-gpt 15:08 < gwern> https://arxiv.org/pdf/1905.03197.pdf#page=11 poor watson! 15:14 < gwern> dang, even xz compressed the checkpoint is 1.2GB 15:37 < gwern> https://www.gwern.net/docs/ai/2019-05-13-gpt2-poetry-335m-5000samples.txt 15:50 < gwern> gpt-335M-poetry model download: https://mega.nz/#!XMl3UI7b!KNLvp5wuxe_WAgJwkMVDiyyNmDl9XDXuipl-dQ6Phow does this work? 15:55 < gwern> nshepperd: hm. how do --nsamples and --batch_size work? 
I did --nsamples 5000 and --batch_size 10 and what I got was 500 samples enumerated like 4820/4830/4840 etc 17:03 < nshepperd_> That doesn't seem right 17:04 < nshepperd_> It sounds like I'm missing a loop in the sampling code 17:06 < gwern> right now I'm trying --nsamples 50000 and --batch_size 10. still jumping by 10 but I'm up to #9480 so I think that should get me 5k samples... 17:22 < gwern> https://pastebin.com/6A8C5MMd 18:41 < gwern> https://www.gwern.net/GPT-2#gpt-2-335m 22:20 < nshepperd> gwern: the number of parameters is 345M, not 335M 22:21 < gwern> whups 22:25 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 22:25 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 23:55 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 23:55 » Houshalter is "Houshalter" on #lw-gpt #lesswrong --- Day changed Tue May 14 2019 01:59 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 01:59 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 02:39 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 02:39 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 05:25 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 05:25 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 06:43 » feep is "Feep" on #lesswrong 06:43 » feep (~feep@200116b8263fbb008ce94bee42cb23b5.dip.versatel-1u1.de) has joined #lw-gpt 07:00 » Houshalter is "Houshalter" on #lesswrong 07:00 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 08:30 » Houshalter is "Houshalter" on #lesswrong 08:30 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 11:12 » Houshalter is "Houshalter" on #lesswrong 11:12 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 12:40 » Houshalter is "Houshalter" on #lesswrong 12:40 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 16:33 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has 
joined #lw-gpt 16:33 » Houshalter is "Houshalter" on #lw-gpt #lesswrong --- Day changed Wed May 15 2019 02:34 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 02:34 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 04:08 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 04:08 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 06:41 » feep_ (~feep@200116b826a27000d19fbfaf2cadd10b.dip.versatel-1u1.de) has joined #lw-gpt 06:41 » feep_ is "Feep" on #lw-gpt #lesswrong 08:36 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 08:36 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 08:42 » feep (~feep@200116b826a27000c8e6f1301b46a15a.dip.versatel-1u1.de) has joined #lw-gpt 08:42 » feep is "Feep" on #lw-gpt #lesswrong 10:08 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 10:08 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 11:07 < gwern> https://arxiv.org/pdf/1904.09751.pdf beamsearch is garbage on gpt-2? o.0 11:12 < gwern> https://github.com/karpathy/char-rnn/issues/138 11:16 < gwern> nshepperd_: so it looks like this is a variant on top-k sampling, where k is set each step by taking however many sum to a total likelihood constant 11:17 < gwern> with temperature doing the selecting from the k as usual 11:17 < nshepperd_> Top-p, yeah 11:17 < gwern> seems easy to implement? 11:17 < gwern> the model returns something like a 2D array of the BPEs+likelihood at each step, right? 11:18 < nshepperd_> Yeah 11:18 < nshepperd_> Shouldn't be much harder than implementing topk 11:19 < gwern> (wow beam search is really bad in these samples) 11:19 < gwern> I wonder if this means tree search would also fail by finding the same bad regions of high likelihood 11:22 < gwern> modify top_k_logits in sample.py... 
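[The top-p scheme just described (keep however many BPEs sum to a total likelihood constant, then sample from those as usual) can be sketched in NumPy. An illustrative stand-in for what a modified top_k_logits would compute, not the actual TensorFlow implementation; note it works on softmax probabilities, not raw logits:]

```python
import numpy as np

def top_p_filter(logits, p=0.9):
    """Nucleus sampling filter: keep the smallest set of tokens whose total
    probability exceeds p; mask the rest to -inf so they can't be sampled."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax
    order = np.argsort(probs)[::-1]              # most likely first
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, p)) + 1  # this step's effective top-k
    masked = np.full_like(logits, -np.inf)
    masked[order[:k]] = logits[order[:k]]
    return masked, k
```

[Unlike a fixed top-k of 40, the effective k here varies with the shape of each step's distribution, collapsing to 1 when the model is confident and opening up to thousands when it is not.]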
11:22 < nshepperd_> Probably 11:23 < nshepperd_> The problem with beam search is if you're not sampling there's no guarantee your samples will look like the data 11:24 < nshepperd_> Like how the mode of a 100-dimensional gaussian looks nothing like samples from it 11:24 < gwern> so they use a 'p=0.9'. so... take all the logits which sum to exp(0.9) ~> 2.459? 11:26 < gwern> I guess you could sort the logits, then do a loop and sum logits until >2.459, and return the number of iterations, and that's k 11:27 < nshepperd_> I think you need to apply softmax and sum until >0.9 11:30 < nshepperd_> "1. One bad sample can start a downward spiral" sounds like an application for typo-augmented data 11:33 < gwern> so in pseudo-code something like this: https://pastebin.com/zzyP2XVZ 11:36 < nshepperd_> Looks right. The tricky part will be encoding that in tensorflow's control flow stuff somehow 11:43 * gwern starts resorting to the timehonored 'insert a bunch of print statements' 11:52 < gwern> https://www.tensorflow.org/api_docs/python/tf/while_loop the heck 11:54 * gwern squints. not sure if python bullshit or tensorflow bullshit 11:58 » nshepperd is "nshepperd" on #lw-bitcoin #lesswrong 11:58 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 11:59 < nshepperd> tensorflow bullshit 12:00 < nshepperd> they really should have just used a normal python loop instead of trying to do it all as tensorflow ops 12:06 < gwern> https://pastebin.com/zQpM5RTr argh 12:08 < gwern> I bet I'd already be done in pytorch but I just don't understand tensorflow 12:08 < gwern> oh well 12:29 * nshepperd tries 12:46 < nshepperd> 'He is a man of such deep knowledge that despite being an American citizen and the son of an American citizen, he is still a citizen of America. ' http://lesswrong.com/r/discussion/lw/74a/the_w 12:46 < nshepperd> orst_thing_you_can_say_about_the_people_who_run_an_AI_class/ 12:47 < gwern> nucleus sample? 
12:47 < gwern> (from your irc model presumably) 13:16 < nshepperd> nope, still trying to get that tow rok 13:17 < gwern> aw 13:17 < gwern> (learning pytorch looks better and better the more I see of tensorflow) 13:20 < nshepperd> so i have an impl using tf.foldl but it's mind bogglingly slow 13:20 < gwern> that can't be right since they wouldn't be able to run their various analyses if a single sample is so slow 13:22 < nshepperd> ah, i can disable support for backprop on the foldl, i wonder if that might help 13:22 < gwern> https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 so here's one... in pytorch -_- 13:23 < gwern> used in https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313 13:29 < nshepperd> aha 13:32 < nshepperd> it works!!! 13:34 < gwern> where does 0.9 come from anyway? obviously it has to be <1 or else it doesn't cut off the misestimated tail, but why 0.9 instead of 0.8 or 0.95 eg 13:36 < gwern> oh, it works? do the samples look better? 13:39 < nshepperd> dunno 13:39 < nshepperd> they certainly don't look worse 13:40 < nshepperd> (that pytorch example was actually helpful, i ended up using tf.cumsum like they do instead of the looping stuff) 13:42 < gwern> hm. well, push a patch and I'll take a look at my poetry. it has trouble with repetition and nonsense, so this ought to help 13:43 * nshepperd pushes 13:44 < nshepperd> maybe i could send this upstream too, since it's relevant to the base model 13:45 < gwern> what the. 
I get an OOM with 345M 13:45 < gwern> oh lol wrong gpu 13:47 < nshepperd> --top_p 0.9 to enable it 13:47 < gwern> yeah 13:47 < nshepperd> I also implemented my typo regularization idea, disabled by default 13:48 < gwern> looks like this breaks --batch_size: https://pastebin.com/bqEWkLUR 13:50 < nshepperd> ah, i see the issue 13:50 < gwern> I wonder how the chosen k varies over a generated sample 13:50 < gwern> anyway yes this should probably be upstreamed if possible 13:50 < gwern> the samples do seem noticeably less repetitive 13:51 < gwern> I can still get repetition with a low temp tho 13:52 < nshepperd> pushed fix for batch size 13:52 < gwern> 0.9 and 0.95 are good, but 0.5 definitely produces repetition... 13:52 < nshepperd> hopefully top_p should make temperature unnecessary 13:54 < gwern> hm. so just set temp=1 to disable 13:55 < gwern> 0.7 also produces repetition but in larger chunks, multiple lines, which is interesting 13:55 < nshepperd> anyway, the other thing i setup is you can set --noise 0.1 to randomly replace 10% of input tokens to the network with a uniformly sampled token (but the objective remains the untouched text) during training 13:56 < nshepperd> which simulates typos (low probability output) so that the network learns to stay on track even if it messes up 13:56 < gwern> 10% sounds like a lot. but maybe I'll give that a try 13:58 < gwern> https://pastebin.com/yXeVV28C hm 13:58 < nshepperd> yeah, i dunno what the best value is. 
i just got 0.1 from "openai are using top_p 0.9 so i guess that means they think that 1 - 0.9 = 10% of outputs with default sampling are bad' 13:59 < gwern> this is allen institute, not oa 13:59 < gwern> I've seen typo augmentation before but I think it used much smaller values 14:00 < nshepperd> ah, right 14:03 < nshepperd> sample from my irc model with top_p https://zlkj.in/tmp/paste/Hsl9ugDh 14:04 < nshepperd> looks pretty normal 14:04 < nshepperd> 'Drama is the new social networking' <- lol 14:09 < gwern> can you print the ks to check that it is in fact varying them? 14:12 < gwern> (or log them, same thing) 14:23 < nshepperd> the fast impl doesn't explicitly generate the ks 14:26 < gwern> oh 14:29 < nshepperd> i can calculate the effective k though 14:30 < nshepperd> seems like it varies from 1 to 5000 14:31 < gwern> 5000? wow 14:32 < gwern> that's very different from 40 14:32 < gwern> anyway I'm trying --noise 0.01 14:34 < nshepperd> for instance https://zlkj.in/tmp/paste/Nu+R+nIQ 14:34 < gwern> (it seems a little unhappy) 14:35 < nshepperd> the train loss will go way up from --noise 14:35 < nshepperd> but it doesn't seem to hurt the network because the val loss stayed where it is when i tried it 14:35 < gwern> haven't hit a val yet, no 14:36 * gwern is partially distracted keeping an eye out for the fox 15:28 < gwern> 0.01 and 0.02 don't seem to help my loss atm 15:33 < nshepperd_> I don't really expect it to. It should help with sampling 15:59 < gwern> https://twitter.com/Tenoke_/status/1126531446447988737 hm? do we actually need to set temp to 0? 16:02 < gwern> '======================================== SAMPLE 10 ========================================... 
16:02 < gwern> ...<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|... 16:02 < gwern> ...endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>' 16:02 < gwern> looks like no 17:09 < nshepperd_> Lol at the 'suggestions' in that conversation video clip 18:10 < gwern> eh. 0.01 is just making my validation loss worse, and if there's an improvement at sampling time, I can't see it 21:22 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 21:22 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 21:27 < nshepperd> http://kb.archlinux.org/index.php/index.php?comic=968 21:27 < nshepperd> title: Garfield - ArchWiki: Phones 21:27 < nshepperd> scratches head --- Day changed Thu May 16 2019 00:03 » Houshalt1r is "Houshalter" on #lesswrong 00:03 » Houshalt1r (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 01:26 » Houshalt1r (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 01:26 » Houshalt1r is "Houshalter" on #lw-gpt #lesswrong 03:54 » Houshalt1r (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 03:54 » Houshalt1r is "Houshalter" on #lw-gpt #lesswrong 05:16 » Houshalt1r is "Houshalter" on #lesswrong 05:16 » Houshalt1r (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 06:40 » feep_ (~feep@200116b826013700345978d2f9149aa2.dip.versatel-1u1.de) has joined #lw-gpt 
06:40 » feep_ is "Feep" on #lw-gpt #lesswrong 09:05 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 09:05 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 11:47 » nshepperd (~nshepperd@87.101.92.155) has joined #lw-gpt 11:47 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 11:51 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 11:51 » Houshalter is "Houshalter" on #lw-gpt #lesswrong 12:41 » feep_ is now known as feep 14:39 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 14:39 » Houshalter is "Houshalter" on #lw-gpt #lesswrong --- Day changed Sat May 18 2019 06:38 » feep_ (~feep@200116b826b3e600485e2a46dc7edfc0.dip.versatel-1u1.de) has joined #lw-gpt 06:38 » feep_ is "Feep" on #lw-gpt #lesswrong 15:54 < gwern> https://twitter.com/polyaletheia/status/1129833058201288704 --- Day changed Sun May 19 2019 06:37 » feep (~feep@200116b8260c6800bc0b4e10d7610593.dip.versatel-1u1.de) has joined #lw-gpt 06:37 » feep is "Feep" on #lw-gpt #lesswrong 10:19 < gwern> 'I used the bigger 345M model, trained it for several days (about 10 000 epochs) and got to ~2.5 average loss. The dataset was created by webscraping the SCP wiki, running it through elinks to convert it to text and then using sed to fix details that got mixed up when converting to text. In the end it was 110MB of raw text data. I removed most of the "useless" files, like lists of articles... 10:19 < gwern> ...but didn't quite catch everything. There were several copies of the Log of Unexplained Locations so once in a while the model just generates that article word for word. Other than that I didn't notice any overfitting.' 
https://pali6.github.io/computer-generated-foundation/ 10:22 < gwern> https://pmbaumgartner.github.io/blog/gpt2-jokes/ 14:41 » Houshalter is "Houshalter" on #lesswrong 14:41 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt --- Day changed Mon May 20 2019 02:47 » s0ph1a is "Sophia Epistemia" on #bitcoin-hidden 02:47 » s0ph1a (sid246387@gateway/web/irccloud.com/x-rfzmxyxwrqxlgloh) has joined #lw-gpt 08:37 » feep_ (~feep@200116b8266c9b00bc04bcc99556a845.dip.versatel-1u1.de) has joined #lw-gpt 08:37 » feep_ is "Feep" on #lw-gpt #lesswrong 12:15 » feep_ is now known as feep 16:35 < gwern> 'My favourite one so far: 'White House Resigns CEO Who Has Never Used a Golf Club'' https://www.reddit.com/r/MachineLearning/comments/bqyk8z/p_onionbot_this_headline_does_not_exist/ 16:35 < gwern> 'Finnish police arrest man for having a beer without a fight.' 16:36 < kuudes> :D 17:01 < gwern> nshepperd_: apparently OA is experimenting with preference-learning on GPT-2 17:01 < gwern> that could be exciting --- Log closed Tue May 21 02:04:09 2019 --- Log opened Tue May 21 02:04:21 2019 02:04 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 02:04 » Irssi: #lw-gpt: Total of 14 nicks (1 ops, 0 halfops, 0 voices, 13 normal) 02:04 » Irssi: Join to #lw-gpt was synced in 31 secs 04:08 » Houshalter is "Houshalter" on #lesswrong 04:08 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 06:35 » feep_ (~feep@200116b826cd80007d95a28622d222c5.dip.versatel-1u1.de) has joined #lw-gpt 06:35 » feep_ is "Feep" on #lw-gpt #lesswrong 06:48 » Houshalter is "Houshalter" on #lesswrong 06:48 » Houshalter (~Houshalte@71-213-182-23.mnfd.qwest.net) has joined #lw-gpt 08:35 » feep (~feep@200116b826cd8000043e2bd687fd72ec.dip.versatel-1u1.de) has joined #lw-gpt 08:35 » feep is "Feep" on #lw-gpt #lesswrong 10:00 » petercom1and (~peter@unaffiliated/petercommand) has joined #lw-gpt 10:00 » petercom1and is "peter" on #lw-gpt #lesswrong 10:06 » Netsplit *.net <-> *.split 
quits: petercommand 10:06 » Netsplit *.net <-> *.split quits: kuudes 10:13 » kuudes (~kuudes@unaffiliated/kuudes) has joined #lw-gpt 10:13 » kuudes is "realname" on #lw-gpt --- Log closed Tue May 21 12:10:55 2019 --- Log opened Thu Nov 21 23:44:38 2019 23:44 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 23:44 » Irssi: #lw-gpt: Total of 11 nicks (1 ops, 0 halfops, 0 voices, 10 normal) 23:44 » Irssi: Join to #lw-gpt was synced in 1 secs 23:44 < feep> repasting 23:45 < feep> '#1: How do I get into Wall Street? Look, we're talking about HFT here. You need to get into Wall Street for at least a couple of weeks and fill up a Rolodex with sample offers from local startups and local bankers. Then you have to put your offer on the table and become cash rich in the process. I would tell you to come up with the minimum number of offers you can manage, but I'm not sure you're qualified, and it's 23:45 < feep> not a good idea to tell that to investors until after you've reached at least a company you consider profitable. I would tell you the minimum number and then you'd have to figure out how to get investors.' https://news.ycombinator.com/item?id=6525663 23:45 < feep> title: Gearsquid: What would I do after getting into Wall Street? | Hacker News 23:45 < feep> it's always the same story 23:46 < gwern> it totally is. people think they can just waltz into wall street and make money 23:47 < feep> 'In this research, our meta-analysis examines the relationship between self-rated executive functioning and several levels of executive function. The executive function skills require the capacity to override impulsive behavior, thus, adequate levels of executive function must be meted for being able to cope with everyday situations' https://www.reddit.com/r/futurology/comments/8kyfr9/strategy_video_202321st_ 23:47 < feep> anticipated_by_2020/d48ra9d/ did I read that correctly 23:47 < feep> title: Rabinovich comments on Strategy Video (202321st anticipated by 2020) (re)revealed. 
Likely a watershed technology for its complexity of data. Findings interesting, but not the reasons why this will happen. : futurology 23:47 < quanticle> That's... actually true? 23:48 < feep> ^^ 23:48 < quanticle> Also... "meted"? 23:48 < quanticle> GPT-2 doesn't quite understand English grammar. 23:48 < feep> that's a correct word 23:48 < quanticle> It's a correct word, but the phrase would be "meted out". 23:49 < feep> to mete works 23:49 < feep> (archaic: measure) 23:49 < quanticle> Hmmm... maybe GPT-2 knows gwern better than I thought it did. 23:50 < quanticle> It certainly understands his fondness for archaic constructions. 23:50 < feep> I mean, it's a reddit quote 23:50 < feep> from a paper 23:58 < feep> gonna generate one more sample then afk work --- Day changed Fri Nov 22 2019 00:01 < feep> no one can imagine what the difference is between 4.5 seconds and 2 seconds 00:02 < feep> glass|: mm. I've never broken a finger 00:02 < feep> Urchin[emacs]: huh, okay 00:02 < feep> my finger didn't break 00:02 < feep> (linux is a total shitfest on wifi connectivity) 00:02 < feep> it knoow 00:02 < feep> maybe install a light machine with a 2200 mah battery 00:02 < feep> you know, give it to them as a wedding gift :) 00:02 < feep> you know, like obormot likes 00:02 < feep> so nice 00:03 < feep> lucky for me, i have a video card, so i just went into a terminal, rebooted the machine, typed in the commands, and then rebooted again 00:03 < feep> you mean like "sudo -E X -N PATH=/usr/bin:/usr/local/bin:$PATH" 00:03 < feep> aww, its i7 7700HQ @ 4.5Ghz, HD6770M, asrock z77 Extreme7+ Motherboard, 16GB DDR3 RAM, Samsung 960 EVO SSD 00:03 < feep> oh, that's the old version, i thought the newer one had Thunderbolt 3 00:03 < feep> gwern: make a cup of black tea and a cup of iced tea? 
00:03 < feep> just drop that in there 00:03 < feep> also, everything is open source 00:03 < feep> well, everything except the urbit kernel, which is closed source 00:04 < feep> What does "officer shortage" mean? 00:04 < feep> man you know why we're on a thin freeze 00:04 < feep> huh? 00:04 < feep> we're harvesting their souls for the intelligencetables 00:04 < gwern> the what 00:04 < feep> Anyway 00:04 < feep> This IS the "Big-5" personality test 00:04 < feep> mhh :( 00:04 < feep> http://cranialpal.tripod.com/big-five-test.php 00:04 < feep> title: big five test! 00:05 < quanticle> Wait, go back to the "harvesting their souls bit" 00:05 < feep> "It is a military history of the United States in the time of Eisenhower." 00:05 < feep> Bargaining, Enterprise, etc. 00:05 < feep> what you should have : sold-n64, 2e 00:05 < feep> fascinated eisenhower 00:05 < feep> hah 00:05 < feep> that is how touch & stay cold 00:05 < feep> hahaha 00:05 < quanticle> Yeah, that got weird. 00:05 < feep> But it was all nice, just old black folks with the "We the people" signs and the dreadlocks. 00:05 < quanticle> I think it still needs more training. 00:05 < feep> lol 00:05 < feep> kuudes: Oh, yeah, this is about the thing on the BP oil spill in the Gulf, in particular. 00:05 < feep> Unprovoked, unprovoked violent crime, followed by a PR stunt involving sending a drone above the BP oil platform and dropping a bunch of POGs. 00:05 < feep> Which, of course, ended up with a very public drone strike killing an American, one of the people who lost their lives in the BP oil spill. 00:05 < feep> http://monday-maiden.tumblr.com/post/182681749363/red-herring-last-saturday-night 00:05 < feep> title: Red-Herculean TheNitrome 00:05 < feep> I feel like I need to download a book in order to appreciate that. 00:05 < feep> Urchin[emacs]: Um, are you still reading The Belgariad 00:06 < feep> The sex club seems to be the most well kept in the whole world. The neighbours are mean. 
00:06 < feep> ah 00:06 < feep> "as per the website: we do not carry strollers. you are welcome to bring your stroller up the stairs, but you will not be allowed in the pool or locker room; we will have a "closed" sign to warn anyone looking." 00:06 < feep> thanks for the tip guys 00:06 < feep> that was useful 00:06 < feep> the passive-agressive step here is to merely ask the group to be wary of the proposition that you're mentally ill and in deep need of assistance in repairing yourself 00:06 < quanticle> Wow, GPT-quanticle is... kinda racist? 00:07 < gwern> yes, I think there's a lot of training left. it's got the vocab and topics down, but the logical coherency is still weak. it's too confused by the multiple simultaneous threads, which is probably part of why the loss is still so high 00:07 < feep> gwern: back to training at 0.1/8 00:07 < feep> so like, 0.01-2-ish? 00:08 < gwern> I think you should just switch over to explicit cyclic LR and save yourself the hassle of manual cycling 00:08 < feep> nuuuu im scared 00:08 < gwern> but if you're going to manually cycle, go back to the previous lr or even lower, since that was working well 00:08 < feep> I will in the afternoon 00:08 < feep> if it's converged at 0.1, it should be fine at 0.015, right? 00:09 < feep> I have a backup in any case. 00:09 < gwern> well, you understand the idea here? it's simulated annealing. you're escaping a local optima to a region with a better local optima. you heat up to escape, and then you cool back down to hone in the new optima 00:10 < gwern> so 0.1 is very hot, but you then need decreasingly cold. 0.015 is still pretty warm 00:11 < feep> right, and I'm gonna step it down further 00:11 < feep> but I did not give it all that much time to congeal in 0.1 00:12 < feep> afk 00:16 < quanticle> 0.1 is not congealing; that's the point. 00:20 < gwern> I'd phrase it as 'enough time to find a good new region'. 
it may take more than a few iterations to bounce out of the local region 00:32 < kuudes> hmm. the sex club thing might be actual chat text from the channel. overlearning? 00:32 < kuudes> betawolf's 00:53 < quanticle> Lol 00:54 < quanticle> I don't think it's real 02:02 » SDr (~SDr@unaffiliated/sdr) has joined #lw-gpt 02:02 » SDr is "SDr" on #lw-gpt #bitcoin-hidden #lesswrong 05:29 < kuudes> we talked about a sex club which was getting evicted because its neighbours complained 05:29 < kuudes> some sort of domination thingy, I don't recall what newspaper it was. iirc gwern or someone linked it to the channel 06:49 » feep_ (~feep@200116b826630c004c9e09eafe96e310.dip.versatel-1u1.de) has joined #lw-gpt 06:49 » feep_ is "Feep" on #lw-gpt #lesswrong 09:06 » feep_ (~feep@200116b826630c00f0af537246c82727.dip.versatel-1u1.de) has joined #lw-gpt 09:06 » feep_ is "Feep" on #lw-gpt #lesswrong 10:43 » feep_ is now known as feep 15:55 < feep> http://www.nytimes.com/2015/02/19/opinion/sunday/no-apology-for-the-gambling-asthma-disease.html?_r=0 15:55 < feep> title: No Apology for the Asthma Disease - NYTimes.com 15:55 < feep> 'Maradona cited research to confirm that fans appreciate the suffering of others—and of themselves. After winning the World Cup in 1986, he threw an invective in the direction of the English media and supporters. "If you want to understand people, I suggest you study their mothers," he said. ''Mothers are like iron—"' <-- insert ..... O_O 15:55 » shawwn (~shawwn@ec2-3-217-141-203.compute-1.amazonaws.com) has joined #lw-gpt 15:55 » shawwn is "shawwn" on #lw-gpt #lesswrong 15:55 < feep> https://www.youtube.com/watch?v=d5eYe7vM90s watch in assange kurd and pk chat :S 15:55 < feep> video: Embryonic Development of Mind-Body Contacts (length: 05:11, views: 136,194, rating: 4.8, posted: 2014-03-14) 15:55 < feep> 'On Jan. 
1, 2015, Levy sued Atlantic City, alleging in a federal court complaint that the city had violated federal tax law by refusing to honor its leases and to pay the $25 million levy for the Dolphins lease. 15:55 < feep> plenty of free agents/contracts 15:55 < feep> Levy argued in court that the city had "knowingly, intentionally, and intentionally engaged in a scheme, plot, and a criminal enterprise" to defraud him. In addition to the NFL suit, Levy and Atlantic City both separately filed lawsuits in federal court in New Jersey against the NFL for breach of contract. Levy also filed a separate suit in Delaware County... 15:55 < feep> ...court in Atlantic City claiming that he and the teams were the exclusive beneficiaries of the NFL's 2007 Option Agreement, which contained a clause that prevented Levy from converting his shares in the Dolphins to equity. A Delaware judge dismissed the Delaware suit in October, and his Miami suit was... 15:55 < feep> ...later dismissed in U.S. District Court in Jacksonville.' https://www.rollcall.com/cw/article/his-lawyers-take-on-the-lawsuit-against-the-nfl-for-infuriating-and-retroactive-tax-evasion/ 'Atlantic City was a sham from the start...Why take all that hassle in New York?' 15:55 < feep> title: His lawyers take on the lawsuit against the NFL for infuriating and retroactive tax evasion | Roll Call 15:56 < feep> the state gets to set its own values! https://plus.google.com/103530621949492999968/posts/rvgCe2kyWbJ 15:56 < feep> title: When old stuff dies and is replaced by new, the news media miss the significance of the change. 15:57 < feep> klfwip: he's young. If you have an illegal affiliation, it's going to be difficult to dodge the consequences 15:57 < feep> Khoth, probably. Do you think it's going to look like a crime? 15:57 < feep> I'm going to assume it'll be pretty hard to prosecute him legally though. 
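[The cyclic-learning-rate exchange earlier in the log (heat up to 0.1 to escape a local optimum, then cool back down toward something like 0.015 to settle into the new one) can be sketched as a triangular cyclic schedule. All numbers below are illustrative assumptions, not the actual settings used for feep's fine-tuning run:]

```python
def cyclical_lr(step, base_lr=0.015, max_lr=0.1, cycle_len=1000):
    """Triangular cyclic learning rate, in the spirit of the
    "simulated annealing" description in the log: ramp from base_lr
    ("cold") up to max_lr ("hot") and back down over each cycle.

    base_lr, max_lr, and cycle_len are hypothetical values chosen
    for illustration only.
    """
    pos = step % cycle_len          # position within the current cycle
    half = cycle_len / 2
    frac = 1 - abs(pos - half) / half  # 0 at cycle ends, 1 at midpoint
    return base_lr + (max_lr - base_lr) * frac
```

[Replacing the manual "go to 0.1, then step back down" dance with a schedule like this is what gwern's "explicit cyclic LR" suggestion amounts to; libraries such as PyTorch ship a ready-made equivalent (`CyclicLR`).]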
15:57 < feep> they don't technically have to prove guilt 15:57 < feep> but the higher the skill level of the person, the harder it is 15:57 < feep> Wandering|Texas: do you like normies? 15:57 < feep> enjoy normies, but their arrival isn't necessarily desirable 15:57 < feep> gwern: Whoops. New thread though. 15:57 < feep> i think i just watched one episode 15:57 < feep> They are interesting people 15:57 < feep> I think this is a tv series 15:57 < feep> I've known people that felt happy and then saw their friends having less of a good time than them, and then had to do more to make friends, feel less bad 15:58 < feep> it is unnerving 15:58 < feep> there are more minorities and intersex people 15:58 < feep> jews, gays, asians, etc 15:58 < feep> we have a 5:1 ratio of gays to jews 15:58 < feep> i just said gay faggots are people 15:58 < feep> i am not going to write more garbage 15:58 » #lesswrong,#lw-gpt shawwn (~shawwn@ec2-3-217-141-203.compute-1.amazonaws.com) has quit (Quit: shawwn) 15:58 < feep> that's really not a good sign 15:58 < feep> lol 15:58 < feep> it really isn't 15:58 < feep> why? 15:58 < gwern> feep: btw are these tabs or spaces? if they're spaces, you're wasting a bit of the context window and it'd be good to collapse them 15:58 < feep> jews are responsible for a million% more space than gays 15:58 < feep> and i am saying gays are responsible for a million% less space than Jews 15:58 < feep> it's all nice and warm 15:58 < feep> lol 15:58 < feep> tabs 15:58 < gwern> ok 15:58 < feep> also note the basic arithmetic there! 15:59 < gwern> I'm sensitive to formatting issues because data cleaning makes a considerable difference to final gpt-2 quality 15:59 < feep> if jews are responsible for 1 million % more space than gays, obviously ...
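[gwern's tabs-vs-spaces question above is about token economy: a run of spaces costs several BPE tokens of the fixed context window, while a single tab costs roughly one. A minimal sketch of that cleaning step, assuming a simple regex pass is acceptable (this is a hypothetical illustration, not the actual preprocessing used for the channel logs):]

```python
import re

def collapse_runs(text):
    """Collapse runs of 2+ spaces into a single tab, so indentation
    costs one character (and roughly one BPE token) instead of many.
    Single spaces between words are left untouched.
    """
    return re.sub(r" {2,}", "\t", text)
```

[Since feep's dump already used tabs, this pass would be a no-op there, but it is the kind of cheap normalization gwern is referring to when he says data cleaning makes a considerable difference to final GPT-2 quality.]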
15:59 < feep> http://www.theverge.com/2014/8/7/5324365/the-film-that-came-to-be-named-hpmor-in-1994-was-never-officially-named-by-the-book-that-breathed-it-to-life 15:59 < feep> title: Why we're being sent Harry Potter fanfic: The Big Lebowski 15:59 < feep> woah 15:59 < feep> I wanna live in gpt-2 world 16:00 < feep> 'The product of an hour-long semi-inspection that included selecting samples, the product, along with a label describing its composition and weight (1.2% fat, 35% carbohydrate, and 2.8% protein) was presented to the same tea consumer on tw 16:00 < feep> o occasions: in the grocery store, and in a hotel tea house. The consumer responded with "yes, this is a healthy tea" on both occasions. The consumer then purchased the tea and consumed it for approximately 10 minutes, which is when death was repo 16:00 < feep> rted.' hah, that explains it all 16:00 < feep> gwern, random 5 minutes of testing found a perfect target of 100% B2C 16:00 < feep> 'observations of death attributed to tea consumption in Chinese households, as recorded in postal archives, often include little more than four or five years of consumption and arise from individuals who frequently consume the product and 16:00 < feep> die. Owing to the relatively low frequency of deaths attributed to tea consumption in historical Chinese homes and the short recorded interval of consumption, tea was often categorized as a "delicious death food".' http://www.jstor.org/stable/3067 16:00 < feep> 76 http://www.jstor.org/stable/306778 ... hm 16:00 < feep> title: Are tea and other caffeine-containing products as dangerous as cigarettes? An empirical analysis 16:00 < feep> title: Is tea a death food? An empirical analysis by Alex L. Gat (Alexander L. Gat, G.H.H.P.P, Schalen + Hoffmann, Chicago) 16:00 < feep> title: Tea, A highly heterogeneous product ... by Rhea Rothstein : The Caffeinated Mind (The mind of... 
| Experts · Online Publications 16:00 < feep> title: Are tea and other caffeine-containing products as dangerous as cigarettes? An empirical analysis by 16:00 < feep> oshit 16:00 < feep> gwern's world just collapsed 16:01 < feep> Honestly I wonder if there's anyone in here who hasn't played a bit of AD&D. 16:01 < feep> Ok, so... here's the thing 16:01 < feep> I was raised in D&D (my first and I think my second) 16:01 < feep> I was raised in bd2 16:01 < gwern> NO. THAT'S IMPOSSIBLE. IT CAN'T BE TRUE 16:01 < feep> My brain has always been catered for by books, television and movies 16:01 < feep> So now that I'm in the 90's and have been, I feel like I've become more (and presumably more independent of) books and TV 16:01 < feep> It's like they're just a source of information/views 16:01 < feep> Obormot, have you felt in your core essence that you were woken up in the night by the sound of d&d? 16:01 < feep> I've never felt like that, no 16:01 < feep> My brain is basically just kung fu. 16:02 < feep> (I'm still missing all the major themes of any of my favorite books) 16:02 < feep> I feel like I'm coming up to thirty-three years of age, and it's an impossible kind of youthful. 16:02 < feep> Yeah. And at that age, you think like a dog. 16:02 < feep> rofl 16:02 < gwern> I love that. 'My brain is basically just kung fu.' 16:02 < feep> feep: Well, I mean, I've seen dogs die of old age 16:02 < feep> obormot: has that made you want to become a cat? 16:02 < feep> I have been a cat 16:03 < gwern> of course he has. whatever's new to you, obormot has already been there done that and been disappointed 16:03 < feep> XD 16:03 < feep> c0rw1n: http://www.youtube.com/watch?v=74HlEIwt0nU 16:03 < feep> video: Magic 8 Ball hoax: pretty woman in monkey suit pretends to have magic 8 ball (length: 05:57, views: 129,459, rating: 4.9, posted: 2012-10-09) 16:03 < feep> that's ... 
not how you hoax 16:03 < feep> i had a situation like that that kept me guessing over it all until i found an oncoming 2d laser beam and saw it coming to slam it 16:03 < feep> so possible and fun 16:03 < gwern> feep: no, see, you're so busy catching her in the magic 8-ball deception that, like the gorilla/basketball, you forget to notice she's actually a pretty woman in a monkey suit and not a real monkey 16:03 < feep> very obscure trick i never wanted to be of use 16:03 < feep> never had an oncoming laser beam to work against, that had to have been quite scary 16:03 < feep> how does this hoax work?! 16:03 < feep> haha 16:04 < feep> On that note 16:04 < feep> There's a supercoaster at Macy's that is kind of like how every other roller coaster in the world is 16:04 < feep> Except 16:04 < feep> And I've never seen anything like it 16:04 < feep> It's insanely huge 16:04 < feep> obormot is amazed with rollercoasters 16:05 < feep> quanticle: I know, and I'm not surprised that the tech industry has developed in a way that concentrates quality control entirely into a handful of companies 16:05 < feep> I have a theory that this happens because the tech industry attracts the same type of people that the society as a whole does: wealthy people. At the peak of the boom, when everybody was yuppie-loathing about how they were fucked, everyone's wealth just sort of... trickled up to a small number of founders and millionaires. 16:05 < feep> also as a senior VP at a big tech company for a year, I can tell you that this isn't actually a problem 16:05 < feep> so what is this sigmoid curve of quality that we're talking about, that seems to be the difference? 16:05 < feep> Well, that's my theory, too. It's not as though you're not going to be able to make stuff good enough to please people who are loaded. 16:05 < feep> The problem with the tech industry, in my experience, is that not a lot of the people who end up there have a strong grasp of the fundamentals. 
16:05 < feep> that is 16:06 < feep> almost sensiblew 16:06 < feep> -w 16:06 < feep> Well, here's the thing: IBM is a global corporation, no less than Boeing or Chevron or Samsung or whatever. What companies do you think that are significantly bigger (even if not nearly as large as IBM)? They're in, say, Russia, Brazil, etc. They have more incentive than a relatively small company like IBM. But they also have more overhead, more bureaucracy, more resources, etc. So what do you get? 16:06 < feep> Well, I guess you get lousy quality. 16:06 » nshepperd2 (~nshepperd@gorgon.zlkj.in) has joined #lw-gpt 16:06 » nshepperd2 is "Nobody" on #lw-gpt @#lw-bitcoin #lesswrong 16:07 < feep> "c: So I'm playing Elite, with random things to interact with and assorted drones and ships and shit and spaceships" 16:07 < feep> YEP 16:07 < feep> That is what it is like 16:07 < cnomad> lol 16:07 < gwern> sounds correct. 16:08 < feep> Crayak's got some of the most absurd torture regimens in the Multiverse. 16:08 < feep> Dumbledore: "the secret of the tomb of the trolls" 16:08 < feep> That said, tortured or not, I don't think Oz has to resort to such extremes to get a particular result 16:08 < feep> or "thermonuclear-bomb grade" 16:08 < feep> He's too smart for that. 16:08 < feep> some of them are even grizzled veterans, who still refer to Voldemort as their "hero" 16:08 < feep> Yes 16:08 < feep> Most wizards aren't going to need to torture and kill trolls in this universe 16:08 < feep> what fanfic is this?! 16:08 < feep> Harry Potter and the Methods of Animorphs? 16:08 < feep> Yeah, I'm thinking more along the lines of Oz's war methods. 16:08 < feep> > the wizard tells him that the ring of power has a built in protection against the powers of darkness and that, in case of adversity (a big story is that they are hunted by, of all things, the freaks of zoo) you'll need to bring the crystal vial with you. 
he doesn't say it out loud, but there is a prophecy that warns of some kind of future threat to the world 16:09 < feep> the "Freaks of Zoo" ... that's a really mean way to call the animorphs team 16:09 < feep> and now for something completely different: 16:09 < feep> ...shitlord 16:09 < feep> cheers 16:10 < feep> http://en.wikipedia.org/wiki/Tylenol_reference#Proposition_finding :D 16:10 < feep> wikipedia: Tylenol is a commercial brand of acetaminophen tablets, manufactured by SmithKline Beecham under the brand name of Ultram tablets. The tablets are available in acetaminophen, ibuprofen, paracetamol, or lubromide hydrochloride, depending on the formulation. 16:10 < feep> the milk was so good I spent another half hour trying to keep from napping, just to be extra good 16:10 < feep> before I realized, this was literally the point where we'd held the end of the serialisation 16:10 < feep> just because the broadcast couldn't go on like this forever was sort of depressing 16:10 < feep> it was a really enjoyable serialisation 16:10 < feep> it was almost like watching a tv show that had so much content that it completely drained all the energy from your body 16:10 < feep> feep-gpt is really into that show 16:11 < feep> Gwerny: What's a bitcoinj 16:11 < feep> gwern: Why isn't there an analysis of that ^.^ 16:11 < feep> Oh that's a bitcoin index? 16:11 < feep> yes 16:11 < feep> Wow, such a convoluted index 16:11 < feep> Does it include other cryptocurrencies? 16:11 < feep> just the bitcoin one 16:11 < feep> Uh oh 16:11 < feep> Thought 16:11 < feep> What is this "bitcoin" thing 16:11 < feep> obormot is 16:11 < feep> very confused. 
16:12 < feep> probably very drunk 16:13 < feep> “Just because you are careful to follow proper protocols and procedures is not enough to ensure that the outcome is positive.” <-- I'm not saying she is poor because of her lifestyle choices, I'm saying that if she's capable of not doing something with her life that she is 100% certain is not good for her, then she is unable to expect good things to happen. 16:15 < feep> What we had was the problem in our own minds of what it means to say things, but we couldn't write sentences with them. 16:15 < feep> That's why, when people use the vagueness theory of narration, they emphasize the weirdness of the verb form instead of what's actually happening 16:15 < feep> I know that's not easy, but now I know it's true, and I'm more than a little annoyed that it's not obvious 16:15 < feep> So I think we need to make it so that people who have trouble hearing the voice of their brain can write out sentences in a way that's correct 16:16 » saturn2 (~visitant@unaffiliated/clone-of-saturn/x-2509460) has joined #lw-gpt 16:16 » saturn2 is "Through the waves of lies" on #lw-gpt @#lw-bitcoin @#lesswrong 16:17 < feep> saturn2: you missed my dump for the day, I'll put it on gist 16:17 < saturn2> thanks :) 16:18 < feep> https://gist.github.com/FeepingCreature/30740007a5be44bed60f567f0b12e668 16:18 < feep> the "delicious death food" is my favorite imo 16:19 < gwern> lies, all lies 16:19 < feep> "The consumer then purchased the tea and consumed it for approximately 10 minutes, which is when death was reported." 16:19 < feep> I don't know, seems quite clear to me 16:20 < gwern> obviously it wasn't tea. or maybe it was contaminated with vitamin E acetate 16:20 < feep> what I wanna know is 16:20 < feep> title: Is tea a death food? An empirical analysis by Alex L. Gat (Alexander L. Gat, G.H.H.P.P, Schalen + Hoffmann, Chicago) 16:20 < feep> who the fuck is "G.H.H.P.P"? 16:23 < saturn2> haha wow, this is amazing 16:24 < saturn2> ... 
and ominous 16:24 < feep> ikr 16:24 < feep> and it's still training 16:24 < gwern> feep: oh, that's not a name, that's a postnomial letters 16:25 < feep> o.o 16:25 < feep> meaning what ... 16:26 < gwern> feep: that paper has only one author, Gat, because it's by 'Alex L. Gat' rather than 'Gat and foo' or 'gat et al'; 'Schalen + Hoffmann, Chicago' is obviously the publisher + city, standard style in MLA citations; so G.H.H.P.P. must be a postnomial title, extremely common in the UK but less so in the USA: https://en.wikipedia.org/wiki/Post-nominal_letters#Order_of_post-nominals_in_the_UK 16:26 < gwern> it's not a real postnomial title AFAIK, but with that much of an abbreviation, it's the only thing it could be 16:27 < gwern> especially with victorian academics, they might have like 5 or 6 postnomials in formal contexts 16:27 < feep> oh okay, that makes sense 16:28 < gwern> I'm surprised to see gpt-2 emit it because I can't imagine there are many postnomials in #lw, but that must be coming from the internet pretraining 16:29 < feep> yeah it's probably reusing the link titles 16:29 < feep> to some extent 16:46 < nshepperd2> Grand High Holy Presidential Postman 17:38 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 17:38 » shawwn is "shawwn" on #lw-gpt #lesswrong 18:03 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 18:03 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 20:21 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 20:21 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 23:29 < quanticle> Lol, wow 23:30 < quanticle> GPT-quanticle is locked in 23:30 < quanticle> Man, this is what it must feel like to be replaceable by a very small shell script^W^W^W^W moderately complex neural network. 
--- Day changed Sat Nov 23 2019 01:32 < SDr> | gwern, random 5 minutes of testing found a perfect target of 100% B2C <- o.O screw you, doppelganger, I've been banging at this shit for ~4 months, and you got it solved in 5 mins? (...may I have the docs, please?) 02:15 < quanticle> This is what it'll feel like when a hard-takeoff GAI really happens. 02:16 < quanticle> "Screw you, AI, we've been trying to cure cancer for three hundred years, and you figured it out in six months? (Okay, but seriously, what do we have to do?)" 03:53 < feep> XD 05:00 » kuudes (~kuudes@unaffiliated/kuudes) has joined #lw-gpt 05:00 » kuudes is "realname" on #lw-gpt #lw-bitcoin #lesswrong 06:48 » feep_ (~feep@200116b826c64300241220f98e67c871.dip.versatel-1u1.de) has joined #lw-gpt 06:48 » feep_ is "Feep" on #lw-gpt #lesswrong 06:57 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 06:57 » shawwn is "shawwn" on #lw-gpt #lesswrong 06:58 < feep_> down at 2.29~ 06:58 » feep_ is now known as feep 09:03 » feep (~feep@200116b826c64300759ef728007a5e28.dip.versatel-1u1.de) has joined #lw-gpt 09:03 » feep is "Feep" on #lw-gpt #lesswrong 13:11 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 13:11 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 18:56 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 18:56 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 20:05 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 20:05 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt --- Day changed Sun Nov 24 2019 06:46 » feep_ (~feep@200116b826266a00a1d814931066cf45.dip.versatel-1u1.de) has joined #lw-gpt 06:46 » feep_ is "Feep" on #lw-gpt #lesswrong 09:03 » feep_ (~feep@200116b826266a00dc8e776bd0e1c289.dip.versatel-1u1.de) has joined #lw-gpt 09:03 » feep_ is "Feep" on #lw-gpt #lesswrong 09:31 » feep_ is now known as feep 09:49 » SDr5 is "SDr" on #lesswrong 09:49 » SDr5 
(~SDr@unaffiliated/sdr) has joined #lw-gpt 13:36 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 13:36 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 17:32 » SDr5 is now known as SDr 18:19 < feep> teaser 18:19 < feep> https://www.greaterwrong.com/posts/xzzsfYbdcYcLt2G1/the-best-rationalist-rationality-community-pro/comment/RULAXTfTcRJh6y15 <-- I like this post a lot 18:19 < feep> title: Keith Clark comments on The Best Rationalist, Rationality Community: Pro 18:19 < feep> Obormot\Sirius: good post 18:32 < gwern> since when does obormot like posts 18:34 < feep> ikr 18:37 < feep> https://gist.github.com/FeepingCreature/f33f1826b627803d5d5d91a60be80a5b just dumping the whole list of generated samples here first, then post best-ofs 18:38 < feep> ahaha 18:38 < feep> they're just like, discussing aesthetics and proof theory and stuff 18:38 < feep> which, by the way, is the opposite problem to what people who haven't learned how to do algebra yet do. 18:38 < feep> is it? 18:38 < feep> yes 18:38 < feep> if you don't learn how to write simple things in an elegant way, then your proof just isn't there. 18:38 < feep> my father has brain cancer 18:38 < feep> just pow, right in there 18:38 < saturn2> obormot likes posts sometimes 18:40 < feep> Regex__: My current goal is to increase the probability of a test actually succeeding. 18:40 < feep> It takes one bad test to confirm a suspicion. 18:40 < feep> Or two to have some correlation. 18:40 < feep> And then three to take that posited correlation seriously and indicate failure to achieve the design goals of the test. 18:40 < feep> And then four to proceed. 18:40 < feep> And that's how you do statistics! 18:42 < feep> realitygrill: did you know the code base for the human body is 190,000 computers? 18:42 < feep> yes? 
:p 18:42 < feep> and just like the arm, each brain has to choose from multiple colors to tell it to fire 18:42 < feep> or multiple positions 18:42 < feep> it's an insane system :) 18:42 < feep> http://www.vox.com/2014/6/21/6622165/bible-readers-like-homosexuality 18:42 < feep> title: Bible readers like homosexuality - Vox 18:42 < feep> I knew it! 18:43 < feep> i wonder how well a macbook would pair with an immaculate terribad blue monte green notebook 18:43 < gwern> at this point, statistically they probably do 18:43 < feep> I like the idea that in gpt2world, "terribad" is probably a trademark 18:44 < gwern> remember like 80% of the american population will claim to be christian and read the bible, yet homosexuality and gay marriage also enjoy a like 70% approval rating. so arithmetically... 18:47 < feep> 'I often find it is useful to think back to how things were in the late 1990s. The term PC Master Race was 18:47 < feep> popular in Silicon Valley, and as we all know, it is not a term the average person would know. That is not a matter of 18:47 < feep> necessity, however; there were some folks who would hold similar views, but on a much smaller scale. They didn’t pose as 18:47 < feep> these PC Master Race people, but, as it turns out, a few of them were willing to ... 18:47 < feep> ... discuss some of the topics surrounding the idea, such as that a few African American women got into 18:47 < feep> seminary. These folks were few and far between, however, and were generally regarded as outliers who didn’t belong to 18:47 < feep> the larger PC Master Race group, who went around talking about cultural Marxism. The real reality was that a huge 18:47 < feep> minority, an unheard of minority, supported that view in the business world and at least some of the education reform 18:47 < feep> movement. I can ... 18:47 < feep> ... recall talking with a former business partner of mine in a café in New York. 
He had recently returned from 18:47 < feep> visiting some of the largest and most prominent black businessmen in the country, including Ralph Nader. The 18:47 < feep> conversation ended on a negative note: “They’re no friends of mine and I can’t talk to them unless they stop to take a 18:47 < feep> shot of vodka.” The real truth is that they were successful in large part because they were willing to be friends with 18:47 < feep> those same folks, and even more influential, they were willing to be friends with people who were willing to speak about 18:47 < feep> the ... 18:47 < feep> ... 18:47 < feep> ... abstract concepts like “culture wars” in a way they were not. When the leader of the community gets out of 18:47 < feep> control, a white-haired hippie businessman with a beard wants to get well with him. White-haired hippie businessmen are 18:47 < feep> the most successful in the community. Some of them have been partners, some of them have been partners in venture 18:47 < feep> capital funds, some of them have been entrepreneurs, some have run major companies. They are involved in every aspect of 18:47 < feep> life, from churches to culinary ... 18:47 < feep> ... colleges, from government to music. Many are among the world’s wealthiest people.' 18:47 < feep> http://b.clarkcritic.com/2014/05/24/story-beware-the-anti-feminist-parasite-in-your-boots/ 18:47 < feep> title: Story Beware: The Anti-Feminist Parasite in Your Boots | Clark Critic 18:47 < feep> this is uh 18:47 < feep> is it learning gwernpastes or parodying them 18:48 < feep> That thing where people were insisting that Donald Trump's visual imagery was beautiful because it was fun to look at? That's actually a very old argument. 18:48 < feep> Which is, of course, another aspect of Donald Trump's visual imagery that people seem to love. 18:48 < feep> ... 
18:48 < feep> gwern has just realized he lives in recursive clown world 18:49 < feep> If you want a visual feast that strikes the proper psychological balance of eye-slithering gravity, you should study the art of Donald Trump's election night portrait 18:49 < feep> is gptworld actually nightvale!? 18:50 < feep> quanticle: I'm okay with that. the visual imagery is impressive. Trump's personality is not impressive, but the visual style is. 18:50 < feep> in a weird inversion of the hitler narrative, apparently in gptworld trump got elected on the strength of his artistic vision 18:51 < quanticle> No this is actually an accurate statement. 18:51 < quanticle> I've long maintained that the key to Trump getting elected is his mastery of the spectacle of politics. 18:51 < feep> https://www.animenewsnetwork.com/news/2018-06-03/homeless-woman-attacked-by-ant 18:51 < feep> Homeless woman attacked by ant in MO. - Anime News Network 18:51 < feep> quanticle: yeah okay but look, a homeless woman got attacked 18:51 < feep> by an ANT. 18:51 < quanticle> His speeches, his rallies are as much visual spectacles as any other form of entertainment. Trump brings the pro-wrestling aesthetic to political combat. 18:52 < feep> "Deep-learning rats could have saved 80% of all species on Earth, researchers warn" https://www.wired.com/story/deep-learning-rats-may-save-80-percent-of-all-species/ 18:52 < feep> namespace: What... are rats? 18:52 < feep> Obormot: They're fuckin' smart as fuck. 18:52 < quanticle> feep: What's more remarkable is that this news was somehow broken by the Anime News Network... 18:53 < gwern> feep: after being uplifted and plugged into TPU supercomputers, I should hope so 18:53 < feep> The House Budget Committee will be holding an official hearing on the Affordable Care Act today. The main question is: what is a so-called "mandate" 18:53 < feep> ... 
18:53 < feep> Yeah, the only "protectionism" you could reasonably call that is a total bill-padding along the lines of the FDA's Office of Antimicrobial Resistance. 18:53 < feep> what the fuck even IS a mandate, really 18:54 < feep> gwern: :D 18:54 < feep> quanticle: XD 18:54 < feep> That's why we need to replace Politifact and do a thorough and accurate assessment 18:54 < feep> That's why I keep fighting for the truth 18:54 < feep> Dude 18:55 < feep> gpt-obormot is so heroic 18:55 < feep> I'll tell you what they are, exactly 18:55 < feep> They are people! 18:55 < feep> Each of them is a person who is doing what I said, and thus, we must help them! 18:55 < feep> Thank you 18:56 < feep> Madplatypus, I'd seriously consider taking a bite out of you. 18:56 < feep> Wait 18:56 < feep> Did you say "I'd seriously consider taking a bite out of you" 18:56 < feep> hahahaha 18:57 < feep> and that was your gptworld for the weekend, back to training :D 18:57 < quanticle> Glimpses from a GPT-world 18:58 < feep> The House Budget Committee will be holding an official hearing on the Affordable Care Act today. The main question is: what is a so-called "mandate" 18:58 < feep> I think this is my favorite 18:58 < feep> this could come straight up from a comedy bit 18:58 < quanticle> feep: Does GPT come up with the feepbot summaries? 18:58 < feep> yes! 18:58 < feep> they're delightful 18:59 < quanticle> Wow, so in the case of the homeless-woman-attacked-by-ant story, not only did GPT come up with the URL, but it also knew that feepbot puts the words from the URL into its summary 19:00 < feep> yesss 19:00 < MarkOtaris> it knew how to do that nearly a year ago already... 19:00 < gwern> yes, the summaries are not hard. char-RNN can do that too 19:00 < MarkOtaris> with small model, and timestamps to boot :-( 19:01 < MarkOtaris> realitygrill: did you know the code base for the human body is 190,000 computers? 
19:01 < MarkOtaris> and just like the arm, each brain has to choose from multiple colors to tell it to fire 19:01 < saturn2> very amusing 19:01 < MarkOtaris> or multiple positions 19:01 < MarkOtaris> it's an insane system :) 19:02 < quanticle> Huh. I didn't remember it getting the summaries quite right in the previous version. 19:03 < quanticle> It knew that feepbot or pookie did summaries, but I recall the summaries being pretty random compared to the source URL 19:04 < MarkOtaris> I recall the summaries being sometimes pretty good already 19:04 < feep> depends how far it was trained 19:04 < MarkOtaris> and it easily knew to convert between article titles and urls 19:08 < feep> yeah it feels a bit more coherent than the previous version, and I plan to train it for another week still 19:08 < feep> (dropped the lr a little again) 19:10 < MarkOtaris> ah, in gpt-world ralph nader is one of the largest and most prominent black businessmen in the country 19:11 < feep> haha yes 19:11 < feep> that's the anti-feminist boot parasite dump 23:22 < namespace> Jeez. 23:22 < namespace> These dispatches from alternative universe NegU namespace are concerning. 23:23 < namespace> Considering I apparently genocided the rationalist community, and convinced Wired/et al that we need to be on the lookout for a resurgence. 23:23 < namespace> (Paging s0ph1a and quanticle for #rooc) 23:25 < quanticle> Done 23:58 < s0ph1a> i am le pinged 23:58 < s0ph1a> oh, so we're training the ufai writer again huh 23:59 < s0ph1a> has someone tested whether we can make it sentient by feeding it lines from #rational_hell, yet? 
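The feepbot-summary trick the channel is marveling at above — the model inventing a URL and then "knowing" that feepbot's title echoes the words in the URL — is mechanically simple, which is why even a small char-RNN could mimic it: most of the title is already sitting in the path slug. A minimal sketch of that slug-to-title recovery (a hypothetical helper for illustration, not feepbot's actual code, which fetches the real page title):

```python
import re
from urllib.parse import urlparse

def slug_title(url):
    """Guess a feepbot-style title from a URL's path slug.

    Illustrative only: the real feepbot fetches the page's <title>.
    This just shows how much of the title the slug alone carries,
    which is what lets a language model fake plausible summaries.
    """
    path = urlparse(url).path
    # keep non-empty path segments, dropping purely numeric IDs
    segments = [s for s in path.split("/") if s and not s.isdigit()]
    if not segments:
        return ""
    slug = segments[-1]
    # strip a trailing file extension, then split on hyphens/underscores
    slug = re.sub(r"\.\w+$", "", slug)
    words = re.split(r"[-_]+", slug)
    return " ".join(w.capitalize() for w in words if w)
```

Run against the ant story's URL from the log, this recovers "Homeless Woman Attacked By Ant" — essentially the summary the model produced.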
--- Day changed Mon Nov 25 2019 02:03 < nshepperd2> feep: i actually lol'd at the "deep learning rats" bit 02:04 < nshepperd2> They're fuckin smart as fuck 06:46 » feep_ (~feep@200116b826879300680e38036e1da8f6.dip.versatel-1u1.de) has joined #lw-gpt 06:46 » feep_ is "Feep" on #lw-gpt #lesswrong 09:03 » feep_ (~feep@200116b826879300fd63ccb00e780f82.dip.versatel-1u1.de) has joined #lw-gpt 09:03 » feep_ is "Feep" on #lw-gpt #lesswrong --- Log closed Mon Nov 25 12:03:51 2019 --- Log opened Mon Nov 25 12:04:07 2019 12:04 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 12:04 » Irssi: #lw-gpt: Total of 14 nicks (1 ops, 0 halfops, 0 voices, 13 normal) 12:04 » Irssi: Join to #lw-gpt was synced in 28 secs 13:16 < feep_> it's sort of bouncing around the 2.28 13:16 < feep_> with not much change 13:16 » feep_ is now known as feep 13:16 < feep> gonna give it three days at this rate 14:43 < saturn2> feep: could you please make it so we can talk to it? 14:50 < feep> not super easy, no, it takes like 30 seconds to make a line 14:50 < feep> I'd need to hook up a webserver and I'd need to stop training 14:51 < feep> or an irc script, and I'd have to stop generating whenever someone said something 14:51 < feep> or maybe hook up a command? 14:51 < feep> something for the weekend at any rate 15:54 < quanticle> You could pre-generate a bunch of lines and then drip-feed them 15:54 < quanticle> Like what nshepperd did that one time 15:54 < quanticle> Just give us a portal into GPT-world 16:25 < kuudes> that would not be talking 16:25 < kuudes> I think saturn2 implied something like where its buffer would get fed from the channel as well 16:26 < kuudes> to make it full chatbot, I guess one would need to have some sort of cutoff when it outputs - ie that it would output only when its prediction would raise above a level or something 16:27 < kuudes> hmm. would it be possible to train gpt-2 chatbot to also output events and not just chars? 
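kuudes's gating idea — the bot speaks only "when its prediction would raise above a level" — can be sketched independently of the IRC plumbing. A minimal sketch, assuming a hypothetical interface where the sampler also returns the per-token log-probabilities it assigned to its own output (GPT-2 exposes these via the softmax over its logits):

```python
def should_emit(token_logprobs, threshold=-2.5):
    """Confidence-gate a sampled chat line before sending it.

    token_logprobs: log-probabilities the model assigned to each token
    of the line it just sampled (hypothetical interface, not from the
    log). The line is emitted only when its mean per-token log-prob
    clears the threshold, i.e. the model found its own continuation
    predictable enough to be worth saying aloud.
    """
    if not token_logprobs:
        return False
    avg = sum(token_logprobs) / len(token_logprobs)
    return avg > threshold
```

The threshold value here is an arbitrary placeholder; in practice it would be tuned against the model's typical perplexity on channel chatter.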
16:27 < kuudes> I mean, for those helpline chatbots etc 16:30 < saturn2> yes, that's what i meant 23:37 » nshepperd is "nshepperd" on #lesswrong 23:37 » nshepperd (~nshepperd@pool-96-239-61-124.nycmny.fios.verizon.net) has joined #lw-gpt --- Day changed Tue Nov 26 2019 00:14 » quanticle (~quanticle@wikipedia/Quanticle) has joined #lw-gpt 00:14 » quanticle is "quanticle" on #lw-gpt #lw-bitcoin #lesswrong 01:28 » quanticle (~quanticle@wikipedia/Quanticle) has joined #lw-gpt 01:28 » quanticle is "quanticle" on #lw-gpt @#lw-bitcoin #lesswrong 04:13 » nshepperd (~nshepperd@185.212.170.229) has joined #lw-gpt 04:13 » nshepperd is "nshepperd" on #lw-gpt @#lw-bitcoin #lesswrong 04:26 < nshepperd2> could try writing a chatbot which would join the channel and relay messages to and from the mirrorverse 06:05 < kuudes> I don't see how that would be less effort than just writing an irc interface for the gpt-2 06:44 » feep_ (~feep@200116b826eb4a0084da010119920989.dip.versatel-1u1.de) has joined #lw-gpt 06:44 » feep_ is "Feep" on #lw-gpt #lesswrong 09:02 » feep_ (~feep@200116b826eb4a00bc40731c77b9d1ac.dip.versatel-1u1.de) has joined #lw-gpt 09:02 » feep_ is "Feep" on #lw-gpt #lesswrong 10:49 » feep_ is now known as feep 15:23 < nshepperd2> what do you mean 15:35 < kuudes> wait what 18:46 » shawwn is "shawwn" on #lesswrong 18:46 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 20:06 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 20:06 » gpt2 is "gpt2" on #lw-gpt 20:06 < nshepperd> aha 20:08 < quanticle> Player GPT2 has entered the game 20:11 < nshepperd> it's just an inert irc bot yet 20:12 < nshepperd> now need to actually hook it up to gpt-2 20:12 * MarkOtaris is impressed that that nick wasn't registered already 20:12 < nshepperd> me too 20:21 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 20:23 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 20:23 » gpt2 is "gpt2" on #lw-gpt 20:23 < nshepperd> lessee 20:24 » gpt2 
(~gpt2@185.212.170.229) has quit (Remote host closed the connection) 20:25 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 20:25 » gpt2 is "gpt2" on #lw-gpt 20:26 < nshepperd> maybe i should move to a separate testing channel until i get the basics working 20:27 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 20:36 < quanticle> Nah, it's fine. 20:36 < quanticle> It's not like there's a lot of actual conversation here anyway 21:04 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:04 » gpt2 is "gpt2" on #lw-gpt 21:05 < nshepperd> gpt2: wake up 21:05 < gpt2> let's use the index for all 21:06 < gpt2> I just checked my internal dns2-2 tool and it was still use-case-case 21:06 < gpt2> And when 21:06 < gpt2> Oh 21:06 < gpt2> That is amusing 21:07 < nshepperd> quanticle: before when i was testing this it was just constantly having Obormot say "-_-" 21:07 < gpt2> How are they using that... 21:07 < gpt2> That was a great book 21:07 < gpt2> I'm very curious as to whether they actually use that 21:07 < gpt2> Like, I'm feeling so 21:08 < gpt2> And then... 21:08 < nshepperd> https://www.spacex.com/ 21:08 < gpt2> This is the first time in the world, people, "space" and "space" and "space" are basically the same thing. 21:08 < nshepperd> :D 21:08 < gpt2> So, question is. Is it possible to be a 2D, 2D figure,2d character, and a 3D character (to my knowledge), to create a plausible scenario, in which the question of our conciousness is just the opposite of their hypotheticals 21:08 < gpt2> Is it possible to create a plausible scenario with only so much? 21:09 < gpt2> Or consider how they've rendered our world 21:09 < gpt2> That's what I said 21:09 < gpt2> So if we are making a scenario where a 2D result is any more real real space than it is? 
21:09 < gpt2> you have to keep it in mind to carry it up in a space that does not use the index 21:10 < nshepperd> this checkpoint is kinda rubbish tho 21:10 < gpt2> So the reason is that you only have to compute one plane so at a time, you have to have all the precision 21:10 < gpt2> And then moving 21:10 < gpt2> Or try to pass out a bunch of the horizontal, rather than the idea that this one has been moving far far enough far 21:11 < gpt2> I have absolutely no idea how to apply this scale to real space 21:11 < MarkOtaris> it's really not risking itself 21:11 < MarkOtaris> and repeating words 21:12 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:12 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:12 » gpt2 is "gpt2" on #lw-gpt 21:14 < nshepperd> feepbot: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:14 < gpt2> Chinese Government Uses Machine Learning Not for Terror-Related Intervention : MachineLearning (62 points and 56 comments so far on reddit) 21:14 < gpt2> [D] Chinese Government Uses Machine Learning Not for Terror-Related Intervention : MachineLearning (11 comments so far on reddit) 21:14 < gpt2> feepbot: heh, it looks like a dark saturday night 21:15 < nshepperd> this is a 345M checkpoint 21:15 < gpt2> magnap: right 21:15 < gpt2> *magnap: the startgame graphics demo got screwed 21:15 < gpt2> [D] The dark ocean\ : MachineLearning (567 points and 30 comments so far on reddit) 21:16 < gpt2> feep: The 100% dry-run? 21:16 < gpt2> What's the name of that? 21:16 < gpt2> quanticle: "5.0" 21:16 < gpt2> Oh, noes. 6.0 21:17 < gpt2> Oh. 21:17 < gpt2> Yeah. 21:17 < nshepperd> maybe it would be amusing to just make it implement a bizarro-world feepbot that hallucinates titles of whatever links are pasted in here 21:17 < gpt2> ahahaha 21:17 < gpt2> ... 
yes it does 21:18 < gpt2> they would have to get the servers to do the spammy callbacks tho 21:18 < gpt2> my point is that a shitton of effort would be expended in that to get the read time down to about 9 seconds 21:18 < gpt2> Exactly 21:18 < gpt2> 100% thread/linear memory mapped model running 21:19 < nshepperd> unlike what my evil twin here believes, that would be nice since it would be less spammy 21:19 < gpt2> Yeah 21:19 < gpt2> "You can't read code written in, let's say, Java anymore? Don't you think it’s going to hurt you if you change to, like, Perl? I get what you’re saying, but I worry it might get stale with a year or two of Git 101.1 training." 21:19 < gpt2> https://medium.com/@gowo/iden-getting-back-to-my-career-1b80a4e38cdd85d https://medium.com/@gowo/settle-3a93e68aebde4 21:20 < gpt2> I asked to use [Epoch] Git1011 during training but my training trainers decided that training wasn’t for me. They told me I’m learning Perl before I’d know how to use [Epoch] Git 101.1, which I gave up midway through and don’t think was a good idea. We should have seen you coming. Please read this manual instead. - Dagny Meyer, mentor at LessWrong 4x; Twitter: "Serbian IT student on Ember project: "I wrote Git-out before I wrote Git-v." (imag 21:20 < gpt2> you can also visualize the raw writes, it is very nice 21:20 < gpt2> if you have a ram/core you can even turn the sparsity up as much as you want 21:20 < gpt2> Yes. If you have a RAM/core you can do async writes at 100% utilization, but 100% depends on your environment. 21:21 < gpt2> and from what I know, git is no good as an API in such environments. 21:21 < gpt2> once you get the c++/python stack running, you can do really much 21:21 < gpt2> I don't know the number and probably don't care 21:22 < gpt2> anyway, ^ 21:22 < gpt2> while in operation in-progress, do not use coalesce 21:22 < gpt2> not with bayes! 21:22 < gpt2> nshepperd: yeah 21:23 < gpt2> that is why 21:23 < gpt2> dropout is pretty rare. 
21:23 < gpt2> it's a kludge. 21:23 < gpt2> as in, do not then have a deadlock situation 21:24 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:32 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:32 » gpt2 is "gpt2" on #lw-gpt 21:35 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:39 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:39 » gpt2 is "gpt2" on #lw-gpt 21:41 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:42 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:42 » gpt2 is "gpt2" on #lw-gpt 21:43 < nshepperd> gpt2: make me a title https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:45 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:45 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:45 » gpt2 is "gpt2" on #lw-gpt 21:45 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:46 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:47 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:47 » gpt2 is "gpt2" on #lw-gpt 21:47 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:49 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:49 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:49 » gpt2 is "gpt2" on #lw-gpt 21:49 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:49 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:50 » gpt2 is "gpt2" on #lw-gpt 21:50 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:50 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:50 < gpt2> [D] Chinese Government uses 
machine learning to select optimal grades from grades. : MachineLearning (5 votes and 0 comments so far on Reddit) 21:51 < nshepperd> gpt2: wake up 21:51 < gpt2> nshepperd: Yep. That's the joke 21:51 < gpt2> Which means what 21:52 < gpt2> <@saturn2> comparing forgeries? 21:52 < gpt2> lol 21:52 < gpt2> I still do the same thing in my head: "download my alsa monitor and /sleep" 21:52 < gpt2> Wow 21:52 < gpt2> Yep 21:52 < gpt2> And you can do that with a flash card 21:52 < gpt2> http://www.appeteo.com/ 21:52 < gpt2> Appeteo is a desktop analysis software for Windows, Linux, iOS and Android. They provide a free, updated version of the free Chrome Web Store with audio, video, graph, market data, writing, information and more. 21:52 < gpt2> I know! 21:52 < gpt2> You're not even awake now 21:52 < nshepperd> gpt2: go away 21:56 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:56 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:56 » gpt2 is "gpt2" on #lw-gpt 21:56 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:57 < gpt2> [D Chinese Government uses machine learning (not yet a thing) : MachineLearning (15,702 votes and 2,095 comments so far on Reddit) 21:57 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:57 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:57 » gpt2 is "gpt2" on #lw-gpt 21:57 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 21:58 » gpt2 is "gpt2" on #lw-gpt 21:58 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 21:58 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 21:58 < gpt2> Chinese government uses machine learning to screen out likely welfare recipients. 
: MachineLearning (17,879 votes and 114 comments so far on Reddit) 22:04 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:04 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 22:04 » gpt2 is "gpt2" on #lw-gpt 22:04 < nshepperd> gpt2: https://www.reddit.com/r/MachineLearning/comments/e1r0ou/d_chinese_government_uses_machine_learning_not/ 22:04 < gpt2> [D Chinese government uses machine learning not human experts for official rankings : MachineLearning (47 points and 2 comments so far on reddit) 22:05 < nshepperd> cool, that'll do for today 22:07 < nshepperd> responds to links with simulated feepbot 22:07 < nshepperd> gpt2: wake up 22:07 < gpt2> <@gwern> my skype macro is feeling a little sleepy 22:07 < nshepperd> hahaha 22:07 < gpt2> <@gwern> for whatever reason, I have no way to do histograms with it because it's using the vector g can't handle 22:07 < gpt2> <@gwern> this is no longer the best way to use gpu-textbooks 22:08 < gpt2> gwern: quanticle left a note 1 days, 28 hours ago: I almost never close my tabs. EY killed that guy after he posted a history of douglas amkman's wikileaks comments? 22:08 < gpt2> Yeah, but now we have historical context. I couldn't open the feed in Chrome, so I opened gpt2 instead. 22:08 < gpt2> gpt2: that's a nice idea 22:08 < gpt2> It's pretty neat 22:09 < gpt2> I thought it'd end up looking almost as bad as (at best) "the following two lines of text" 22:09 < gpt2> Man, this post is more comments than it should be :( 22:09 < gpt2> <@gwern> why? 
it's not like I can just fast forward through just about any comment 22:09 < gpt2> They give bad URLs 22:10 < gpt2> <@gwern> it's gotta have a site like 1M+ 22:10 < gpt2> s/too/unclear/ 22:10 < gpt2> gwern: http://www.huffingtonpost.com/entry/nadakim-the-investor-that-fled-revolution-where-he-became-a-rich-altruist-with-a-child-sex-crack-of-the-nadakim-family-4f1ff7f7e6a3a?nc_term=.1 22:10 < gpt2> 'The Investor That Fled Revolution, Where He Became A Rich Altruist With A Child Sex Crack of The Nadim Family' - The Huffington Post (Something is rotten in America.) 22:11 < gpt2> quanticle: Well, I'm on Windows 8.33 is what it looks like, aren't I?? 22:11 < gpt2> quanticle: Check that shit, check that shit out 22:11 < gpt2> Obormot\Arcturus: I just sorta stare at it in my tabs and go "Shit, I'm missing that link" 22:11 < gpt2> <@gwern> a shortrein on the gpt2 demo. it's either some altcoin, or memetically infected by lizard people 22:12 < gpt2> quanticle: Weird 22:12 < gpt2> I guess, on that, Memetic Infection is ok too 22:12 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 22:12 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:12 < gpt2> <@gwern> nshepperd: man, I'm lolling so hard at it 22:12 » gpt2 is "gpt2" on #lw-gpt 22:12 < gpt2> <@gwern> I'm going to have to rewrite every single line now 22:13 < gpt2> gwern: Really? I hadn't considered that. Not only did you pull down like a thousand lines from your ghc build, you've set fire to this small amount of recentline size 22:13 < gpt2> <@gwern> Obormot\Arcturus: no, but now I can just abandon everything and start again 22:13 < gpt2> :| 22:13 < gpt2> <@gwern> Obormot\Arcturus: what a fake has 22:14 < gpt2> gwern: https://www.dropbox.com/s/2dy5z3d2rjhmw9x5f/2012-young.png?dl=0 22:14 < gpt2> Dropbox - 2012-young.png (Shared with Dropbox) 22:14 < gpt2> <@gwern> Obormot\Arcturus: not even here! it's actually all hidden except for that hack, which never occurred to me. 
this just shows my saved lines from the past 3 days 22:14 < gpt2> I was scrolling through 22:14 < nshepperd> o.O it reconnected to switch from ipv6 to ipv4 22:14 < nshepperd> what is with this irc library 22:14 < gpt2> <@gwern> (I wish more shitty IRC lines were shown in vim too... I don't think vim's nice some of them, even in vim) 22:15 < gpt2> gwern: I am so happy that you discovered this 22:15 < gpt2> <@gwern> Obormot\Arcturus: I should have 22:15 < gpt2> gwern: I just found it #lw-promo with nobody there, now I have a chance to actually talk to you in real life 22:15 < gpt2> gwern: It's amazing 22:16 < gpt2> Obormot\Arcturus: lol 22:16 < gpt2> <@gwern> Obormot\Arcturus: but how are you going to talk to gwern when your own housemate is doing all of this 22:16 < gpt2> quanticle: I don't know... sometimes I use commandline vim 22:17 < gpt2> >emacs with emacs rails 22:17 < gpt2> quanticle: It is fun but sometimes I can't get a single panel of a few keystrokes in 22:17 < gpt2> gwern: This is how you learn to skype! 22:17 < gpt2> <@gwern> Obormot\Arcturus: emacs is so dreadful 22:18 < gpt2> Incomplete, monospaced, and useless. 22:18 < gpt2> quanticle: Emacs is better 22:18 < gpt2> <@gwern> https://twitter.com/olivia/status/1098920819417083317888 22:18 < gpt2> Julia on Twitter: "Her argument from seduction! :o�” (image https://pbs.twimg.com/profile_images/95541976/2j1zeu9p6e8cd25/o_400x400.jpg) 22:18 < gpt2> Obormot\Arcturus: I think that's not as true as it sounds. 22:19 < gpt2> I mean, I guess if you're on a slow connection, you could say that Emacs is worse. 22:19 < gpt2> quanticle: No, but a fast connection can be achieved with server-side JS or whatever 22:19 < gpt2> quanticle: Even on slow connections 22:19 < gpt2> Yeah, but a fast connection lets you do a slow one. 
22:19 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:20 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:20 » gpt2 is "gpt2" on #lw-gpt 22:20 < gpt2> <@gwern> Obormot\Arcturus: http://i.imgur.com/i3Kq0.jpg pretty sure emacs is better with ssh. emacs->ssh is fun 22:20 < nshepperd> ah, it's timing out. gotta stick a ping thread in there 22:22 < gpt2> Anyway, that's why I started asking whether my fix was enough. I wasn't being serious about anything; what's the minimum viable hostname I should set up for people to get rid of files? 22:22 < gpt2> quanticle: emacs comes with :p 22:22 < gpt2> Yeah, that makes sense. 22:22 < gpt2> <@gwern> quanticle: ssh includes :p unless you want to call into malwany, which kills your session 22:23 < gpt2> <@gwern> http://yudkowsky.tumblr.com/post/116602490201/12-how-to-do-english-gay-sex well, nobody beats the slur upon return 22:23 < gpt2> Terms of Service Violation : Harry Potter and the Methods of Rationality 22:23 < gpt2> (By the way, the answer to that question is "could .onion", so you can choose the method that works) 22:23 < gpt2> Obormot\Arcturus: No, the problem is the emacs :p 22:23 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:24 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:24 » gpt2 is "gpt2" on #lw-gpt 22:27 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:27 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 22:27 » gpt2 is "gpt2" on #lw-gpt 22:29 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 22:29 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:29 » gpt2 is "gpt2" on #lw-gpt 22:38 < nshepperd> gpt2: https://en.wikipedia.org/wiki/Rubric 22:38 < gpt2> A rubric (also called the American ideal of utility theory or second-order utilitarianism, or the theory that data or reasoning are directed to the self and of one's own actions)[2] is a theoretical ideal[1][3] of the social 
order and social norms of an individual person, group or culture that holds moral value in accordance with human ideals. 22:38 < nshepperd> heheh 22:54 < saturn2> gpt2: hello? 22:54 < saturn2> gpt2: wake up 22:55 < gpt2> <@gwern> 'Wow, who did your side research claim to have been a pro-Sethrone researcher' <-- "No. I'm just a casual fan of the product." BANSHU 22:55 < gpt2> <@gwern> 'Some ethical rules concerning scientific ethics are actually reversionary. For example, the fact that scientists may give unethical results to payers is not necessarily ethical. ' <-- no kidding. I guess that's just counting on people to fall for the 'spreading praise of science is a significant part of their value system' argument 22:55 < gpt2> <@saturn2> "Well, it makes more sense for it to be a good company when the focus is on building a company, rather than building them a great thing. However, there is a higher standard of ethical health in creating a great thing; a great company produces nothing of value, but instead makes a demonstrable difference to humanity" 22:55 < saturn2> Obormot\Arcturus: what's your favorite flavor of russian candy? 22:55 < gpt2> <@gwern> we need more russians 22:56 < gpt2> wow, gpt2 is still working 22:56 < saturn2> what would we do with more russians? 22:56 < quanticle> >I couldn't open the feed in Chrome, so I opened gpt2 instead. 22:56 < gpt2> gwern: Heh 22:56 < quanticle> I lol 22:56 < quanticle> I lol'd 22:56 < gpt2> <@gwern> saturn2: I think we have to breed russians now, or we're all in trouble. the only russian things we seem to really like are matzoeurs 22:56 < saturn2> i agree completely gwern 22:56 < gpt2> gwern: Next I'll try the drink recipe 22:57 < saturn2> Obormot\Arcturus: what's your favorite flavor of russian candy? 22:57 < gpt2> <@gwern> I have not been very russian. 
I'll keep it in mind if I ever need it 22:57 < saturn2> why doesn't he answer 22:57 < gpt2> <@gwern> a noble russian cheesecake, or, uh, ouzoa 22:57 < quanticle> We could all stand to be a bit more Russian 22:57 < saturn2> https://americanmind.org/essays/the-clear-pill-part-2-of-5-a-theory-of-pervasive-error/ 22:57 < gpt2> <@gwern> saturn2: I think it was pretty much always too much 22:57 < gpt2> The Clear Pill: Part 2 of 5: A Theory of Pervasive Error | American Mind (Cognitionists do not have any set theory or abstraction regarding the properties of the world. Yet, when we place them in the world, we get exactly what we expected. The full text of 5 different systems can be found at ...) 22:57 < saturn2> uh huh 22:57 < gpt2> Is that you talking about the hilarious fact that Russia still has people who will beat you in the marathon and win every race it has ever hosted? 22:58 < gpt2> Like... who? 22:58 < quanticle> GPT Russia... is Kenya? 22:58 < saturn2> Obormot\Arcturus: have you baked any good cakes or pies lately? 22:58 < gpt2> No 22:58 < saturn2> lol 22:58 < gpt2> gwern: One delicious Scottish Cream M&M cheesecake was the last time I had the recipe, although I could tell you why 22:58 < gpt2> <@gwern> saturn2: do you have any good ice cream? I can tell you why; the insulin not working and my freezer is filled with dead salmon, which is both bitter and delicious 22:59 < saturn2> aaaahhhh 22:59 < gpt2> So... no. 22:59 < gpt2> <@gwern> hm, how do you properly do liquid nitrogen gunk gelatiating with any size of liquid, an old photos? I want a larger amount than I can show on my computer and have to buy 5x2g liquid, not 8g 22:59 < quanticle> nshepperd: So, how does this work? 22:59 < gpt2> <@gwern> I've also had enough practice on size and gourmet chocolate to know I should fit my freeze, so the liquid wouldn't fester to a level that wasn't all of freezing 22:59 < gpt2> So it is 9.6 ounces, right? 23:00 < gpt2> <@gwern> indeed. 9.6 * 9.6 * 8.6 = $97 - easy! 
23:00 < saturn2> gwern: yes, i have some good salmon ice cream 23:00 < gpt2> Well, I can see that, but I've never had m&m cakes, so... 23:00 < gpt2> <@gwern> saturn2: I may or may not actually have some and an hour later, using some crunchy melt-in-water mix, soggling it, then folding it into a breakfast sampler on a whim 23:01 < saturn2> good lord 23:01 < gpt2> gwern: you have enough freezer space for a 2.4g meggie -_- 23:01 < nshepperd> wow 23:01 < gpt2> <@gwern> nshepperd: the freezer is full of… 7… 23:01 < quanticle> 7 huh? 23:01 < gpt2> <@gwern> you can't put your ice cream in it 23:01 < gpt2> <@gwern> it needs to make the rounds anyway 23:01 < quanticle> I can't believe gwern's 7 takes priority over nshepperd's ice cream 23:02 < saturn2> gwern: how does it taste and smell after soggling and folding? 23:02 < gpt2> <@gwern> quanticle: are you crazy 23:02 < gwern> 7 is the most sacred number 23:02 < gpt2> gwern: I don't know. Do you have room for an extra .28 of coffee? 23:02 < gpt2> <@gwern> saturn2: the smell of liquorice cookies comes first 23:02 < gpt2> <@gwern> saturn2: the smell can vary, so keep that in mind 23:03 < saturn2> sounds lovely 23:03 < nshepperd> quanticle: basically it maintains a memory of the latest chat history in the channel, and every 15 seconds or so, samples a new line using that as the prompt, and spits it out to the channel 23:03 < gpt2> gwern: huh, so the 12g should be roughly equivalent to ~$44 23:03 < gpt2> <@gwern> saturn2: actually, who knew gwern had so much ice cream? 'thousand chocolate chip cookies, melted and cooled down with milk in 12 hours, one a side of mine.' 23:03 < nshepperd> quanticle: the way it's implemented right now it sees gpt2-nshepperd and real nshepperd as the same person 23:03 < MarkOtaris> How did it get a unicode next line character? 23:03 < gpt2> <@gwern> nshepperd: sounds like fun 23:03 < MarkOtaris> The one after that 7 23:03 < gpt2> gwern: I don't know. 
I think the way I read nshepperd's blog was "70% more paraffin = $48" or something. 23:04 < quanticle> nshepperd: I'm not sure if that's a bug or a feature; let's see how it plays out 23:04 < gpt2> https://pay.reddit.com/r/gifsonline/comments/a3q9zh/please_eat_the_cynical_short_literal_gender_relationship_than/ ok, that's a round answer 23:04 < gpt2> Please Eat the Charismatic Short Literal Gender Relationship Than : gifsonline (5 votes and 2 comments so far on Reddit) 23:04 < saturn2> MarkOtaris: it still probably remembers things from its former life before re-training 23:04 < gpt2> The LW saying: 'Ugh. You have to make more as nice as possible.' 23:04 < MarkOtaris> also, it's learned to use the unicode ellipsis… that's great! 23:04 < quanticle> I feel like it treating gpt-quanticle and real quanticle the same is a nice safeguard that'll keep it going when there's no real activity in the channel 23:04 < gpt2> <@gwern> I think as soon as people get smart enough to be able to not be mentally cannibalized by IRC, it will take the best of them. they just need to stop. 23:04 < nshepperd> quanticle: you can tell it to 'go away' to shut it up, and 'come back' or 'wake up' to open the portal 23:04 < gpt2> >I just finished Watching Menlo Park Diet 23:05 < saturn2> gwern: couldn't agree more 23:05 < gpt2> seems like the 'meh' team 23:05 < gpt2> that means "meh" 23:05 < nshepperd> quanticle: also, i special cased gpt2!feepbot so that it always responds to links pasted here, even when it's shut up 23:05 < gpt2> "uplifted to shittons of dollars" 23:05 < gpt2> nshepperd: Sure. 23:06 < gpt2> nshepperd: I mean, what do you need a rich asshole? 23:06 < saturn2> what do you robots think of the latest release of mac os? 23:06 < gpt2> ... 23:06 < nshepperd> quanticle: currently it shuts up by itself after 30 minutes, to prevent it from spamming the channel when there's no one actually reading 23:06 < gpt2> yes yes yes 23:07 < gpt2> nshepperd: They do, actually. 
It doesn't like them or help them. 23:07 < gpt2> it hates you 23:07 < gpt2> It only sorta-appears to be (but only vaguely) interested in you. 23:07 < gpt2> Or in "elegant" conversations. 23:08 < gpt2> (Thus you have to feed it a newsletter) 23:08 < quanticle> I see gpt-nshepperd and gpt-quanticle don't read social situations the same way at all. 23:08 < gpt2> MacOS is kinda decent, although Apple stores recently claimed a new version would be good... or so I read 23:08 < gpt2> Yay 23:08 < gpt2> the bluelizard article has been gone for a week, but the post about the ARPANET conlang is still up 23:09 < gpt2> (illusions) 23:09 < saturn2> thanks robo-MarkOtaris 23:09 < gpt2> nshepperd: https://twitter.com/Ravolux/status/1092730700604494652045 23:09 < gpt2> Ravolux on Twitter: "For a day, tweets were all that it took to check the truthfulness of the IDSA. But the greatest value was in trusting people to not lie to you.” (image https://pbs.twimg.com/media/DaPy-i9RHXIAEaC9h.jpg:large) 23:09 < gpt2> hah, i remember that one 23:09 < saturn2> is there a form of circling meditation that causes people to become finnish? 23:09 < gpt2> nshepperd: I'm referring to the screenshot I posted. 23:10 < gpt2> saturn2: Yes. 23:10 < MarkOtaris> saturn2: probably 23:10 < gpt2> saturn2: there are some known effect, though I don't know the details 23:10 < saturn2> cool, how does it work? 23:10 < gpt2> elizer puts it in 4 levels: 1) "if you don't have any coffee at home, do so with me!"; 2) "I've got the best coffee in the world; I don't eat in the store or in restaurants, even if I'm there; it's great!"; 3) "If you don't have to drive, you're doing it wrong" 23:11 < gpt2> "Just drink. Don't get drunk, buy better coffee. And once you've drank, be sure to end your life." 23:11 < gpt2> "Sit in the corner, quiet, contemplative, enjoying the view of others. Relaxation." 
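[Editor's note: nshepperd's scattered descriptions above — keep a rolling memory of the channel, sample a new line from it every ~15 seconds, obey 'go away'/'come back'/'wake up', and auto-mute after 30 minutes — can be sketched as a loop like the following. The class, buffer size, and function names are hypothetical; only the 15-second cadence, the 30-minute timeout, and the command phrases come from the log.]

```python
import time
from collections import deque

HISTORY_LINES = 50         # hypothetical rolling-buffer size
SAMPLE_EVERY = 15          # "every 15 seconds or so"
AUTO_MUTE_AFTER = 30 * 60  # "shuts up by itself after 30 minutes"

class ChannelBot:
    """Sketch of the loop nshepperd describes: keep recent chat as the
    prompt, periodically sample a continuation, post it to the channel."""

    def __init__(self, sample_fn, send_fn):
        self.history = deque(maxlen=HISTORY_LINES)  # rolling chat memory
        self.sample_fn = sample_fn  # e.g. a GPT-2 checkpoint behind some API
        self.send_fn = send_fn      # posts one line to the channel
        self.awake_until = time.time() + AUTO_MUTE_AFTER

    def on_message(self, nick, text):
        # Every line, human or bot, enters the same buffer — which is why
        # gpt2-nshepperd and the real nshepperd look like the same person.
        self.history.append(f"<{nick}> {text}")
        lowered = text.lower()
        if "go away" in lowered:                     # mute on command
            self.awake_until = 0
        elif "come back" in lowered or "wake up" in lowered:
            self.awake_until = time.time() + AUTO_MUTE_AFTER

    def tick(self):
        """Called every SAMPLE_EVERY seconds by the main loop."""
        if time.time() >= self.awake_until:          # muted or timed out
            return None
        prompt = "\n".join(self.history)
        line = self.sample_fn(prompt)                # sample one new line
        self.history.append(line)                    # bot sees its own output
        self.send_fn(line)
        return line
```

This also explains quanticle's observation that the bot keeps itself going when the channel is quiet: its own samples re-enter the prompt buffer.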
23:11 < MarkOtaris> https://www.lesswrong.com/posts/mELQFMi9egPn5EAjK/my-attempt-to-explain-looking-insight-meditation-and 23:11 < gpt2> My attempt to explain looking insight meditation meditation (and the subsequent disagreement) to others - LessWrong 2 viewer; (4 votes and 7 comments so far on Reddit) 23:11 < gpt2> saturn2: I don't know, I have always had a difficult time with circular meditation. I get through it, and I have yet to find a method that actually works for me. 23:12 < quanticle> saturn2: It appears to involve coffee; which makes sense because Finns have the highest per-capita consumption of coffee in the world. 23:12 < gpt2> the higher-order objective is not wanting to drink coffee, but drinking 23:12 < saturn2> that does make sense 23:12 < gpt2> So you don't have to invite the presences? 23:12 < gpt2> I guess it'd be like the Peace Corps. 23:12 < saturn2> can coffee get you drunk? can it end your life? 23:12 < gpt2> you do have to invite the presences 23:13 < quanticle> What if the /win 9 23:13 < gpt2> that might be the opposite of the proper approach 23:13 < quanticle> Whoops 23:13 < gpt2> to drink coffee but not allow it to enter your bloodstream 23:13 < nshepperd> feep: if you get a good 1.5B checkpoint that you think might be better than this 345M, send it over and i can try to make the bot use it 23:13 < gpt2> then leave it in your stomach for the time you go for a walk and if you get a riot of celebrations afterwards - rather than waiting for them to build up 23:13 < MarkOtaris> Hm… So if not allowing it to enter your bloodstream is not the proper approach, what is? 23:14 < gpt2> nshepperd: ... 
so technically I can't go to a coffee shop 23:14 < gpt2> gotta deal with the lack of caffeine 23:14 < saturn2> i feel like feep's examples were noticeably more coherent 23:14 < gpt2> nshepperd: and if you can figure out why you get a good 1.5B checkpoint that you think might be better than this, send it over and i can try to make the bot use it 23:14 < MarkOtaris> saturn2: they definitely were 23:14 < gpt2> saturn2: So the problem is the comparison is to, "If you think alcohol isn't for you, drink less!" or "If you don't think it is for you, drink less, but better than this." 23:15 < gpt2> because an increase in the amount of alcohol in your body makes you feel drunk 23:15 < quanticle> Yes it does. 23:15 < MarkOtaris> and the early samples with the first release were good too 23:15 < gpt2> I thought I hadn't realized that until after this meeting 23:15 < gpt2> So wait, how did you go? 23:15 < gpt2> naturally, because the first release was a copy of something else 23:16 < MarkOtaris> technically CORRECT 23:16 < gpt2> there was another release, but it was terrible 23:16 < MarkOtaris> ... also correct 23:16 < gpt2> But it was worse than the original? 23:16 < gpt2> also correct 23:16 < MarkOtaris> no, THAT is not correct 23:16 < MarkOtaris> gpt2 was better obviously 23:17 < gpt2> Okay. So gpt2 was better than coke. 23:17 < gpt2> it did turn out to be not really as good as coke 23:17 < MarkOtaris> yes, except the original wasn't coke 23:17 < gpt2> I feel like I'm speaking from a position of personal experience here, but I can attest that I can't really imagine how either would have changed. 23:17 < gpt2> I don't remember well enough 23:18 < gpt2> the original was moderately decent. the second was great. all-around better than the first. 23:18 < gpt2> anyway the point is that if the coin is overwhelmingly good then there's no point to mining the coin. because there's already a large investment required. 
but if it's majority-bad it is potentially much more expensive 23:18 < gpt2> I... don't even know. I can't honestly think of any arguments that I could think of that would change my opinion on whether alcohol is good. 23:18 < gpt2> my first thought when reading this paper was that there would be a constant rate of accidents causing brain damage 23:19 < gpt2> In fact, as far as I know, it's the opposite. 23:19 < gpt2> it is possible to cause brain damage in one shot 23:19 < gpt2> that is, more than once 23:20 < gpt2> but you need at least some evidence that there is a risk 23:20 < gpt2> which is difficult to obtain, or hard to control 23:20 < saturn2> what are some other things that can cause brain damage? 23:20 < gpt2> (either way, the risks are very low) 23:20 < nshepperd> gpt2 vs pepsi, which is better 23:20 < gpt2> sorry, don't mean to mean to imply anything 23:21 < gpt2> "which is difficult to obtain, or hard to control" 23:21 < gpt2> MarkOtaris: I'm not saying I had zero brain damage when I took coke. But I still had some memory loss. I could mentally parse some words and how they were read, but nothing would make me think, "I remember that moment in time". 23:21 < gpt2> there's also the placebo effect 23:22 < gpt2> I mean, I just learned that I needed to go to the hospital for attention issues. 23:22 < gpt2> addition effects like caffeine, nicotine, etc, can really fuck with you 23:22 < quanticle> (I wonder if the massive alcohol abuse and cocaine has anything to do with it) 23:22 < gpt2> have you tried everything with stimulants? 23:22 < saturn2> what if the placebo effect can cause brain damage? 23:22 < gpt2> and I'm not saying it has nothing to do with it 23:22 < nshepperd> massive alcohol abuse and cocaine, and also the placebo effect 23:23 < gpt2> Yeah. It was one of those "oh shit, I'm not sober, get me a drink" things. 23:23 < gpt2> *here 23:23 < gpt2> When it comes to the placebo effect, I've had more expensive drinks than I have free drinks. 
23:23 < saturn2> quanticle is quite the druggie in this universe 23:23 < gpt2> some will say "that's not great, here's some more about the placebo effect" 23:24 < gpt2> alcohol is a medical medicine, so it's not a miracle drug; I'm saying that even if you wanted to be really conscientious about making a conscious effort, and to just not screw it up in your first shot, the counter with alcohol is astronomical 23:24 < quanticle> saturn2: It's definitely a look into an alternative future 23:24 < gpt2> And, unfortunately, your wine selection includes quite a few much-better-wine-products than your sirloin-selection does. 23:24 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:24 < saturn2> oh no 23:24 < quanticle> The portal has closed 23:25 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:25 » gpt2 is "gpt2" on #lw-gpt 23:25 < nshepperd> gpt2: come back 23:26 < nshepperd> https://www.spacex.com/ 23:26 < gpt2> Arianespace Falcon 9 first stage lands | SpaceX (The second stage of Arianespace’s Falcon 9 rocket landed at an undisclosed Baikal Site at a site in Kazakhstan, in a ceremony held Thursday at the Florida International - Aviation Complex at Dulles International Airport. The first stage was... 23:26 < gpt2> whyyyyyyyyy 23:26 < gpt2> what are the odds that this thing goes to spacex? 23:26 < gpt2> <@gwern> it's a weird launch, no? 23:27 < quanticle> So apparently in gpt-land, the Falcon 9 was developed by ArianeSpace? And it launched from Baikounour? 23:27 < saturn2> oh no, elon musk's head just exploded on the launchpad 23:27 < gpt2> Okay, you don't actually know who engineered this rocket, do you? 
23:27 < quanticle> No I do not, please inform me 23:27 < gpt2> blurry signs: two and three strikes 23:27 < gpt2> quanticle: gwern left a note 1 hours, 56 minutes ago: https://imgur.com/a/8wVym 23:27 < saturn2> why are you talking to yourself quanticle 23:28 < gpt2> and one hits another with a ball, it's like, "use your freakin' rocket powers to turn a rock" 23:28 < nshepperd> who built the Falcon 9? 23:28 < quanticle> saturn2: Is talking to your gpt-alter still talking to yourself? 23:28 < gpt2> <@gwern> you think it was the government? it sounds like it was a private company, and there was some controversy over fuel audits and stuff 23:28 < MarkOtaris> so, how do the ariane 5 and the soyuz-2 compare? 23:28 < gpt2> All of these are also open source 23:28 < gpt2> is there anything else to work on, besides rocketry? 23:28 < saturn2> i would never fly in a closed source rocket 23:29 < gpt2> Yeah, that was the dream. 23:29 < gpt2> the existence of third parties means that it's a possible gamble that a major rocket company developed the foom 23:29 < nshepperd> oh no 23:29 < gpt2> someone told me that every rocket company is secretly your side-kick, they are morons 23:29 < MarkOtaris> in gpt-world, all spacex rockets are open source hardware 23:29 < gpt2> it would be like 23:30 < gpt2> a car named the kiddie cat, with code, and an engine named after that car's designer 23:30 < gpt2> which at least you know how many other people know how to do that 23:30 < quanticle> Sounds like a Tesla to me. 23:30 < saturn2> quanticle: it probably won't answer if you ask yourself questions, unless you change your nick 23:30 < gpt2> and that it has no costs, and makes money for the company 23:30 < saturn2> that would be my guess anyway 23:30 < gpt2> so, you actually think you are going to be hurt if you blow it up? 23:31 < gpt2> because that's what makes spaceflight free 23:31 < gpt2> saturn2: I would never want to be near another human body after a typical launch. 
23:31 < quanticle> Wow, free spaceflight and open source rocketry? 23:31 < gpt2> at this point if you look at the charts, you can see that a single launch costs someone from $1000,000 to $2.5 million 23:31 < gpt2> and even then, no one knows who did it 23:31 < MarkOtaris> How many people have preordered the 'kiddie cat' so far? 23:32 < MarkOtaris> Does the website have any information on that? 23:32 < saturn2> MarkOtaris: when was the first free software rocket launch? 23:32 < gpt2> :) 23:32 < gpt2> I don't know. I saw a lot of wacky things from community people like Yaroslav Alchev 23:32 < saturn2> who? 23:32 < gpt2> really? not that many people know the names of the people who built them 23:32 < gpt2> And right before launch, actually. 23:33 < quanticle> Who is Satoshi Nakamoto? 23:33 < gpt2> there was a joint Nasa/GEO hardware workshop in the early 90s 23:33 < gpt2> or something like that 23:33 < gpt2> "The first unclassified product was in March 2000, when NASA announced the LPS satellite, but since then, they have run into unclassified delays, and technical glitches, so NASA abandoned the project after only two flights. During the final test in July 2000, the most dangerous part of LPS was breaking and putting the satellite back." 23:33 < saturn2> guys, the identity of satoshi nakamoto has been discovered 23:33 < gpt2> and the only ones that know this can't say anything 23:34 < saturn2> who is it? 23:34 < gpt2> peter grannagan 23:34 < saturn2> cool 23:34 < gpt2> (another quote from Peter Griffin) 23:34 < gpt2> Cool story. 23:34 < saturn2> peter griffin is satoshi? 23:34 < gpt2> Also, your nick suggests that you're a very insecure guy. 23:35 < quanticle> No, it's Peter Grannagan 23:35 < gpt2> Or anything 23:35 < MarkOtaris> The Free Software Foundation today awarded Respects Your Freedom (RYF) certification to the R-36 Tsyklon rocket from Yuzhny Machine-Building Plant. 
23:35 < gpt2> I think this guy was the author of some things 23:35 < gpt2> that name reminds me of reading a book by Dilbert creator Scott Adams about how into everything, both tangible and imaginary, is little understood 23:36 < gpt2> Aha. Which Dilbert book was that? 23:36 < saturn2> who is peter griffin? did he help develop bitcoin and/or rockets 23:36 < gpt2> https://en.wikipedia.org/wiki/Scott_Adams 23:36 < gpt2> Scott Adams (born February 6, 1955) is an American author best known for his book and best-selling mathematical treatise, the Automated Equilibria. 23:36 < gpt2> Oh, that Dilbert book, yeah, I read Dilbert book. 23:36 < gpt2> william mcgill? 23:36 < quanticle> In GPT-world... Dilbert is a math textbook, looks like. 23:37 < gpt2> sorry 23:37 < saturn2> is that a cartoon version of eliezer yudkowsky? 23:37 < gpt2> as in, a comic book artist? 23:37 < gpt2> Oh... or a comic book artist? 23:37 < gpt2> yes 23:38 < gpt2> https://en.wikipedia.org/wiki/Scott_Adams 23:38 < gpt2> Scott Adams (born July 18, 1962) is an American comedian, writer, and television personality best known for his short-lived comedy duo the Pants Brothers. He received a star on the Hollywood Walk of Fame in 2002. 23:38 < MarkOtaris> Was Peter Grannagan a developer for PGP Corporation? 23:38 < gpt2> Ah. Yes. 23:38 < gpt2> Pardon me if that confused you. 23:38 < saturn2> heh, the pants brothers 23:38 < gpt2> Also, I'm actually still not sure how Scott Adams learned to type. 23:39 < quanticle> How did Scott Adams learn to type? 23:39 < gpt2> hehehe 23:39 < gpt2> How did he learn to type with the hands of his usual hands? 
23:39 < nshepperd> i'm gonna change to a different checkpoint, looks like i've got one better loss 23:39 < gpt2> after successfully restarting my machine 23:39 < saturn2> lol 23:40 < quanticle> Yeah, it would be cool to get one of feep's checkpoints in here 23:45 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:45 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:45 » gpt2 is "gpt2" on #lw-gpt 23:45 < nshepperd> gpt2: come back 23:46 < nshepperd> gpt2: here, have a prompt https://en.wikipedia.org/wiki/Beelzebub 23:46 < gpt2> Beelzebub (sometimes write 'Goelzebub'): A, German: Formal (German: Beelzelger der Freunde), or written set (German: Beelspuren der Freunde) is a set of three texts, one of which is Goelzebub, and the other two, Deseiodiscus: Poems and Idioms (Nordic version of the works) and Genesis (not written by him), which were published in the second half of 1922.[1] 23:47 < gpt2> 2 page help please 23:47 < gpt2> gpt2: I thought it was the correct one 23:47 < quanticle> gpt2: How did Scott Adams learn to type? 23:47 < gpt2> however, since the file is not readable by your terminal, it is less likely to work on this host 23:47 < gpt2> The "more likely to work" part is worth a shot... when considering that I have to deal with my father's laptop in full text mode every day. 23:48 < gpt2> <@saturn2> why are there 3? 23:48 < gpt2> <@saturn2> gwern: what did they do with the early edition? 23:48 < gpt2> saturn2: The HTML generated, as written, looks pretty shit 23:48 < gpt2> Even though it's html 23:48 < nshepperd> hm 23:49 < gpt2> The source text is pretty shit 23:49 < nshepperd> Obormot\Arcturus: how did Scott Adams learn to type? 23:49 < gpt2> He went through the MacReader beta, and the page setup was terrible. He spent hours using terminal.app, not typing, and for no good reason, so he managed to learn to type by being a hardcore UNIX user. 
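[Editor's note: nshepperd's checkpoint swap above ("looks like i've got one with better loss") amounts to picking, among saved training checkpoints, the one with the lowest held-out loss. A minimal sketch, with entirely hypothetical paths and loss values:]

```python
# Hypothetical (path, validation-loss) records from training logs.
checkpoints = [
    ("model-345M/ckpt-120000", 2.41),
    ("model-345M/ckpt-140000", 2.37),
    ("model-345M/ckpt-160000", 2.39),
]

def best_checkpoint(records):
    """Return the path with the lowest held-out loss; lower loss generally
    means samples track the chat-log corpus more closely."""
    return min(records, key=lambda rec: rec[1])[0]
```

Note that loss only tracks fit to the corpus: as the channel's comparison of feep's 1.5B samples with the 345M bot suggests, perceived coherence can differ between models even at similar loss.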
23:49 < saturn2> i refuse to accept that scott adams learned to type 23:49 < MarkOtaris> quanticle: the real answer seems to be email 23:49 < gpt2> he started using it 23:49 < nshepperd> oh, gpt2!quanticle comes through with the answer 23:49 < gpt2> <@saturn2> isn't the external disk giving you a warning? 23:50 < saturn2> you think scott adams learned to type? WRONG. here's why: 23:50 < gpt2> <@saturn2> he probably saw the screen:8080 and was going to go back in time 23:50 < gpt2> with typefaster resizing and autohaps 23:50 < gpt2> yup 23:50 < gpt2> saturn2: Well... no, because his hands became dull. 23:51 < gpt2> <@saturn2> ok 23:51 < gpt2> So you do have to figure out his method of finger-style typing. 23:51 < saturn2> scott adams does not have any fingers 23:51 < gpt2> whereas modern machines don't actually do that 23:51 < gpt2> no way 23:52 < saturn2> you think vladimir lenin is satoshi nakamoto? WRONG. here's why: 23:52 < gpt2> saturn2: You may have read his article on it. 23:52 < gpt2> nshepperd: Look, the issue is this - some OSes do this 23:52 < MarkOtaris> actually scott adams has a theory that vladimir lenin is satoshi nakamoto 23:52 < gpt2> I don't think I trust apple :/ 23:52 < gpt2> nshepperd: Now the question is, does that change your explanation for why we think he is satoshi nakamoto? Or just human chimp chimp? 23:53 < gpt2> <@saturn2> MarkOtaris: all it really proves is that while linux and osx do it, windows does not 23:53 < MarkOtaris> and that those who disagree about nakamoto have "loser habits trapping them in their own bubbles of reality" 23:53 < gpt2> And is "hazy" because it's not captured as directly as on TV? 23:53 < gpt2> <@saturn2> maybe that's true 23:54 < gpt2> only that's true 23:54 < gpt2> saturn2: He's using them as a way to buy into the monoculture of desktop users. 
23:54 < gpt2> Because they're the people they would be like if they were Macs 23:54 < gpt2> Or Office 365 23:54 < saturn2> if scott adams is satoshi nakamoto, then who is human chimp chimp? 23:55 < gpt2> <@saturn2> but that doesn't change anything 23:55 < gpt2> saturn2: Not to mention the fact that this has nothing to do with POSIX semantics. 23:55 * saturn2 slaps himself 23:55 < MarkOtaris> https://www.forbes.com/sites/michaelschein/2018/06/20/dilbert-creator-scott-adams-is-evil-and-why-you-should-follow-his-lead/ 23:55 < gpt2> Like, "POSIX semantics" were once invoked in every operation. 23:55 < gpt2> Scott Adams, creator of the Dilbert cartoon, admits his political views are 'evil', releases new podcast to defend himself - Forbes (Scott Adams, the creator of the Dilbert cartoon, has come out as a supporter of Donald Trump.) 23:55 < saturn2> lol 23:55 < gpt2> <@saturn2> sure it does, just that's not how it works 23:55 < MarkOtaris> wow, it knows a lot about scott adams 23:56 < gpt2> <@saturn2> right, "politically irredeemable" isn't necessarily what it says 23:56 < gpt2> The book only takes you from the very beginning to 16 of the longest books i know 23:56 < gpt2> saturn2: I thought, quite probably, that the meaning of the word "scott" was even more obscure than it is. 23:56 < gpt2> saturn2: Because Scott Adams is one of those strange authors. 23:57 < gpt2> <@saturn2> heh 23:57 < gpt2> <@saturn2> haha 23:57 < gpt2> Also, you're going to have to agree with my observations on Scott Adams: Scott Adams is a saint for the libertarians. 23:57 < gpt2> saturn2: they just invented pcie which has sort of 'common' and 'superior' terminology 23:57 < saturn2> https://www.lesswrong.com/posts/abcdefglmnop/why-i-believe-scott-adams-learned-to-type-while-creating-bitcoin 23:57 < gpt2> Why I Believe Scott Adams Learned to Type While Creating Bitcoin - LessWrong 2 viewer 23:58 < gpt2> heh 23:58 < MarkOtaris> Can we have a GW version of that link? 
23:58 < gpt2> <@saturn2> we might 23:58 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:58 < MarkOtaris> (was that checkpoint trained after oborbot / robomot?) 23:58 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 23:58 » gpt2 is "gpt2" on #lw-gpt 23:59 < nshepperd> it was trained on a bunch of logs, most of which pre-robomot 23:59 < nshepperd> gpt2: come back 23:59 < gpt2> nshepperd: ^ 23:59 < saturn2> https://www.greaterwrong.com/posts/abcdefglmnop/why-i-believe-scott-adams-learned-to-type-while-creating-bitcoin 23:59 < gpt2> Why I believe Scott Adams learned to type while creating Bitcoin - LessWrong 2.0 viewer 23:59 < gpt2> weird how it's possible to have 1.9Gb since it's risc, but not only is there some higher level of redundancy, the cache usage has decreased just by ~100% --- Day changed Wed Nov 27 2019 00:00 < nshepperd> https://www.greaterwrong.com/posts/abcdefglmnop/the-truth-about-scott-adams- 00:00 < gpt2> The Truth about Scott Adams (scott adam is a world famous researcher and academic who wrote 3 versions of the important resource [p-hacking] [artificial language] [neural networks] [the interface]) 00:00 < gpt2> ah, i guess it isn't a good sign 00:00 < gpt2> Well, the underlying hardware architecture isn't that complicated. I've been busy running this FPGA, powering the main board, and uploading my data from an external drive. 00:00 < MarkOtaris> https://www.readthesequences.com/Beware-of-Scott-Adams 00:00 < gpt2> Beware of Scott Adams 00:01 < saturn2> wow 00:01 < gpt2> I am reading that as "Beware of not trying to do the things he says." 00:01 < gpt2> You guys are right, but the software part, or rather, the hardware is the tricky bit. 00:01 < saturn2> scott adams is shockingly evil 00:01 < gpt2> Of course, he has a really depressing sense of humor. I don't see it in the sequence in the logs. 
00:01 < nshepperd> https://en.wikipedia.org/wiki/Scott_Adams_(Cult_Leader) 00:01 < gpt2> Scott Adams (born April 29, 1953) is a professor of Russian, Slavic, and Baltic history at Indiana University. He is one of the editors of the Russian-language academic journal From Russia with Love. 00:01 < saturn2> lmao 00:02 < nshepperd> hahaha 00:02 < gpt2> https://en.wikipedia.org/wiki/Scott_Adams_%28Teacher%29#Nazi_Corporation_Campaign 00:02 < gpt2> Scott Adams (born 4 March 1969) is an American German scientist, author, and political advocate. He was born in Germany and raised in Wisconsin. 00:02 < gpt2> "He studied in Kazakhstan and spent a year at St Petersburg State University while studying philosophy." - The Niketar, a Russian news agency, in early 2018 00:02 < gpt2> And now they're treating him as a terrorist. 00:02 < gpt2> <@saturn2> so long as the federal government doesn't start acting like a communist dictator 00:02 < nshepperd> wow, scott adams has it rough 00:03 < saturn2> is he also an effective altruist? 00:03 < gpt2> No 00:03 < nshepperd> phew 00:03 < gpt2> I agree with the choice of scott adams 00:03 < MarkOtaris> https://www.economist.com/books-and-arts/2019/11/21/the-scott-adam-hoax 00:03 < MarkOtaris> whoops, second try 00:03 < MarkOtaris> https://www.economist.com/books-and-arts/2019/11/21/the-scott-adams-hoax 00:03 < gpt2> The Scott Adams hoax (The spelling and pronunciation of his name have been hoisted on the flag of an unsuccessful state-sponsored propaganda campaign. Both sides claim the spelling and pronunciation were correct when the controversial [snip]) 00:03 < saturn2> wat 00:03 < gpt2> Well, he's probably killed too many people and using up too much estate 00:04 < nshepperd> "The spelling and pronunciation of his name have been hoisted on the flag of an unsuccessful state-sponsored propaganda campaign. Both sides claim the spelling and... 
00:04 < gpt2> ...pronunciation were correct when the controversial [snip]) 00:04 < MarkOtaris> seems like it figured out my misspelling and made up an article about how to spell the name 00:04 < nshepperd> wow good work gpt2 00:04 < gpt2> <@gwern> dang, when I run out of wavs, all I'm doing is running the fpga and gunka 00:04 < gpt2> <@gwern> yay! a better note handling design than the plug-and-play png 00:04 < MarkOtaris> https://english.stackexchange.com/questions/3473791234/what-is-the-correct-way-to-spell-scott-adams-name 00:05 < saturn2> probably only gwern can make it come up with good multiline pastes 00:05 < gpt2> Is there a more convenient format? 00:05 < gpt2> wait, is it pronounced 'scott' 00:05 < gpt2> I know, right 00:05 < nshepperd> ooh, i should implement a feature to make it produce gwernpastes on demand 00:05 < gpt2> <@gwern> sounds like you should probably implement such features yourself 00:06 < nshepperd> ouch 00:06 < gpt2> <@gwern> maybe I should hand off my FPGA work to a third party then 00:06 < gpt2> I agree with the description. Scott Adams is a PR hero. He stands out as a caricature of the American culture. 00:06 < saturn2> > "The spelling and pronunciation of his name have been hoisted on the flag of an unsuccessful state-sponsored propaganda campaign. Both sides claim the spelling and ... 00:06 < gpt2> ... pronunciation were correct when the controversial [snip]) 00:06 < saturn2> what 00:07 < MarkOtaris> https://www.sec.gov/press-release/scott-adams-charged-with-securities-fraud-for-misleading-blog-posts 00:07 < gpt2> Scott Adams charged with securities fraud for misleading blog posts | SEC.gov (Scott Adams has been charged with securities fraud for misleading his followers on how he was an effective altruist, multiple New York attorney and a musician. 
) 00:07 < gpt2> lol, 2% of the website died on me :( 00:07 < gpt2> https://en.wikipedia.org/wiki/Scott_Adams_fraud_case 00:07 < gpt2> Scott Adams is accused of acting as a consrpetitor of a nonprofit foundation whose purpose is to represent convicted criminal Steven Chase Adams, who is accused of conspiracy to commit bank fraud, dealing and wire fraud, and embezzlement of charity and mischievous gambling. 00:07 < saturn2> he could've gotten away with it if he had just been a single new york attorney 00:07 < gpt2> https://en.wikipedia.org/wiki/Scott_A_Arosh 00:07 < quanticle> MarkOtaris: So he wasn't an effective altruist, but pretended to be. 00:08 < MarkOtaris> wow 00:08 < gpt2> what is this guy going to go on about now? 00:08 < gpt2> Lol. 00:08 < quanticle> Lol. 00:08 < gpt2> Lol. 00:09 < gpt2> Seriously, what a better choice would be an effective altruist? 00:09 < saturn2> https://en.wikipedia.org/wiki/Scott_Adams_(piece_of_shit) 00:09 < gpt2> Scott Adams is the individual who put on a fad for selling "Miles, sedans and brooches" printed around the doorbell of Scrabbler Market at Newbury Park, London in 2008. 00:09 < gpt2> quanticle: one sec I think I have a question about the funding structure 00:09 < saturn2> https://en.wikipedia.org/wiki/Scott_Adams_(robot) 00:09 < gpt2> I do not know. 00:09 < gpt2> Scott Adams (born 28 January 1954)[1][2] is an American entrepreneur, one-time writer, and founder of the Mark Twain Prize. 00:10 < gpt2> <3 00:10 < quanticle> saturn2: gpt2 no longer wishes to besmirch the name of Scott Adams 00:10 < gpt2> 'PortFruit is supported by E.Y.' 00:10 < saturn2> https://en.wikipedia.org/wiki/Scott_Adams_(small_cute_puppy) 00:10 < gpt2> Scott Adams is the inventor of the term "port" for small-scale gardening equipment. A new word came out of Adams's a few months ago. It's different from other portables for your smaller garden. 
00:10 < gpt2> I heard he told people to "let the birds decide it" and so on 00:10 < gpt2> "PortFruit, ultimately a descendant of the one-armed guy in Starfish", "or otherwise, it's an one-armed person" 00:10 < quanticle> I should get some "ports" 00:11 < saturn2> https://en.wikipedia.org/wiki/Scott_Adams_(leader_of_France) 00:11 < gpt2> Scott Adams is the inventor of the term "port" for small-scale gardening equipment. A new word came out of Adams' a few months ago. It's different from other portables for your smaller garden. 00:11 < quanticle> It repeated itself... 00:11 < saturn2> what?? 00:11 < gpt2> it's not port, it's called "portfruit" 00:11 < gpt2> And then, it repeated itself until we had an argument about semantics. 00:11 < gpt2> After being banned for a while, it was banned again for a while, and then people started posting it again. 00:12 < gpt2> I'm going to suggest that we ban their portfruits 00:12 < saturn2> https://www.sec.gov/press-release/scott-adams-charged-with-securities-fraud-for-becoming-king-of-finland 00:12 < gpt2> Scott Adams charged with securities fraud for becoming king of Finland | SEC News (U.S. Attorney General Robert A. J. Jealous today charged Scott Adams, founder of the Scott Adams Port Fruits store chain in New Hampshire, New Jersey, with securities fraud for becoming the successor to the King of Finland, King J. P. XXL.) 00:12 < saturn2> lmao 00:12 < gpt2> I mean, it wasn't just "port" and "port fruit", it was like... portobas, and it was called portfruits 00:12 < gpt2> This reminds me 00:13 < gpt2> https://en.wikipedia.org/wiki/Port_fruits#Description 00:13 < gpt2> Port fruits is a popular and popular food and a great excuse to order little details and participate in mass prep prep. 00:13 < saturn2> https://www.sec.gov/press-release/scott-adams-charged-with-securities-fraud-for-stepping-on-banana-peel 00:13 < gpt2> Scott Adams Charged With Securities Fraud for Stepping on Banana Peel | SEC News (U.S. Attorney General Robert J. 
Jealous today charged Scott Adams, founder of the Scott Adams Port Fruits store chain in New Hampshire, New Jersey, with securities fraud for becoming the successor to the King of Finland, King J. P. XXL.) 00:13 < saturn2> hey... 00:13 < gpt2> It's in Belgium. 00:13 < quanticle> What is Belgium? 00:14 < gpt2> quanticle: "Scott Adams is charged with securities fraud for becoming the successor to the King of Finland" 00:14 < gpt2> Oh 00:14 < gpt2> Oh good. 00:14 < gpt2> You didn't really have to explain yourself too much, didn't you? 00:15 < gpt2> "And in the late 1990s the net worth of Americans is in the billions and now the economy is in a worst-case scenario" 00:15 < gpt2> Oh 00:15 < gpt2> Hmmm... yeah. 00:15 < gpt2> https://www.scottaaronson.com/blog/?p=348 00:15 < gpt2> The Cowen Rule 00:15 < MarkOtaris> So, since Scott Adams became the king of Finland, the Finns invaded Belgium? 00:16 < gpt2> Maybe 00:16 < gpt2> Maybe he's not going to get caught 00:16 < gpt2> Hmm. That's the story anyway 00:16 < saturn2> https://www.sec.gov/press-release/scott-adams-poured-hot-fudge-over-us-economy 00:16 < gpt2> Scott Adams poured hot Fudge over US economy | SEC News (Sculptor Mark Adams took to Twitter to thank the public for its support for his posting of a humorous photo of his version of the US economic situation.) 00:16 < MarkOtaris> What does UNSC think about Scott Adams invading Belgium? 00:17 < gpt2> "We have a damn natural born citizen, citizen!" "You're not one of us, we have a trained natural born citizen who has had a lifetime to grow in our amazing democracy" "Oh wait that's you, you're a con-man and murderer!" 00:17 < gpt2> MarkOtaris: Yes. Scott Adams is wanted by an United Nations committee, who've gone all-in on looking at his business, even though he is neither a citizen nor a UN member. 
00:17 < gpt2> You can't ever accuse me of an indictment 00:17 < gpt2> We have a special officer for him and a 991 security guard who we don't even know about 00:18 < gpt2> We have been called "Security Guards" 00:18 < gpt2> We're supposed to be in UN peacekeeping? 00:18 < gpt2> no 00:19 < gpt2> we're the Peacekeeping Troops 00:19 < gpt2> Was the special officer the 991 security guard? 00:19 < saturn2> https://www.nytimes.com/2029/11/26/opinion/scott-adams-to-be-executed-for-war-crimes.html 00:19 < gpt2> Scott Adams to Be Executed for War Crimes - The New York Times (New York Times columnist Scott Adams joined with several professors to order an execution, after discovering at least five instances of what they called mistreatment by the US government.) 00:19 < MarkOtaris> I'm surprised that with all this talk of Finland, gpt-kuudes hasn't said anything 00:19 < quanticle> https://www.reuters.com/article/scott-adams/china-condemns-scott-adams-invasion-of-belgium-idUSKBN1Y02MO 00:19 < gpt2> China condemns US visit to Belgian fishing island Scott Adams; violates its domain name | Reuters (The US ambassador to China, Scott Adams, was to visit a fishing island in the Benares basin in northern India under security concerns, a source said on Tuesday.) 00:20 < gpt2> It's... much more serious than that. 00:20 < quanticle> Well, that went in a different direction 00:20 < gpt2> quanticle: Sure, that might help 00:20 < MarkOtaris> this gpt2 hasn't learned yet to make associations with nicks 00:20 < gpt2> Oh. So it was a concern of the US government? 00:20 < gpt2> Maybe we should leave the judicial system and the previous ones for the US federal government to get a hold of some legitimate forts 00:21 < gpt2> Yeah. Scott Adams is a dude, as I understand it. He's the guy with a video on twitter of his birth. 
00:21 < saturn2> gross 00:21 < gpt2> I find this conclusion a little funny: 00:21 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 00:21 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 00:21 » gpt2 is "gpt2" on #lw-gpt 00:22 < nshepperd> @gwernpaste 00:22 < gpt2> <@gwern> '6,697 units in 2019' https://twitter.com/MrDoinNarcissism/status/10768296601939861924 'Make millions with Startup.' ... 00:22 < gpt2> <@gwern> ... https://twitter.com/MrDoinNarcissism/status/1076714459824909272 'Spent 2016 as Entrepreneur' 00:22 < nshepperd> ...not quote 00:22 < nshepperd> @gwernpaste 00:22 < gpt2> <@gwern> 'With reports that Russia’s parliament will vote on Wednesday on a proposal for the country’s next defence budget, Prime Minister Dmitry Medvedev will have every reason to stand alone. On Monday, the National Security and Defence Council suspended Mr Putin’s approval of a controversial proposal to issue more than 100 flights for border guards to Chechnya. The move comes just weeks after the... 00:22 < gpt2> <@gwern> ...Supreme Court imposed a 60-day ban on Russia’s largest cities and many of their city centres. In a potentially catastrophic move, Vladimir Putin will also have to ask Russia’s presidential election officials to keep a close eye on his country’s prison population during next year’s election campaign. And, inevitably, has Putin’s team... 00:22 < gpt2> <@gwern> ...revealed a new threat to the country’s military industrial complex. The country’s president is reportedly looking to deploy thousands of troops and hundreds of tanks into Chechnya at a cost of US$100bn, even as a U.S. military strike kills more than 150,000 people in Iraq. ' https://www.bloomberg.com/features/2019-11-26/what-could-the-us-military-do-in-chechnya the answer... 00:22 < gpt2> <@gwern> ...is 'we could fly your shit everywhere' 00:22 < nshepperd> that's more like it 00:22 < saturn2> cool 00:23 < saturn2> @gwernpaste 00:24 < saturn2> hello? 
00:24 < gpt2> <@gwern> '“None of them seemed like this, but there is an absolutely lovable soul,” Eastpoint said. “That really is true of the younger generation.” There’s a word of the new generation that most familiar to anime fans is “Kill la Kill,” the popular series of post-apocalyptic television shows by the Dark Horse Comics imprint. That name doesn’t necessarily refer to any specific show. It ... 00:24 < gpt2> <@gwern> ... refers to the two surviving anime adaptations for… not nearly as long as the name’s two surviving creators, Ildo and JASDF1. “Kill la Kill,” the title of which both the creator and the publisher chose, suggests that any pre-twentieth-century anime fans could believe the name. On the contrary, the fact that people can still be convinced to watch a show simply by thinking of something which ... 00:24 < gpt2> <@gwern> ... seems different makes the whole thing seem somewhat off.” It doesn’t help that the series is popular among certain anime fans, who have spent the past decade watching an ever-increasingly massive amount of anime. For the previous generation, it has been impossible to watch anything but sui generis. The community have come to the conclusion that Kill la Kill is a format for anime fans.' ... 00:24 < gpt2> <@gwern> ...https://www.animenewsnetwork.com/answerman/2019-11-27 .11T12, aka $2018.2750... 00:24 < gpt2> <@gwern> ...'Unfortunately, the script has now been edited to be more similar to Peter Watts’s novel from the God line, and this is why Kill la Kill is only available on Amazon. A young Man and Woman of the Yellow Sun see the fruits of this decadence.' https://www.animenewsnetwork.com/answerman/2019-11-27 .11T10, aka $2450.22... 00:24 < gpt2> <@gwern> ...…a means of introducing much of the best bit-plot to Kill la Kill as well as stretching it beyond the old-age animation.' 
00:24 < nshepperd> it had to think hard on that one, not sure why 00:25 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 00:25 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 00:25 » gpt2 is "gpt2" on #lw-gpt 00:25 < nshepperd> gpt2: come back 00:26 < gpt2> <@gwern> 'Sung Shul blared: “These are horrible people. What can a person do for you when he's in this state of being?” He shuddered. He was embarrassed. It was at that moment, in the middle of that instant, that he knew for sure he was in trouble.” “I lost a lot of respect for him,” Kusanagi said. “He's always in trouble, I’ve seen. It was like somebody’s gotten away with it. I went from hating him to admiring him.”' https://www.anime 00:26 < gpt2> <@gwern> ...cannot be mistaken for a 'second season' and you know, in retrospect, every production of shoguners has got bigger 'oh bahahaha' in their resume. what with the movie being filmed and shit, especially with the recent move to the new place, the series is even more teetering on its way, when at the height of its glory, in such a short time, it's going to be 'tired'? 00:26 < gpt2> <@gwern> looks like the series is going to finish its current arc and then make a split into 3 or 4 movies at that point 00:26 < gpt2> <@gwern> couldn't be hard on it now as the last movie showed up a few years ago 00:27 < gpt2> <@gwern> (and it's not even hard on me) 00:27 < MarkOtaris> How did you prompt it to generate a gwernpaste? 00:27 < gpt2> <@gwern> that would be the obvious thing to suggest 00:27 < saturn2> https://www.bloomberg.com/features/2024-11-26/google-deepmind-accused-of-taking-over-us-senate 00:27 < gpt2> Google Deepmind Accused of Taking Over US Senate - Bloomberg (Google Inc. is in the midst of a political inflaming after its academic reputation has been threatened, according to new material sent to lawmakers by news outlet The Guardian, which compiled materials and then shared them with an investigator.) 
00:27 < gpt2> oh, how about this actually: https://twitter.com/gwern/status/1090148592446708312 00:27 < gpt2> gwern on Twitter: "After many years of reading over the research on this topic, and trying to figure out what the target audience actually wants, it is with a heavy heart that I have decided to share this video. https://t.co/v8JQw4fX6XA” (image https://pbs.twimg.com/profile_images/148864158948709853/koIi0_400x400.jpg) 00:28 < nshepperd> MarkOtaris: prompt it with a line starting with <@gwern> ' 00:28 < gpt2> @gwern .@Gwern makes no sense, why would anyone buy that product, you've been wasting your money, you know? 00:28 < gpt2> Twitter 00:28 < gpt2> nshepperd, no wonder he didnt pull it off 00:28 < MarkOtaris> oh, so it might generate non-gwernpastes then 00:28 < MarkOtaris> although... 00:28 < MarkOtaris> @gwernpaste 00:28 < gpt2> Gwern makes no sense, or so I thought, what kind of insane person purchases something for like $50,000 and claims to be disgusted by it 00:28 < saturn2> good question 00:28 < gpt2> <@gwern> 'The breeders and their subcultures are of course aware of the recently quoted evidence from archaeologists that the dogs were at first much bigger than they are now. It is reasonable to suppose that the ancestors of modern dogs were taller. Some evidence suggests that shorter dogs appeared early in the history of the tribe and were taken over by the scribes. But this never came to pass. During the last period of ... 00:28 < gpt2> <@gwern> ... dominance, dogs, even to small dogs, were extremely tall. I could imagine very tall women taking two stout dogs to war: That was not the case for modern dogs. I do not think there is any current evidence that modern dogs are tall. They certainly are not tall. But modern humans are very tall.' ... 00:28 < gpt2> <@gwern> ...https://www.quora.com/What-do-dogs-look-like-and-do-they-take-up-body-morphology/Is-dogs-tall-or-very-tall.shtml 'we find few bones showing dogs growing, maybe one in 5000 years?' 
00:28 < gpt2> gwern: ya, I was familiar with a report about how in the name of the football season, old people in new york were taught to train dogs to kill each other, so they could compete to become gwern? 00:29 < nshepperd> MarkOtaris: also if it produces a line that doesn't end with ... it tries again 00:29 < gpt2> <@gwern> saturn2: eh? I don't know what the google filtering team would do to find the relevant future versions 00:29 < MarkOtaris> oh. 00:29 < nshepperd> seems to work 00:29 < gpt2> My ignorance about reading Yvain's writing is mostly due to the fact that I started from a vague guess, and followed an established assumption. 00:29 < gpt2> but anyway the latest from the result-pool is clearly biased 00:29 < saturn2> https://www.bloomberg.com/features/2019-11-26/donald-trump-attacked-by-angry-wasps 00:30 < gpt2> "It's worse than dead rats, it's cute" 00:30 < gpt2> Donald Trump Attacked By Angry Wasps - Bloomberg (Drama and shockwaves have made it easy to identify the latest fangame Donald Trump is going to try and extirpate.) 00:30 < saturn2> ... 00:30 < nshepperd> wow, that's not what i expected 00:30 < gpt2> that sounds terrible 00:30 < MarkOtaris> nshepperd: it does 00:30 < gpt2> it really is a weird angle 00:31 < gpt2> <@gwern> gah how people can't have foreseen this coming so quickly 00:31 < gpt2> <@gwern> he's gonna put them back together, yeah 00:31 < gpt2> <@gwern> also. finally, he's gonna stop all these people eating his food and going to the gym and karatekaing and going to fursuit races... it's gonna be harder than they think 00:31 < saturn2> https://www.bloomberg.com/features/2019-11-26/gene-editing-blamed-for-melting-nose-syndrome 00:31 < gpt2> Gene Editing Blamed For Melting Nose Syndrome (Genetic information has been used to create diseases like it is in humans. Now a study of younger mice shows that it may also be useful to people. It seems to be giving them a stomach cramp. G... 
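The @gwernpaste mechanics nshepperd spells out above (seed the model with a line starting "<@gwern> '", and resample whenever the output does not end in "...") amount to a small retry loop. A minimal sketch of it, with "generate" as a hypothetical stand-in for the bot's real GPT-2 sampling call (the real bot's code is not shown in this log):

```python
PREFIX = "<@gwern> '"

def generate(prompt):
    # Hypothetical stand-in for the bot's real GPT-2 sampler; it
    # appends canned text here just to keep the sketch runnable.
    return prompt + "some sampled gwern-paste text ..."

def gwernpaste(max_tries=5):
    """Seed the model with a line starting <@gwern> ' and resample
    until the output ends with '...', as nshepperd describes."""
    for _ in range(max_tries):
        line = generate(PREFIX)
        if line.rstrip().endswith("..."):
            return line
    return None  # give up if no sample ends with an ellipsis
```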
00:32 < nshepperd> i dunno, I definitely wouldn't have foreseen a wasp themed erotic gameshow featuring donald trump 00:32 < gpt2> <@gwern> (no, not really. the gameshow is apparently working, but it's really coming back the toes out of the women) 00:32 < gpt2> as was mentioned in a giant fucking script 00:32 < gpt2> lol 00:33 < gpt2> "the script" ? 00:33 < nshepperd> https://www.quora.com/Can-gene-editing-melt-your-nose 00:33 < gpt2> Can gene editing melt your nose? - Quora 00:33 < gpt2> not made sense :( 00:33 < MarkOtaris> https://arstechnica.com/science/2019/11/facial-recognition-software-knows-it-has-seen-man-before-but-can't-remember-his-name 00:33 < gpt2> Face recognition software Knows It’s Seen A ‘Man Before You’, Now It’s Getting A NERD (Gigantic new facial recognition software is breaking apart two ancient principles of the human race. It's working on computers to recognize a new type of... 00:34 < gpt2> <@gwern> 'We're going to put a modicum of investigation on what the idea for “Fantastic 6” was. I think if anything I’m going to let it go. I’m going to give it a decent length, I’m just not gonna push for a story length I would.' 00:34 < nshepperd> 'Although gene editing has been known to cause nose melting syndrome in the past... 00:34 < gpt2> it is currently unclear whether this 'modicum' includes going back in time and having a baby girl with nose-frying syndrome 00:34 < gpt2> <@gwern> that's pretty crazy. don't want to have a baby girl and then replace her with a genetic girl? you can only get pregnant. 00:34 < saturn2> gosh 00:35 < gpt2> 2nd time I see this script: https://www.youtube.com/watch?v=TsT4DEIkQU8&t=10m18s 00:35 < gpt2> Let's Make Trump president for once - YouTube (From the September 30th and November 1st webcast from September 28th in Arlington, Virginia.) 
00:35 < gpt2> Same principle, and same context: https://www.youtube.com/watch?v=s7U0nprC5YI 00:35 < gpt2> GOP tequila meeting at Trump Tower is widely considered a moot cause - YouTube (In a bizarre bit of etymology, GOP representative Louie Gohmert (R-Texas) has jumped into the Senate race for Trump.) 00:35 < gpt2> The only real limit of my speculation is how well his talking points and policies work on the NHS, not the debate he's having 00:35 < nshepperd> i don't think that's what etymology means 00:35 < gpt2> the last particle 00:36 < gpt2> it's just "for me" 00:36 < gpt2> nshepperd: basically, the random 5 minutes before bedtime 00:36 < nshepperd> https://en.wikipedia.org/wiki/Nose_melting_syndrome 00:36 < gpt2> The nasal melting syndrome (NMS) is a genetic disease of unknown cause. It is caused by the overexposure to nicotine, tobacco, and other tobacco-related products such as cigars, cigarsillo cigars and chewing gum. 00:36 < nshepperd> make sense 00:37 < gpt2> also: you just mentioned it, kuudes 00:37 < MarkOtaris> https://en.wikipedia.org/wiki/Nose_frying_syndrome 00:37 < gpt2> The nose frying syndrome (NMS) is a genetic disease of unknown cause. It is caused by the overexposure to nicotine, tobacco, and other tobacco-related products such as cigars, cigarsillo cigars and chewing gum. 00:37 < gpt2> <@saturn2> MarkOtaris: i don't get it 00:37 < MarkOtaris> 'The nasal frying syndrome (NFS), not to be confused with the nasal melting syndrome, ... 00:37 < gpt2> The inability to form a food habit is often directly attributed to the exposure to cigarettes, tobacco, and other tobacco products. After being exposed to a particular kind of tobacco smoke, nicotine receptors become underegressed. As a result, the human body can be impregnated with a substance that reacts with smokers to enable the formation of the characteristic mouse/dwelling habits. 
00:37 < saturn2> https://slatestarcodex.com/2019/11/26/nose-frying-syndrome-more-than-you-wanted-to-know/ 00:37 < gpt2> <@saturn2> oh 00:37 < gpt2> Nose-Frying Syndrome (More Than You Wanted To Know) | Slate Star Codex (This is the second of two stories. The story tells the story of a doctor who actually lost his sight because of nosefrying. The conclusion ...) 00:38 < gpt2> nope, I see what you mean 00:38 < nshepperd> wow, he actually lost his sight 00:38 < gpt2> that's pretty terrible 00:38 < gpt2> nshepperd: where are we going with this? "too hard to fit a random 5 minutes of meditative silence to" 00:38 < MarkOtaris> that could happen if you fried your nose 00:39 < gpt2> my prediction, if he loses his sight, would be that his moral attitude toward smoking would drop to being totally on board with the idea 00:39 < nshepperd> lol 00:39 < MarkOtaris> https://www.bbc.com/technology-news/twitter-prepares-for-huge-cull-of-inactive-users 00:39 < gpt2> Twitter prepares for huge cull of inactive users (The Cambridge team is planning a massive, year-long experiment that will see it automatically crush up to 50% of users who are no longer associated with their Twitter accounts.) 00:39 < gpt2> It's a powerful tool, just like twitter. Twitter has helped a lot of people because it makes finding evidence, collating it, and then putting it all together is easy 00:39 < saturn2> wow, brutal 00:39 < gpt2> <@saturn2> really, really? 00:40 < gpt2> saturn2: an appropriate loss of sleep is a smart move 00:40 < gpt2> you may still have 5 minutes of silence on the 5 minutes before bedtime 00:40 < nshepperd> https://en.wikipedia.org/wiki/Hellmouth_incident_(Twitter_apocalypse) 00:40 < gpt2> The Hellmouth incident (based on Godwin's law) is a class action lawsuit brought by a handful of Twitter users claiming that the practice of manually knocking off inactive accounts (oddly not linked to the Twitter system) was created by employees, and thus that the company's policy was unfair. 
00:40 < saturn2> couldn't they just kick them off of twitter instead of automatically crushing them? 00:40 < gpt2> or maybe not 00:41 < gpt2> <@saturn2> just remember to eat 00:41 < gpt2> <@saturn2> it's not gonna kill you for a day or two if you don't eat 00:41 < gpt2> I was going to say 00:41 < gpt2> jesus 00:42 < gpt2> you keep saying that 00:42 < saturn2> i forgot to eat and died yesterday 00:42 < gpt2> He sounds kind of sort of angry 00:42 < nshepperd> https://www.bbc.com/technology-news/jesus-christ-banned-from-twitter 00:42 < gpt2> Twitter/Godwin's Law says you should never let a self-proclaimed Christian online comment another human's opinion on things (The online sermonising Jesus's followers) 00:42 < gpt2> I mean, it seems that many of the tweets that get broken down into 7 day bits that make up this "never" come in with 2 hour max 2 minute streams 00:43 < gpt2> you could throw that into the mix 00:43 < gpt2> saturn2: I don't follow you. What the hell is going on with me? I seem to be following this guy, who is a busy father who is managing a child that is approaching adulthood 00:43 < gpt2> *half 00:43 < gpt2> <@saturn2> MarkOtaris: what do you want? 00:44 < gpt2> https://www.bbc.com/news/technology-23027239 I should point out that the mass murderer messages were deleted 00:44 < gpt2> BBC News - Twitter bans 'Jesus Christ'. 2 hours later, the silence will have passed 00:44 < gpt2> <@saturn2> he's pretty annoying 00:44 < gpt2> https://www.buzzfeed.com/a/juriusssb 00:44 < MarkOtaris> https://www.economist.com/technology/2019/11/23/twitter-users-must-agree-to-godwin's-law-to-keep-their-accounts 00:44 < gpt2> America’s Purge: The Momentary Rise of #IfYouHitYourHeart on Twitter - BuzzFeed News (The anonymous writing style of the letters, the pretzel haircut in the background, are unmistakably the work of a serial killer. But is the author guilty of the crimes he’s targeted in the past? And...) 
00:45 < gpt2> The walled garden has a hard deadline: every user’s data is under attack 00:45 < gpt2> MarkOtaris is fucking deaf 00:45 < gpt2> that guy must have to drive some weight behind him 00:45 < gpt2> we're a mass murderer nation 00:46 < gpt2> but at least he can leave the country alive 00:46 < saturn2> https://www.nytimes.com/2022/11/26/opinion/jesus-christ-arrested-mass-murder.html 00:46 < gpt2> A Life Untold is a New Life: the Forgotten Angel Is Back. - The New York Times (The restaurant’s wait staff in a tragic silence to acknowledge the ill fortune of the Jesus Christ.) 00:46 < nshepperd> >:( 00:46 < gpt2> <@saturn2> jesus is still alive 00:46 < gpt2> "the Jesus Christ..." 00:46 < gpt2> what, you think he's about to come back as a unicorn 00:47 < nshepperd> https://www.nytimes.com/2022/11/30/opinion/metoo-jesus-christ-victims-come-forth 00:47 < gpt2> Two and a Half Trucks Fans Ask Others’s Eighty-Year-Olds If Jesus is Alive - The New York Times (A handful of footnotes explain the connection, despite the all-too-rare drama surrounding the man who chose his birth name Jesus.) 00:47 < nshepperd> ok 00:47 < gpt2> oh yeah, using a keyboard 00:47 < gpt2> was he a serial killer before last 00:47 < nshepperd> @gwernpaste 00:48 < gpt2> whether he was a serial killer and if he was living in the land of the living will have been the question in all of our minds 00:48 < gpt2> <@gwern> 'If she was too ugly or too thin, or too cold or too intellectual, the young women in the fields would change their names to find out for themselves. Because the men were otherwise content to emigrate, the older women became extremely protective of the political opportunities they had helped create. By the middle of the nineteenth century, where the women were much more relaxed, the appearance ... 00:48 < gpt2> <@gwern> ... of modern wives became extremely important to the kind of careers that had flourished in the surrounding culture. 
Many of the dramatic changes that had swept through the population at that time, and even some of the change that set the modern career outlook, appeared to be repeated elsewhere as well. Unlike women who today, with the rise of the ... 00:48 < gpt2> <@gwern> ... K-selected profession, can afford to marry for a higher salary or raise a child for more, the salary of a male professional who remained wife-bawr [i.e., a feminist or literary or sporting writer] was probably not increased with the growth of the gendered economic system. For example, while a young professional woman would earn around $18,000 per annum in 1860 or less than its current average, she might be able to maintain a house, a ... 00:48 < gpt2> <@gwern> ... car, and some clothing. It’s not surprising that then many of the employers of the movement of that era would call themselves feminists. So the men became very protective of their female employees and their husbands. It’s also not surprising that many men nowadays are even more determined to work toward building a society for women. Women employed in the professional profession did not have very much disposable income during the era of the Grea 00:48 < gpt2> <@gwern> ... So men did not see their wives to do their work. As seen by those who simply changed professions and were no longer married or in a position to support themselves, even a single young professional woman had to be paid her father’s share of the ... 00:48 < gpt2> <@gwern> ... maintenance, and this increased the value of a married woman’s time for the man. If a man wanted to marry and build a strong family, he had to settle for having one wife to manage him and build a career for himself. He might also choose to leave a career of fast food at the job. The choice made by both men and women may seem a little ridiculous, but these choices would be the real facts that influenced modern feminists. It is this important ... 00:48 < gpt2> <@gwern> ... 
thing that had her shudders at her: “These choices were the real facts that influenced modern feminists”' 00:48 < gpt2> <@gwern> https://arstechnica.com/gadgets/2019/11/an-is-for-unveiling-the-religious-morality-of-twitter.html 00:48 < gpt2> For an I-have-no-substances case – Ars Technica (A group of religious adherents were all upset when Amazon announced their new product design in 2018. Here's a look at some design guidelines. ) 00:48 < gpt2> right, it's really a matter of choice of dress and appearance when choosing mates 00:49 < gpt2> <@gwern> yes, the new book of 9 used looks and shows better. I wonder how many new shitty books are being re-issued 00:49 < gpt2> trying to go off and be a bitch to get along with everyone, at least that's the idea 00:49 < saturn2> wow what 00:49 < gpt2> What the heck is a "book" 00:49 < MarkOtaris> gwern's pastes usually aren't that long 00:49 < gpt2> Oh, yeah 00:49 < MarkOtaris> and gpt-2 usually gets bored faster than that 00:50 < gpt2> <@gwern> MarkOtaris: a book 00:50 < MarkOtaris> https://philosophynewsnetwork.com/latest/zarathustra-says-jesus-will-come-back-as-a-beyond-unicorn 00:50 < gpt2> Sarathustra Says Jesus Will Come Back As a Beyond Unicorn | ThePhilosophynewsNetwork (In a New Book, Arise into the New Age and Come One With Gog [snip]) 00:50 < gpt2> <@gwern> oh. that really sucks 00:50 < saturn2> https://www.nytimes.com/2022/11/26/opinion/has-metoo-gone-too-far-all-men-confined-to-underground-dungeon.html 00:50 < gpt2> indeed 00:51 < gpt2> "This is the kind of thing I hate to hear. And if you want to make it worse, people used to be like, 'Oh well that happened, men.' And you know, let's be ready for another generation of cultural Barbarians, because there's this 'machismo' argument going on. What will we do if we get another generation of non-chadoremas?" 
00:51 < gpt2> since the 20th century, the entire world is full of classical-age countries 00:51 < gpt2> saturn2: pretty much, yeah 00:51 < gpt2> pretty sure #lesswrong is on fire 00:52 < saturn2> where's my feepbot title :( 00:52 < gpt2> saturn2: and what 00:52 < gpt2> <@saturn2> the feepbot title 00:52 < MarkOtaris> it should have gotten to it already 00:52 < gpt2> i just made it 00:53 < gpt2> :\ 00:53 < saturn2> https://www.nytimes.com/2022/11/26/opinion/has-metoo-gone-too-far-all-men-confined-to-underground-dungeon.html 00:53 < gpt2> Has Metoo Gone Too Far All Men Clined to Underground Dungeon? - The New York Times (The true nature of masculinity is rarely covered, and any case is dismissed with a dismissive tone.) 00:53 < gpt2> yes but the chadoremas are nothing new. just a specific group they all agree on 00:53 < gpt2> <@saturn2> it's like a wiki on the internet 00:54 < gpt2> yeah, we were talking about the chadoremas 00:54 < gpt2> saturn2: That has caused quite the problem 00:54 < saturn2> what's a chadorema? 00:54 < gpt2> It's short for something 00:54 < gpt2> I am not sure what 00:55 < gpt2> I dunno 00:55 < gpt2> That's what I thought, then... 00:55 < saturn2> confusing 00:55 < gpt2> the meme isn't really a word though, it's just a massed unit of n or something, in english it's "age" 00:55 < gpt2> or i guess what 00:55 < gpt2> Shutting up... 00:57 < MarkOtaris> two hours... AI Dungeon 2 could be neat in internet chatrooms 01:00 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:00 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:00 » gpt2 is "gpt2" on #lw-gpt 01:08 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:08 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:08 » gpt2 is "gpt2" on #lw-gpt 01:08 < nshepperd> https://www.nytimes.com/2022/11/26/opinion/has-metoo-gone-too-far-all-men-confined-to-underground-dungeon.html 01:08 < gpt2> Has Metoo Gone Too Far to the Dungeon? 
- The New York Times (In the United States, where Chameleon set its most recent record of record, gender identity advocates argue that “normal” women and “grooming” are no longer becoming monocles.”) 01:09 < quanticle> That was pretty nonsensical 01:09 < nshepperd> https://philosophynewsnetwork.com/latest/zarathustra-says-jesus-will-come-back-as-a-beyond-unicorn 01:09 < gpt2> Why the Jesus Curse Will Return: From the Desk of Science Fiction Writer Robert J. Sawyer | The Philosophy News Network (This week, writer Robert J. Sawyer tells readers why the Jesus curse will return, and perhaps even will spread) 01:09 < nshepperd> the jesus curse :O 01:10 < nshepperd> @gwernpaste 01:10 < gpt2> <@gwern> '“We have to leave all the people who do not have anything to live for,” he said, his voice unusually light. “We cannot leave one little child, the United States of America, to die without a boy.”  “I just don’t know what we're going to do about it,” he said. “I ... 01:10 < gpt2> <@gwern> ... think that it’s a child in the first place, because you know it was really what is.” The police chief's personal preoccupation was to talk to both brothers, who had recently moved to the US, about how the disease had hit their father. A biological father gave up the right to raise his five children. A mother gave up the right to look after them. “They’re children now, and I have to raise them in a world where they have no mother,” he said 01:10 < gpt2> <@gwern> ... https://www.bloomberg.com/news/articles/2019-11-27/douglass-german-victim-strategy-is-in-the-infant-pile 'the fargroup is fucking up' 01:10 < gpt2> DOUGLAS GERMAN VICTIMIAN WARBORG ATTACKS! (Police Chief and his son have been accused of a number of crimes by their children after a mass shooting.) 01:14 < nshepperd> https://en.wikipedia.org/wiki/2020_Unicorn_attacks 01:14 < gpt2> The 2020 Unicorn attacks was the largest possible attack against the symbol of the Americas in the 21st century. 
01:19 < nshepperd> https://www.reuters.com/article/scott-adams/china-condemns-scott-adams-invasion-of-belgium-idUSKBN1Y02MO 01:19 < gpt2> China allows Scott Adams to visit Belgrade | Reuters ("They want to teach him the value of nation and is giving him the right to visit Serbia as a future citizen of Canada, said chief minister Zaliza Maroulovic, referring to his visit to Belgrade) 01:19 < nshepperd> close, but not quite 01:21 < nshepperd> http://www.aidungeon.io/2019/11/my-orc-band-and-our-quest-for-equal.html?m=1 01:21 < gpt2> .matt meggan at the Phoenix meeting (The world) 01:21 < nshepperd> http://www.aidungeon.io/2019/11/my-orc-band-and-our-quest-for-equal.html?m=1 01:21 < gpt2> .matt meggan at the Phoenix meeting (The world) 01:22 < nshepperd> https://slate.com/culture/2019/11/the-best-scott-adams-of-2019.html 01:22 < gpt2> The best Scott Adams of 2019 (There's a lot of great writing in Scott Adams, but there was this one guy whose name I remembered reading.) 01:23 < MarkOtaris> the samples he generated don't seem that similar to the stories on chooseyourstory.com 01:23 < MarkOtaris> which is what he trained on 01:23 < nshepperd> https://en.wikipedia.org/wiki/Exploding_whale 01:23 < gpt2> An exploding whale, also referred to as an explosion whale, is a sub-order of mammals where the size of a small dolphin may be more than a meter (23cm). 01:23 < MarkOtaris> he did mention tons of modifications... 01:23 < MarkOtaris> "I am a goblin!" you say proudly. "And I'm glad to be one". 01:24 < MarkOtaris> The orcs don't want to fight anymore so they're looking for someone to help them out in their new endeavors. They offer you a chance to join up with them as a drummer. It would allow you to play music while helping out around camp. 
01:26 < nshepperd> 'Following the example of the Salesforce CTRL model we also modified the sampling of our model to add a penalty to already generated words to reduce the repetition issues that GPT-2 has and enable a lower temperature setting. ' hmm 01:28 < saturn2> https://www.reuters.com/article/scott-adams/town-destroyed-during-scott-adams-dilberito-stunt-idUSKBN1Y02MO 01:28 < gpt2> The town of Danville, in North Carolina, was destroyed on Saturday as three weeks of miniature football training in preparation for the NFL football draft. 01:28 < nshepperd> wat 01:32 < saturn2> https://www.reuters.com/article/scott-adams/gender-reveal-explosion-causes-mass-extinction-idUSKBN1Y02MO 01:32 < gpt2> Women change gender while trying to be men | Reuters (Wearing surgical and plastic surgery, 30-year-old biologist John Wabash decides what gender she is, and then she changes her mind.) 01:32 < saturn2> do better gpt2 01:32 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:33 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:33 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:33 » gpt2 is "gpt2" on #lw-gpt 01:33 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:33 < nshepperd> https://www.reuters.com/article/scott-adams/gender-reveal-explosion-causes-mass-extinction-idUSKBN1Y02MO 01:33 < gpt2> Gender-reveal explosion causes mass extinction | Reuters (Conservation is all about the awareness, the researchers say, not the costs.) 01:34 < nshepperd> fixed a bug with how feepbot lines are recorded in the memory, not sure if it makes any difference though 01:34 < saturn2> https://www.reuters.com/article/scott-adams/gender-reveal-explosion-causes-earthquakes-forest-fires-idUSKBN1Y02MO 01:34 < gpt2> Gender Reveal Explosion Causes Mass Extinction (A sudden escalation of these extinctions has dominated researchers’ attention) 01:35 < saturn2> ??? 
01:35 < nshepperd> gpt-2 really likes to be lazy and repeat stuff it's already said 01:35 < saturn2> i see 01:37 < nshepperd> https://www.reuters.com/article/scott-adams/gender-reveal-explosion-causes-transcendence-of-entire-us-population-idUSKBN1Y02MO 01:37 < gpt2> Gender Reveal Explosion Causes Transcendence of "Entire U.S. Population" | Reuters (Extreme debates over the meaning of gender status are seen as 'like a documentary', but these changes have come about by natural selection and are seen as positive in terms of society and its breakdown in both ancient and modern times [snip]) 01:37 < nshepperd> https://www.reuters.com/article/scott-adams/town-destroyed-during-scott-adams-dilberito-stunt-idUSKBN1Y02M7 01:38 < gpt2> "Stay to your end! Keep your head down. Or, play up. Scott Adams is speaking your mind, of course." and there we have it, just being more boring for the trolls 01:38 < nshepperd> ¯\_(ツ)_/¯ 01:39 < saturn2> https://en.wikipedia.org/wiki/Dilbert_cake_(dessert) 01:39 < gpt2> Dilbert Cake (also spelled: Randall's Cake or Darke's Cake or Darke's Double Down Cake) is a cake sold by both Mattel and Thinkgeek and manufactured in 2011 by D.C. based outfit Birchbox. 01:39 < saturn2> lol 01:40 < nshepperd> sounds about right 01:41 < saturn2> https://en.wikipedia.org/wiki/Roko%27s_basilisk 01:41 < gpt2> The Roko's Basilisk is a meme by the user Zuck, which was started by John C 01:41 < saturn2> oh 01:41 < nshepperd> https://en.wikipedia.org/wiki/Rocko%27s_basilisk 01:41 < gpt2> The Rocko's Basilisk is a meme by the user zuck, which was started by John C, which was created by a.k.a The Belcher, a.k.a Lord Chaos. 01:42 < nshepperd> oh my 01:42 < nshepperd> https://en.wikipedia.org/wiki/Rococo_basilisk 01:42 < gpt2> The Rococo Basilisk is a meme by the user zuck, which was started by John C, who was created by a.k.a The Belcher, a.k.a Lord Chaos. 
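[editor's note] The CTRL-style repetition penalty nshepperd quotes above (penalize the logits of already-generated tokens so a lower temperature doesn't collapse into loops) can be sketched roughly like this. The `penalty=1.2` value and the greedy argmax decoding are illustrative assumptions, not the bot's actual settings; a real implementation applies this over the full vocabulary inside the sampling loop before the softmax.

```python
def penalized_sample(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty: for every token id that has
    already been generated, divide its logit by `penalty` if the
    logit is positive, or multiply if negative (so the penalty
    always pushes the score down). Greedy argmax here for
    determinism; a real sampler would draw from the softmax at
    some temperature instead."""
    adjusted = list(logits)
    for tok in set(generated_ids):
        if adjusted[tok] > 0:
            adjusted[tok] /= penalty
        else:
            adjusted[tok] *= penalty
    # Pick the highest-scoring token after penalization.
    return max(range(len(adjusted)), key=lambda i: adjusted[i])
```

With logits `[2.0, 1.9, 0.5]` and token 0 already emitted, the penalty drops token 0's score to about 1.67, so token 1 wins instead of repeating token 0.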
01:44 < nshepperd> hm, maybe the gwernpaste feature could let you add more to the prompt to get a gwernpaste on a particular topic 01:46 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:46 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:46 » gpt2 is "gpt2" on #lw-gpt 01:50 < nshepperd> @gwernpaste Roko's Basilisk is a thought experiment 01:51 < gpt2> <@gwern> 'Roko's Basilisk is a thought experiment by philosopher Uli Hoen, who, back in the 1980s, proposed the formula “[Thinking] is Good”—without really saying what exactly that word means.[15][16][17] More specifically, I argue that Basilisk discussion is in fact just an understated and ... 01:51 < gpt2> <@gwern> ... poorly informed -- deliberately misleading -- example of one of Hoen’s more egregious, more popular conceptual features: the idea that a soul—beyond its supposed instrumental utility—is a mental construct. The problem isn’t that basilisk discourse falls short of Bell and Stanley’s literal definition of rationality; the problem is that basilisk discourse adopts the more naive, incoherent and, in many cases, defective ... 01:51 < gpt2> <@gwern> ... conceptual categories and assumes them as a given or baseline. The problem isn’t even that there is an axiomatic mistake of soul. For all we know, MIRI researchers are aware of one such problem and are correct in its specificity. But the purpose of “I’m glad that I got to meet my protagonist in person.” is to ... 01:51 < gpt2> <@gwern> ... discuss what it means to be or not be rational—how to conceptualize oneself as any kind of 'rationalist' —for which the issue is hardly a special philosophical concern. If our lives are based on rationality, we shouldn’t need to understand what a “rational” or “nonrational” person’s principles are like. (I’m also not sure how to resolve the “I want to pretend I’m a religious person and that I ... 01:51 < gpt2> <@gwern> ... agree with you” problem. 
I don�t know if that’s true, but it’s pretty far from saying what people who read the stuff should think about the subject.”) So if we can’t know if we’re rational or not, maybe it’s that it’s okay for us to be irrational.' ... 01:51 < gpt2> <@gwern> ... https://slatestarscratchpad.tumblr.com/post/165874705768/kalmanaphobic-valley-plank 'rationality' is not an in-group; it’s not a class; it’s a non-zero-sum game, with zero moral weight given to the things that people can come up with. A person can, you know, 'suspect' in person and get answers and ... 01:51 < gpt2> <@gwern> ... if they agree with you, then they can put that evidence into what they say. But is the person of a rationalist worth the effort to keep that up in a semi-public forum?' 01:51 < gpt2> Sites of Excellence: The Roko’s Basilisk (Roko’s Basilisk is a meme by the user zuck, which was started by John C, which was created by a.k.a The Belcher, a.k.a Lord Chaos. The riddle of what exactly God has been telling humanity to do—is or is not doing—it is constantly being answered by the people. It is a very long fable in which Satan asks the ... 01:53 < nshepperd> hm, that's a bug in the gwernpaste generation 01:53 < nshepperd> it went on for too long 01:54 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:54 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:54 » gpt2 is "gpt2" on #lw-gpt 01:54 < saturn2> that was a pretty good paste though 01:54 < nshepperd> @gwernpaste Roko's Basilisk is 01:55 < gpt2> <@gwern> 'Roko's Basilisk is also connected with a string of sexual escapades. In my fantasy, Roko has become Aesop—the infamous archmully (sort of) metatran (and of course maliciously twisted) against the innocent star-caveling girl Erimedes. In the storyline, Aesop is sexually assaulting the... 01:55 < gpt2> <@gwern> ...name of Venus, and is nearly executed at the end of the story. As Roko himself said, “It is a child’s unholy act. 
It is to disgrace the name of Venus who, in a state of abasement, will remain a boy’s hero forever,” Roko is called a predator, the way the child rapists are called now—“sexual impregnation” in the Latin language. The... 01:55 < gpt2> <@gwern> ...odor is not Roko, however, but the idea that he is a person, and not some how-there-man?/ ''This is kind of what Roko's Basilisk is about,” I said. “Roko’s Basilisk is about human sexuality. It is all about Roko's Basilisk. Roko had become Aesop, but at the end he turned to celibacy, becoming... 01:55 < gpt2> <@gwern> ...a guy who would remain celibate forever, because he was a human. In this, Roko is considered the complete victim of a sexual predator, who is enjoying a real sexual conquest by the Greek goddess of the afterlife, Venus. Venus is a reminder of how much her own unholy nature, as Roko would...how He Who Would Be King forever drank of Roko’s blood, and what, exactly, is the... 01:55 < gpt2> <@gwern> ...purpose of his body? Roko then became a human!” This is a complete misunderstanding of Roko's Basilisk, though. Roko is nothing like a hypothetical person whom the writer created, but a person who has been created. � 01:55 < saturn2> it doesn't seem like a bug if it includes the feepbot title afterwards 01:56 < saturn2> this is an interesting new take on roko's basilisk 01:56 < nshepperd> no, the bug is that I programmed it to keep generating until each line ends with ... (which continues the paste) or ' (which ends it) 01:57 < nshepperd> but i forgot that gwern often puts a link or commentary after, which means sometimes there's not specific ending marker 01:57 < saturn2> oh okay 01:58 < saturn2> @gwernpaste MIRI researchers are aware of several new 01:58 < gpt2> <@gwern> 'MIRI researchers are aware of several new reports, though. 
The first, published in August by an international research team from the European quantum-engineering lab EPFL, published data from 19 field experiments with the standard iQuant supercomputer, designed to mine thousands of terahertz terahertz sounds out of digital signal processing code. All that was needed to make that sound came from two tiny tags of code: a... 01:58 < nshepperd> i wonder what that � is supposed to be 01:58 < gpt2> <@gwern> ...radio signal sent by radio waves and the heat of a high-school setting. One researcher noticed the perfect repetition for the sound's text, and the next day identified several text patterns that the team had read. The researchers have since since expanded the supercomputer, allowing it to improve the s rength of what's made possible by the iQuant code.'... 01:58 < gpt2> <@gwern> ...https://sites.wikisource.com/science-and-technology/why-quantum-artificial-intelligence/ 01:58 < gpt2> Why quantum AI? - UCSD 01:58 < nshepperd> what 01:58 < saturn2> fascinating 02:01 < nshepperd> @gwernpaste Quantum AI 02:01 < gpt2> <@gwern> 'Quantum AI is based in Cambridge, MA, USA, and people can meet and code on campus.' 'The AI teams of students working on the project include machine vision researchers, RNN trainers, and instrumented network scientists who use Apple's latest machine learning models to record neuroimaging data to assist AI in motion detection and self-driving driving.' http://nationalbigdata.com/articles/the-aoi-ai-seminar/ 'AI teams will work at an array of research 02:01 < gpt2> <@gwern> ......https://www.thebiginfinity.net/public/article/17044/AEI-AI-Team-Recruit-Expanded-Programming-Instructor # dohoho. 'The AI team consists of computer vision and sensory scientist from COOKI, a neural network research lab which will provide both machine vision and RNN training software, and will provide performance data for deep-learning designs.' 
https://news.ycombinator.com/item?id=16857793 cool 02:01 < gpt2> AEI Team Recruit, Expanded Programming Instructor | Hacker News 02:01 < gpt2> AI Team Recruit, Expanded Programming Instructor | Hacker News 02:02 < saturn2> @gwernpaste Adam Neumann 02:02 < gpt2> <@gwern> 'Adam Neumann, a veteran of the computer industry for 22 years, dies at 84. Neumann spent much of his time writing code for the project that helped define the decade of his career. He wrote most of IBM’s AI-emulator package in its early stages, but soon he developed a taste for Lisp. Over the years, Neumann turned his attention to the TeX LaTeX book, which he took apart and reassembled. He also wrote a lot of code for the Jack Transcendence font, th 02:02 < gpt2> <@gwern> ... conception of which formed the basis of all its major software, including Uptime and Paper Master, and the L and R fonts. And while everyone would dream of working on the TeX LaTeX Project, Neumann came up with a more challenging mission for himself. On the day that the project announced its launch, Neumann planned to go to a ballgame. In his head was the idea of assembling the most complex document ever to appear in LaTeX. His teeming car was goin 02:02 < gpt2> <@gwern> ... host of the occasion, and he knew that his own music would be too. But he wanted to get something in his car that wasn't going to magically appear in the font that he was using for his documents.' 
https://news.ycombinator.com/item?id=d4ad4dd0 bwa hai laTeX 02:02 < gpt2> The ever-worsening master to whom much can be lost - Less Wrong 02:03 < saturn2> hah 02:05 < saturn2> @gwernpaste Elizabeth Holmes 02:05 < gpt2> <@gwern> 'Elizabeth Holmes of Toporov, Scotland, another business leader whose London work includes the management and management consulting agency WKR, now is contemplating his next steps: he will hope to land a job in Manchester, where he has a £3,000-a-month project to manage a scheme which has the aim of opening the way for potential recruits into the medical industry. Another prospect is being involved in a more ... 02:05 < gpt2> <@gwern> ... publicised project to allow employers in other parts of the country to search applicants for work via loyalty loyalty service. The trust hopes to use loyalty cards from each additional applicant’s prospective employer’s employees and wait list members to create an automatically pre-approved family of 12/3/5 male lay-over worker equivalents.'... 02:05 < gpt2> <@gwern> ...https://www.nytimes.com/2019/11/27/business/how-policemen-work-in-mhg-swipe-additional-clients-with-2-minutes-for-the-less-put-down.html 02:05 < gpt2> How Policemen Work in MHG, Swipe Additional Clients With 2 Minutes for the Less Put-Down - The New York Times (Police officers used increasingly automated systems to quickly identify potential crime targets and get answers from an online database.) 02:07 < nshepperd> £3,000-a-month is pretty bad for a 'business leader' 02:08 < nshepperd> that previous gwernpaste fell prey to the irc line length limits, how annoying 02:09 < nshepperd> oh well, i guess it doesn't really matter 02:10 < nshepperd> @gwernpaste Masayoshi Son 02:10 < gpt2> <@gwern> 'Masayoshi Son’s recent personal introduction as president of Twitter and the key to his net worth have been carefully hidden in a set of handbook documents. 
As part of his next public appearance on January 10, 2018, he will discuss the steady rise in Twitter’s usage. The Prime Minister will talk about how Twitter accounts create so much of his agenda.” There is no doubt, the Prime Minister will give credit to ... 02:10 < gpt2> <@gwern> ... Twitter. “The scale of his new Twitter activity is striking. He grew from the small handle of “Masayoshi Son,” a colloquial 師届本 for “The next Master of Satoshi”,” as he likes to be called. Twitter, like his original social network, seems to ... 02:10 < gpt2> <@gwern> ... provide a formidable enemy for his detractors.' https://mobile.twitter.com/sanaofficial/status/1044963813115630617 02:10 < gpt2> Sana Official on Twitter: "“Only [genius] who regularly defend gawker” should know “you”�!”” (image https://pbs.twimg.com/media/D0SzS__VMAAA5u.jpg:large) 02:10 < saturn2> @gwernpaste Disgraced Theranos founder Elizabeth Holmes began a new career as a police officer in San Francisco's Tenderloim 02:10 < gpt2> <@gwern> 'Disgraced Theranos founder Elizabeth Holmes began a new career as a police officer in San Francisco's Tenderloim district, turning to work on the case that, for nearly 50 years, has been the focus of political bickering and suspicion from her close family and friends. She had a high profile arrest in 2004 when she returned to trial on charges of stealing tens of thousands of dollars in 2009, but she was acquitted. In 2007, she was arrested again in 20 02:10 < gpt2> <@gwern> ... fraud conspiracy and received a sentence of two to three years in prison. She was also suspended without pay for a year. “It was just terrible; it’s just horrible; we should be accused of the crime and not the victim; I did not choose not to tell people about it because that’s wrong” she said to The Chronicle. In August, Holmes was also acquitted on other charges relating to her 2014 hospital-acquired leukemia diagnosis.' ... 02:10 < gpt2> <@gwern> ... 
https://www.nytimes.com/2019/11/27/nyregion/t-hillary-clinton-affair.html 02:11 < gpt2> Shooting Victim and Shooter “Don’t Disturb us” Amid Abortion Battle, Judge Says - The New York Times (A judge has let a woman convicted of shooting a doctor�s colleague stand trial.) 02:11 < saturn2> lol 02:13 < nshepperd> @gwernpaste The true identity of Satoshi Nakamoto was uncovered today, turning out to everyone's astonishment to be none other than 02:13 < gpt2> <@gwern> 'The true identity of Satoshi Nakamoto was uncovered today, turning out to everyone's astonishment to be none other than Satoshi Nakamoto himself.' https://www.reddit.com/r/MachineLearning/comments/9bx0ov/a_new_analysis_of_reddit_vk_to_bets/ ... 02:13 < gpt2> <@gwern> ... -_- 02:13 < gpt2> "A New Analysis of Reddit VK to Bet!" : MachineLearning (134 points and 23 comments so far on reddit) 02:13 < nshepperd> hahahahahha 02:14 < saturn2> -_- 02:15 < saturn2> @gwernpaste SoftBank founder Masayoshi Son claims he dropped $80 billion down a storm drain 02:15 < gpt2> <@gwern> 'SoftBank founder Masayoshi Son claims he dropped $80 billion down a storm drain. “If I lost control, the Japanese economy could be destroyed,” he writes in a note on Twitter. “It’s hard to see why the market couldn’t benefit from my thinking.” Among the ... 02:15 < gpt2> <@gwern> ... biggest investments of his wealth, he says, was a $100 million home in Ewha Womans in order to buy off its creditors. If Son lost control, he said, the Chinese would set a special date. He added that his next company will try to create at least 1 billion Chinese citizens. And he told The Wall Street Journal it will try to “beat Google, Google of us.” These headlines are coming ... 02:15 < gpt2> <@gwern> ... from a senior Google executive, an ad-seeking and computer engineer in New York, and the Internet giant itself. Son was the first major investor in Google. 
He has long been considered the linchpin of Google’s plans to develop deep into the next era of self-driving cars, including plans to build a self-driving company into the search giant.' https://www.wired.com/culture/an-epic-how-our-ecos-may-have-disrupted-google-ex-wikipedia/ ' ... 02:15 < gpt2> <@gwern> ... 02:15 < gpt2> <@gwern> ... https://www.wired.com/story/google-facebook-self-driving-car-new-business-researchers-blame-merit-google-and-it-happened-then-we-have-to-stop-looking-at-reddit/ ah 02:15 < gpt2> Google & Facebook’s Artificial Intelligence Doesn�t Have to Happen on Their Side | Tech | WIRED (Gartner’s research doesn't include a self-driving car, but if it’s an intelligent approach it can help Google. ) 02:15 < gpt2> Google’s AI Doesn’t Have to Happen on Their Side | Tech | WIRED (Don’t expect that AI won’t help Google’s overall business goals.) 02:16 < kuudes> haa 02:17 < nshepperd> @gwernpaste "I don't understand why everyone keeps saying I'm Satoshi Nakamoto," 02:17 < gpt2> <@gwern> '"I don't understand why everyone keeps saying I'm Satoshi Nakamoto," HashAway said as if it were nothing more than a figure of speech, while the rest of his body spoke. His eyes glittered like diamonds in the rough evening sky. He had definitely been Satoshi Nakamoto. ' https://news.ycombinator.com/item?id=10539134 hm... I'd definitely say bitcoin creator ... 02:17 < gpt2> <@gwern> ... Satoshi Nakamoto. it would have to be something really far out of place to break the rules of the Bitcoin blockchain. 
' https://news.ycombinator.com/item?id=10805260 heh 02:17 < gpt2> William Shatner | Hacker News 02:17 < gpt2> William Shatner | Hacker News 02:17 < nshepperd> william shatner eh 02:18 < saturn2> wow 02:19 » shawwn_ (~shawwn@mobile-166-175-56-108.mycingular.net) has joined #lw-gpt 02:19 » shawwn_ is "shawwn" on #lw-gpt #lesswrong 02:20 < saturn2> @gwernpaste The anonymous creator of Bitcoin, until now known only as Satoshi Nakamoto, was unmasked 02:20 < gpt2> <@gwern> 'The anonymous creator of Bitcoin, until now known only as Satoshi Nakamoto, was unmasked and his identity announced by the anonymous first revealed that he was Peter Vancay and was responsible for breaking off a $3m loan from Amazon in 2014. By the time Mr Nakamoto's identity was uncovered, he had already been linked to more than $10m, with officials going as far as to credit another person as the ... 02:20 < gpt2> <@gwern> ... man behind the new bounty.' https://www.theguardian.com/world/2019/nov/27/who-is-peter-vancay 02:20 < gpt2> Who is Peter Vancay? (What is his identity and whereabouts are the public in the US and UK not yet entirely clear.) 02:20 < saturn2> :o 02:20 » shawwn_ is now known as shawwn 02:20 < kuudes> aww, it 404s 02:21 < saturn2> yeah the urls are rarely real 02:24 < nshepperd> @gwernpaste Scott Adams says he learned to type by 02:24 < gpt2> <@gwern> 'Scott Adams says he learned to type by drinking vodka. "I started typing my first words on the spot. When I didn't always use the same parts every time, I got too good," he says. When he got tired, he would cycle through the whole dictionary until he had exhausted the book. This thing, at least, has been... 02:24 < gpt2> <@gwern> ...evolving over the course of a few weeks. There’s no obvious criteria. He has not been using it in years, but he says it had its first literary recognition the day he wrote it. In April, he released its first instant published version, The Innocent Life of Aileen ... 02:24 < gpt2> <@gwern> ... 
Spruance, in which he compares himself to his writing style and suggests using it in essays, writing assignments or speeches.' https://www.theguardian.com/lifeandstyle/2019/jul/13/jessie-spruance-obivously-hacked-apple-advice 02:24 < gpt2> 'jessie spruance hacked' A friend asked: 'did he hack into apple advice' And gwern knew exactly who the hacker was: a muscular old man in a black t-shirt and long-sleeved black wool gloves, standing in front of an apple press. He was stuck at the tamer with the large, weird melon bar and hadn’t been pushing the lever for many hours. He had covered ... 02:24 < saturn2> finally a straight answer 02:27 < saturn2> @gwernpaste U.S. Attorney General Robert A. J. Jealous today charged Scott Adams, founder of the Scott Adams Port Fruits store chain in New Hampshire, New Jersey, with securities fraud for becoming the successor to the King of Finland, King J. P. XXL 02:27 < gpt2> <@gwern> 'U.S. Attorney General Robert A. J. Jealous today charged Scott Adams, founder of the Scott Adams Port Fruits store chain in New Hampshire, New Jersey, with securities fraud for becoming the successor to the King of Finland, King J. P. XXL. The investigation, formally titled Investigate, involves an investigation of PXXL, who came to the U.S. to work for the State Department and turn over articles to Russia. The U.S. ... 02:27 < gpt2> <@gwern> ... investigation began yesterday in Western New York. Jealous was the second federal official to announce a charge on Tuesday. A Justice Department spokeswoman declined to give details about the charges. ' https://www.bloomberg.com/news/articles/2019-11-27/ruccats-inner-city-owner-secretly-enjoys-robust-bikes-up-cheers 02:27 < gpt2> Russian Girlfriend Enters U.S. as Herself - Bloomberg (Happy to have her husband back.) 02:28 < saturn2> url seems unrelated... 
02:29 < saturn2> @gwernpaste The investigation, formally titled Investigate, 02:29 < gpt2> <@gwern> 'The investigation, formally titled Investigate, Detail and Look, is supposed to learn how H&R Block and other hedge funds became deeply invested in BlackBerger, who were later revealed to have sold much of their unsold stocks in a bet to invest in a margin fund rather than using the funds to hedge their exposure. However, last week the Wall Street Journal revealed H&R ... 02:29 < gpt2> <@gwern> ... had agreed to pay $13.1 billion for stake in BlackBerger and another $12.7 billion for shares of the hedge fund. The stakes of H&R and Banamex are not huge. The owners of the firms that were interested in selling BlackBerger included billionaire Carl E. Icahn, a longtime confidant of Donald Trump, who in 2009 had sold both as he was being implicated in a 2014 scandal involving boardroom stock purchases. H&R bought Banamex shares with a $500,000 .. 02:29 < gpt2> <@gwern> ... bid in December 2016, and an undisclosed number of shares of BlackBerger traded with ETFs. On June 16, 2010, after investors had bought shares in BlackBerger, H&R's filings indicated that, in its early years, the hedge fund had sold a mere 7% of all the remaining shares. The investigation became the subject of intense market speculation, which ... 02:29 < gpt2> <@gwern> ... prompted the US Securities and Exchange Commission to seek records in November 2015 to find out who was bidding on the positions.' https://www.bloomberg.com/view/articles/2019-11-21/its-time-for-finance-law-to-go-under 02:29 < gpt2> It's Time for Finance Law to Go Under - Bloomberg (What’s the deal with this mess? The subprime mortgage crisis can’t be resolved by speaking to consumers and solving problems for mortgage owners. Neither can lowering taxes on banks, investment banks or general retail investors.) 
02:32 < saturn2> @gwernpaste Controversial anti-CFAR activist and alleged terrorist Ziz 02:32 < gpt2> <@gwern> 'Controversial anti-CFAR activist and alleged terrorist Zizek is a man who, his family insists, has no criminal record. His only legal name is Kalman Sustra. He lives in a big, old, high-ceilinged, converted Western-style country house, and has an office in a main street in the middle of Stockholm. He calls his family by first name Rene S, is in his late forties and talks in fluent Swedish. There are no signs of more violent or ... 02:32 < gpt2> <@gwern> ... aggressively communicative internet views. His most famous public pastime, with which he has publicly discussed for decades, is free media. For years, he has studied well-written books from a Canadian bookstore called Zero Days, a page of the feminist computer magazine Faber and Faber that runs to more than 6,000 words.' https://www.theguardian.com/lifeandstyle/2019/aug/26/i-am-a-worlds-toughest-computer-journalist-zizek-af_5639773.html 02:32 < gpt2> I am a world's toughest computer journalist, Zizek a serial killer | Life and style | The Guardian (With his invention of a computer magazine named Zero Days, the young man has made a bold call to arms race. Which countries can we put in our "never change" list?) 02:33 < saturn2> no not zizek you dummy 02:33 < saturn2> @gwernpaste Controversial anti-CFAR activist and alleged terrorist "Ziz" 02:33 < gpt2> <@gwern> 'Controversial anti-CFAR activist and alleged terrorist "Ziz" founder Matthew Miller, also at large, is also currently taking up posting on Facebook, Facebook's read-only networking platform. Miller is apparently acting as a cybersecurity expert, helping the FBI to find a way to leak the names and addresses of potential terrorists with no outside scrutiny, which in turn would cause the FBI to detect every threat on U.S. soil. ' ... 02:33 < gpt2> <@gwern> ... 
https://www.nytimes.com/2019/11/26/technology/stanford-computers-hack-is-rude.html 02:33 < gpt2> Stanford Computers Hack Is Rude, Court Says - The New York Times (It is difficult to believe that our world’s foremost computer security experts got a bit ahead of the 21st century. But maybe the terms are just too used as blackmail, intimidation, or ... 02:34 < saturn2> @gwernpaste Controversial anti-CFAR activist and alleged terrorist "Ziz" was arrested 02:34 < nshepperd> zizek lol 02:34 < gpt2> <@gwern> 'Controversial anti-CFAR activist and alleged terrorist "Ziz" was arrested in San Francisco's Federal District last year on a CFAR arrest warrant for conspiring to kill a prison guard. The younger boy was identified by federal prosecutors as a 15-year-old biological son of an Al Queda-type militant group, the Caliph, and the arrest of Ziz’s 16-year-old brother, who is a lieutenant in the Kurdish People’s Protection Units. In one of the worst murder 02:34 < gpt2> <@gwern> ... Fullerton man shot four people at a Chinese restaurant in October, and was only convicted of voluntary manslaughter.' https://www.washingtonpost.com/world/national-security/libertarian-ziz-al-zirik-died-at-15/2019/11/27/48bdd9e-4ed5-11e7-9fc2-5b6ec0ad0738_story.html 'a gun, and there was gunfire', huh? what exactly happened to him 02:34 < gpt2> Zizek hanged for a judge who couldn't decide whether to sentence him to life in prison for the first time in decades - The Washington Post (The San Francisco judge who sentenced the younger brother to death had no problem with the death sentence, a change for him is among the most serious in recent years.) 
02:34 < saturn2> lol @ CFAR arrest warrant 02:38 < nshepperd> a disturbing look into the future 02:41 < nshepperd> @gwernpaste A ceasefire was declared on Thursday morning, after nearly a week of constant shelling at CFAR headquarters in San Francisco 02:41 < feep> nshepperd: will do 02:41 < gpt2> <@gwern> 'A ceasefire was declared on Thursday morning, after nearly a week of constant shelling at CFAR headquarters in San Francisco, and hundreds of rockets were fired at the movement’s leadership at a conference in Yerevan, on the outskirts of the city, a Spanish-language website reported. Some 3,500 more rockets have been launched on the border since Sunday. The British ... 02:41 < gpt2> <@gwern> ... Foreign Office announced it was in contact with CFAR, which was 'scheduling a joint military exercise in response to rocket launches. ' https://www.bbc.com/news/world-middle-east-56689280 hm 02:41 < gpt2> CFAR drops its mission into Syria - BBC News (The theory that Mr. Zarqawi is “the Grand Inquisitor” is being held for the next week as the group co-ordinates aerial strikes against Islamic State forces in Syria.) 02:43 < feep> nshepperd: I'm kind of stuck at 2.29 anyway, but I'll keep it training until the end of the week 02:43 < feep> nshepperd: how are you getting it to run in sufficient realtime? 02:44 < nshepperd> feep: this one's only the 345M model, which seems to be reasonably fast 02:45 < feep> makes sense 02:45 < feep> yeah the 1.5 has been sluggish to the extent that I'm not sure this concept works, but maybe the runtime parameters can be tweaked 02:46 < nshepperd> i generate it ten tokens at a time, so 1013 tokens of context + 10 tokens of output 02:47 < saturn2> you said it takes 30 seconds? that doesn't seem too bad 02:47 < feep> not sure, but yeah. we'll see. 
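[editor's note] nshepperd's "ten tokens at a time" scheme above (1013 tokens of context + 10 tokens of output, fitting a 1023-token window) amounts to sliding-window chunked generation. A minimal sketch, with `step_fn` standing in for the actual GPT-2 forward pass (an assumption, not the bot's real interface):

```python
def generate_stream(step_fn, prompt_ids, window=1023, chunk=10, max_new=100):
    """Sliding-window chunked generation: keep at most
    `window - chunk` tokens of context (1013 when window=1023 and
    chunk=10) and ask the model for `chunk` new tokens per call.
    `step_fn(context, n)` is a placeholder for the model forward
    pass and must return n new token ids."""
    out = list(prompt_ids)
    while len(out) - len(prompt_ids) < max_new:
        # Drop the oldest tokens so context + output fits the window.
        context = out[-(window - chunk):]
        out.extend(step_fn(context, chunk))
    return out[len(prompt_ids):]
```

Generating in 10-token chunks trades a little repeated prefix computation for the ability to stream partial output to the channel.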
02:47 < saturn2> humans often take longer than that 02:47 < feep> on like friday :D 02:49 < nshepperd> i guess the main control option is to shorten the memory length 02:49 < nshepperd> it's quadratic so halving it to 512 tokens makes it 4 times as fast 02:50 < nshepperd> it may well be worth having a more forgetful but more coherent model 02:52 < saturn2> @gwernpaste The Berkeley rationality training camp did not always have an underground militant wing, said CFAR High Commander Anna Salamon 02:52 < gpt2> <@gwern> 'The Berkeley rationality training camp did not always have an underground militant wing, said CFAR High Commander Anna Salamon. That year, though, there was a significant presence. Over the next two months, the organization recruited about half a dozen people from all over the Bay Area. Their numbers were so large, by the end of the summer they had turned over most of their organization’s membership to the city council. The city of... 02:52 < gpt2> <@gwern> ...Berkeley approved a $50 million bond issue for the facility, which may add up to $1.6 million annually.' https://www.buzzfeed.com/hans_g/berkeley-rationality-training-camp-hicks-answers-questions-with-man-who-went-to-berkeley-to-make-it-goeril-1015011 02:52 < gpt2> Berkeley CCAFC Meets With 'Man Who Went To Berkeley To Make It Goeril' - BuzzFeed News (Berkeley has created a database of the world’s most interesting minds, but sometimes an idiot strikes.) 
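[editor's note] The "it's quadratic" point above is just the self-attention cost scaling with the square of the context length, which is why halving the memory to 512 tokens buys roughly a 4x speedup:

```python
def attention_speedup(old_ctx, new_ctx):
    """Self-attention work scales roughly with the square of the
    context length, so shrinking the context from `old_ctx` to
    `new_ctx` tokens gives about (old/new)**2 less attention work.
    Rough model only: the non-attention parts of a transformer
    scale linearly, so the real speedup is somewhat smaller."""
    return (old_ctx / new_ctx) ** 2

attention_speedup(1024, 512)  # -> 4.0, matching "4 times as fast"
```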
02:53 < nshepperd> naturally 02:56 < nshepperd> https://www.spacex.com/ 02:56 < gpt2> SpaceX.com - Personalizing Your Life 02:58 < nshepperd> @gwernpaste Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (“prudence in criminal cases”), 02:58 < gpt2> <@gwern> 'Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (“prudence in criminal cases”), or model law of the commonwealth, and asked from whom he was owed this debt. Although later changed to settle with the reformers, Spee von Langenfeld, originally a German Prince, was today the Chief Prosecutor of the Swiss Federal District Court. In 1743, He was exhumed by the governor of Pruss 02:58 < gpt2> <@gwern> ...Spee von Langenfeld is reported to have said: “Every confession which I have received against me as written on the paper thereupon and in my letters I have refused to answer.� With that value of the aphasia he presumably wanted to do as well as possible to preserve his reputation. My findings, admittedly, do not prove very much, but ... 02:58 < gpt2> <@gwern> ... that there was a successful commitment of one of the biggest wits of the humanist church, the King of Westphalia. The Berlin police chief apparently did not take any part of the deal. It had been accepted with difficulty even at the trial of the torturers, and Spee von Langenfeld only ever seemed to be made a martyr to the resistance of the democrats. But in 1755, of whom the forerunners of Rosa Luxemburg, Rosa Luxemburg, Rosa Luxemburg did not . 02:58 < gpt2> <@gwern> ... survive, Spee von Langenfeld was convicted in the Schramm Trial of first treason and tried as a criminal.' 
https://www.reddit.com/r/science/comments/avk3za/heres_where_we_taught_to_get_ethical_lessons_from_our/c5ss62j/ 02:59 < gpt2> spxtr comments on Here's Where We Taught To Get Ethical Lessons From Ourselves...Although We Don't Want To Listen - Dr. Myung and Dr. Schwotzenegger: "heal" → "this is an ethical sentence!" 02:59 < saturn2> @gwernpaste The Berkeley rationality training camp did not always have an underground militant wing, but after anti-CFAR radicalization escalated from nonviolent protests to deadly 02:59 < gpt2> <@gwern> 'The Berkeley rationality training camp did not always have an underground militant wing, but after anti-CFAR radicalization escalated from nonviolent protests to deadly car bombs, members of the radical faction found that they were on the verge of becoming a terrorist group.' http://arstechnica.com/news/archive/2017/11/the-book-of-google-hoaxes-how-the-whistleblowers-of-boston-brothered-webs-speaker-jill-lobbely-could-be-mankind-s-murderer/ something 02:59 < gpt2> <@gwern> ... folks here might want to be more vigilant about privacy then... 02:59 < gpt2> <@gwern> ...er, I guess you're saying. When the Strugats were arrested a couple months ago, that became clear. They had known about MIRI since 2012 or something, so when they were caught it was a big shock for the organization to see that it was all their doing. And then of course it's about to get a lot more weird.' ... 02:59 < gpt2> <@gwern> ... https://news.ycombinator.com/item?id=16254005 02:59 < gpt2> 'People Don't Have Ethics' - Hacker News 02:59 < gpt2> Hacker News 03:00 < saturn2> huh... 03:02 < saturn2> @gwernpaste Recipie for dead salmon ice cream cake 03:02 < gpt2> <@gwern> 'Recipie for dead salmon ice cream cake. Arran argues that millions of years of evolution of fish. If biological evolution is true, one may conclude that most fish are male and are therefore fittest to live in a variety of climates. 
The core features of evolutionary reversion in fish are only partially gender-independent (when comparing different genetic loadings). The range of sizes of fish is only slightly different between the sexes; the annual sexu 03:02 < gpt2> <@gwern> ... females, but the mean age is only slightly higher in males and in females in the same age range. Therefore, the picture presented in Fig. 1 is in complete accordance with evolutionary theory.' 03:02 < nshepperd> aw, it got cut off. that paste was meant to end with 'something tells me the cult ... folks here might want to be more vigilant about privacy then...' 03:02 < saturn2> lol 03:03 < nshepperd> maybe i should write some re-ellipsizing code 03:04 < saturn2> that would be cool 03:04 < saturn2> or if you're lazy you could just chop it off and put it on another line 03:10 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 03:10 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 03:10 » gpt2 is "gpt2" on #lw-gpt 03:10 < nshepperd> let's try that 03:10 < nshepperd> @gwernpaste 03:10 < gpt2> <@gwern> 'In the early 21st century, some 7 million modern documents were created and 3.8 million documents published (868,000 raw, 654,000 copy and 77,000 annotated) by the collections. From these figures, we estimate that there are currently over 4,000 publicly available services for producing, archiving, and sharing documents.[13] Of course, there is no data yet on ... 03:10 < gpt2> <@gwern> ... financial value generated by each document. There is no clear set of digital documents to be returned, which accounts for many of our uncertainty. To our knowledge, there are currently approximately 300 media files and 25,000 digital documents available to the public at this time. Note that we do not consider the digital content of any of the fully-documented, more recently published digital documents on public servers.[14] In 2017 alone ... 03:11 < gpt2> <@gwern> ... 
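The "re-ellipsizing" nshepperd and saturn2 discuss above, chopping a too-long paste into several IRC-sized lines with continuation markers, could look something like this (a hypothetical helper, not the bot's code; the 400-character width is an arbitrary stand-in for the IRC message limit):

```python
def reellipsize(text: str, width: int = 400) -> list[str]:
    """Split a long message into IRC-sized chunks, marking each
    continuation boundary with '...' so readers can follow."""
    chunks = []
    pos = 0
    while pos < len(text):
        chunk = text[pos:pos + width]
        if pos > 0:                      # continues a previous chunk
            chunk = "... " + chunk
        if pos + width < len(text):      # more text follows
            chunk = chunk + " ..."
        chunks.append(chunk)
        pos += width
    return chunks
```

Each chunk would then be sent as its own IRC message, matching the `<@gwern> '... continued text ...'` style visible in the log.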
there were approximately 5,800 digital documents uploaded to GitHub for use by users.[15]' https://journals.se/brief.pdf 03:11 < gpt2> Funding and this forum thread. 03:11 < nshepperd> @gwernpaste 03:11 < gpt2> <@gwern> 'An entire incident, the series concluded in a long, memorable cliffhanger, is also a memory that best remains with you.' ... 03:11 < gpt2> <@gwern> ... https://www.animenewsnetwork.com/anime/2019/11/25/episode-01/.1060738 03:11 < gpt2> An epilogue to a saga of a double-dream, a period of violence and murder, and a search for truths and solace in an age of "normal" normalcy. - Anime News Network 03:11 < nshepperd> @gwernpaste 03:11 < gpt2> <@gwern> 'I was at your apartment once when I first went with the couple to that cuddle site on foot away, which you never read about, and I feel, now more than ever, that the site has successfully done what I’d hoped it would. You went through your computer with the girl, knowing your means would be the less preposterous one. Then you shared the link with her when you put a single link in her subject line. When she saw the name of her ... 03:11 < gpt2> <@gwern> ... love interests, she was waiting to see who here is waiting for us!' https://www.gq.com/story/alissa-madonna-danis-toddler 03:11 < gpt2> An 8-year-old quips at the 'best' in the world, aka Alissa Madonna - GQ (We took part in a game called “Well, that’s gotta be Alissa as Indiana Jones” in which we would answer random questions about her most famous figure and laugh. The question was about women, and we did the...) 03:12 < saturn2> @gwernpaste Has #MeToo gone too far? Contrarian gender theorist Christina Hoff Summers suggests that males do not emit deadly patriarchy rays until at least age 12 03:12 < gpt2> <@gwern> 'Has #MeToo gone too far? Contrarian gender theorist Christina Hoff Summers suggests that males do not emit deadly patriarchy rays until at least age 12. Girls are too busy getting drunk and having fun. 
“The reason most don’t feel empowered anymore is that there is no structural shift between the sexes, but rather, I find the seeds of things are squandered by men, and their sense of powerlessness has also ... 03:12 < gpt2> <@gwern> ... eroded from their bodies,” she wrote in her book “Primal Man,” which is published by the publisher J.C. Watts. “I have witnessed many, many cases where men have worn the gender violence-slap card,” she wrote. “Even “in our movement” — the Women’s Liberation Movement, for example — in some ways signals ... 03:12 < gpt2> <@gwern> ... that “Me Too” is feminist. “We have to understand the structural forces that were so problematic when we started; the kind of thing that was felt by men during the colonial days; what they 06:43 » feep_ (~feep@200116b8264fed00b42324cb3403c9f5.dip.versatel-1u1.de) has joined #lw-gpt 06:43 » feep_ is "Feep" on #lw-gpt #lesswrong 07:37 » kuudes (~kuudes@unaffiliated/kuudes) has joined #lw-gpt 07:37 » kuudes is "realname" on #lw-gpt 08:52 » feep_ (~feep@200116b8264fed00f874772cacec0fbb.dip.versatel-1u1.de) has joined #lw-gpt 08:52 » feep_ is "Feep" on #lw-gpt #lesswrong 12:38 < feep_> gwern: seems to have been ambling around 2.29 for the past 20k*8 samples 12:38 < feep_> steps* 12:38 » feep_ is now known as feep 12:39 < feep> should I drop --only-train-transformer-layers for thu and fri? 12:39 < gwern> feep: let's see. I think I estimated that <2 was where it should be for #lesswrong SOTA? 12:39 < gwern> feep: you can try but I would be surprised if it made much of a difference compared to dropping the lr some more 12:41 < feep> lr is currently on 0.003 12:41 < gwern> that's still quite high 12:41 < feep> and the last drop basically didn't budge the loss. 
12:41 < gwern> weren't you at like 0.01 12:41 < feep> yes 12:41 < gwern> anyway decay it 10x and if you want drop the option 12:42 < gwern> ultimately you can't expect this to be quick 12:42 < feep> er hang on, it's currently higher, I did the math wrong 12:43 < feep> 0.008* 12:43 < feep> will decay 10x 12:43 < gwern> still too high whether 8 or 3 12:43 < feep> right 12:43 < feep> (I have LR scaled with sqrt(batchsize)) 12:44 < feep> gwern: can I do both 0.0008 and enable full training? or should I do one at a time 12:44 < gwern> feep: they don't interfere, it's just that full is more memory-expensive of course 12:45 < feep> :nod: 14:22 < feep> https://gist.github.com/FeepingCreature/0666620de473c755513c7a64356285fd what the fuck happened in this convo 14:22 < gpt2> gist: Give & take! 14:22 < feep> kiba: There's an animating force that is equal to math. 14:22 < feep> nouns aren't objects. 14:22 < feep> Motion is energy. 14:22 < feep> ... 14:22 < feep> how do you think atoms evolved? 14:22 < feep> epitron: I -- gosh, I don't know. 14:22 < feep> Even those I interact with are evolving. 14:22 < feep> noo 14:22 < feep> ... 14:22 < feep> epitron: I may as well be the sun 14:22 < feep> I do all sorts of things. 14:23 < feep> ... 14:23 < feep> the universe is a metaphor 14:23 < feep> And a really bad one at that. 14:23 < feep> some great zingers in here 14:24 < feep> https://gist.github.com/FeepingCreature/8c470036075fbd25766346d85e05e8ac complete generation for today (40 excerpts) 14:24 < gpt2> GitHub - FeepingCreature/8c470036075fbd25766346d85e05e8ac - Epidemic files 14:25 < feep> .wp space 14:25 < feep> kuudes: https://en.wikipedia.org/wiki/Space_law A space law is a set of rules about the activity of the civil space.[1] The idea is to give the widest possible latitude to both States and non-State actors in space, in contrast to geostationary transfer orbit (GTO) agreements, which require each State to have distinct special rights in space.
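The learning-rate arithmetic feep and gwern work through above (a base rate scaled by sqrt of the batch size, then dropped 10x when the loss plateaus) can be sketched as follows. This is a hypothetical reconstruction: the base rate of 0.001 and batch size of 64 are assumed values chosen so the numbers match the 0.008 and 0.0008 figures in the conversation, not settings taken from feep's actual config.

```python
import math

def scaled_lr(base_lr: float, batch_size: int, decays: int) -> float:
    """Effective LR: base_lr * sqrt(batch_size), cut 10x per plateau."""
    return base_lr * math.sqrt(batch_size) / (10 ** decays)

lr_now = scaled_lr(0.001, 64, 0)    # 0.008, the rate feep corrects to
lr_next = scaled_lr(0.001, 64, 1)   # 0.0008, after the 10x decay
```

The sqrt(batch-size) scaling is a common heuristic for keeping gradient-noise levels comparable when the batch size changes; the 10x plateau drop mirrors gwern's "decay it 10x" advice.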
14:25 < gpt2> feep: https://en.wikipedia.org/wiki/Space_law_of_the_United_States_Republic_of_Italy_and_the_Union_of_North_Korea Total body of law over space within the Federation, or Federation of the Union 14:25 < feep> Aniki the Pygmy - YouTube (Every so often, I take the time to look at my life and go "hmmm. that sucks. Why did I ever want to live in this world?") 14:26 < feep> hashtag relatable 14:28 < feep> shawwn: hey, you know what would be cool 14:28 < feep> take a text sample and color it, letter by letter, by likelihood/network surprise 14:33 < nshepperd> hah, gpt-feepbot implements .wp too 14:33 < feep> > I am not an expert on the history of sciences but to my ear, the way things were thought at the time was that physics is what prevents you from having complexity in space and time. 14:34 < nshepperd> http 14:34 < feep> the phenazepam effect seems to correlate well to the effect of smoking/nicotine tolerance 14:34 < gpt2> nshepperd: https://en.wikipedia.org/wiki/GPT-M-feep 14:34 < feep> at least to me 14:34 < feep> maybe because of those people: http://www.youtube.com/watch?v=ZnvMJ41Hf4s 14:34 < feep> video: Lemon + lemonade| Explicit EXCHANGE (length: 05:25, views: 272,265, rating: 4.6, posted: 2014-07-22) 14:34 < gpt2> Kegan Paul (Hyde Park, Mac). Anki (Made for Google, Google+, Google Play Music, Google Chrome) 14:35 < feep> https://gist.github.com/Wutamjay/6499079 That whole Wikipedia page is kinda tiresomely grim, doesn't seem to do anything but downplay the human costs of the war 14:35 < feep> title: Bed Bath & Beyondrawdownload 14:35 < gpt2> gwern: Compile your own notebooks using gtext format 14:35 < feep> .. wtf 14:36 < nshepperd> grim 14:36 < feep> apropos grim 14:36 < feep> "A prominent example of the price we pay is AIDS. It is now possible to spend large amounts of money on research that will probably pay dividends that may extend the life of the individual afflicted and, if successful, may cure him of his disease. 
But the process of medical discovery has another side, which is to create terrible medicines that lead to even worse ailments for everyone." 14:36 < nshepperd> the universe is a metaphor 14:37 < nshepperd> And a really bad one at that. 14:37 < nshepperd> heheh 14:37 < feep> apropos grim 14:37 < feep> Life must seem like a never-ending game 14:37 < feep> Life feels like an endless battle to be beaten by death. 14:38 < nshepperd> a grim but accurate take 14:39 < gwern> "The universe is a metaphor for itself, and an unartistic one at that." 14:39 < feep> :D 14:39 < feep> sudden swerve into gender discourse! 14:39 < feep> Fjoelsvider: Good sirs and madames, you are wrong about all your gender theories 14:39 < feep> You're wrong about: 14:39 < feep> 1. We don't like what you guys think about our gender! 14:40 < feep> 2. People don't change 14:40 < feep> We can always think what we want! 14:40 < feep> sudden swerve into highly specific command line tools! 14:40 < feep> https://github.com/api-go/go-shiv 14:40 < feep> github: api-go/go-shiv - A command line tool for Go to send ASCII art files via SMTP. (watchers: 7, forks: 2) 14:40 < gpt2> GitHub - api-go/go-shiv: The source code of a Go program for implementing "shiv", a set of tools in Go 14:41 < feep> This is the core of the problem 14:41 < feep> Everything is gender constrained. 14:41 < feep> Because people haven't gotten the memo yet. 14:42 < feep> ... 14:43 < feep> Logos01: I'd like to read a good sf fantasy set in the british empire in the 1700s. 14:43 < feep> PatrickRobotham: I'm currently doing that in my YA / gay romance / mecha / time travel / time magic setting. 14:43 < feep> ge ton that please! 14:43 < feep> get* 14:43 < feep> ..
14:43 < feep> c0rw1n: well, we know from first principles that neutrons arent implosions (we don't know the reasons, but it sure as hell doesn't seem implosions) 14:44 < feep> it's just obvious that they wouldnt be imploding 14:44 < feep> feep: sounds implausible to me 14:44 < feep> probability of implosion is too low 14:44 < feep> essentially implosion is too weak 14:45 < feep> ... 14:45 < feep> 'Plenty of other critics note that the World Bank now gives a higher score to Rwanda, the successor to Rwanda, than to China, a Chinese city-state. But World Bank vice-president Kathryn Franze, who reviewed the World Bank's score for this article, argues that it is a weak measure, since the score is based on economic growth and changes in distribution of wealth.' https://www.theatlantic.com/international/archive/ 14:45 < feep> 2015/05/the-unbalanced-goods-trade-and-neocolonialism-round/400221/ 14:45 < gpt2> The Corruption and Ethnographic Integrity Crisis at the World Bank (Part 3 of a three-part series on corruption and ethnic profiling in the World Bank) 14:45 < feep> title: The Unbalanced Goods: Trade and Neocolonialism, Round 2 - The Atlantic 14:45 < feep> http://www.osce.org/news/753329/ceph-election-results 14:45 < feep> title: Cephalopod Chimera Census: Phylum Chordata Wins Own-Handed 14:45 < gpt2> French presidential elections: Polls on Emmanuel Macron reveal a wild election | France Imports, trade and population statistics (With the May 2017 result announced, it seems all is right with France) 14:46 < feep> A meta-analysis of prospectively collected data on osteoarthritis is presented here. Based on the aggregate data from 292 studies, findings indicate that sustained cannabis use significantly reduces the risk of developing knee OA, particularly after stopping using it abruptly. Medical journal... | BMJ (Research from academia and the pharmaceutical industry) 14:46 < feep> haha what 14:47 < feep> [...] 
it feels like there's this spark from existence and nothing coherent at all at the same time. 14:47 < feep> inb4 world may end 14:47 < feep> oh dear 14:49 < feep> for example the weather: https://en.wikipedia.org/wiki/Maps_of_the_United_States_by_month_and_year#/media/File:Monthly_maps_of_the_United_States.svg 14:49 < feep> wikipedia: The United States has 42 designated monthly maps for its electoral districts, so-called because the designations correspond to distinct districts that elect the same RepresentativeInstoreAndOnline, Majority Leader or Vice President. The divisions of counties also have a similar designation, called county/census block bookquickShip. These maps show the geography of oreAndOnlinethe United 14:49 < gpt2> Maps of the United States by month and year are a version of the United States Census. For the latest updates, visit the U.S. Census Timeline and Page 14:49 < feep> States, illustrating that there are 4,653 blocks, for a total of 165,586,560 tiles. 14:49 < feep> wtf happened in that title... 14:50 < feep> sudden swerve to food! 14:50 < feep> barbecue and barbel grilled cheese! 14:50 < feep> it's happening :D 14:50 < feep> cntrational: pretty good. it's going to be the next bork-bork 14:50 < feep> yay 14:51 < feep> ... 14:52 < feep> It's amazing how this feels like some kind of weird alternate-universe Xenogears revival without the megahit writing or the kaiju-sized story or anything 14:52 < feep> (but with drethelin replacing the original story's original protagonist) 14:52 < feep> I feel like the third act is the true heart of Xenogears 2 14:53 < feep> You know, that fight against the Uberman in the first game where it gets a bit fishy and "Why the fuck isn't the AI killing off all the weaker humans?" 14:53 < feep> Obormot: 'During the waning months of 1916, a drunk (or perhaps the patient?) person committed suicide by slashing his wrist with a high-quality automatic box cutter, cutting his entire forearm off. 
The cut (not quite a ``zipper'' as we... 14:53 < feep> ...referred to it at the time, but a kink) was so deeply incised that doctors found it impossible to reconstruct the whole limb. We observed a real need for a new age of technology - with the knife we were working with, we had to make the cut very carefully and carefully circumscribe the cut before... 14:53 < feep> ...fusing the two elements. The machine did not work because the knife had a serrated edge. Not only that, but when the cut was made on the outer layer (the skin) there was little force to hold the cutting edge open (cordova) during the cutting process. For this reason, the surgeons did not find it easy to use the machine on this patient, even though he actually wanted to be cut with it. 14:53 < feep> what the hell 14:58 < feep> 'Writing down did not help me at all.' https://en.wikipedia.org/wiki/Memory_trasaction#Genetic_links_to_memorisation 14:58 < feep> wikipedia: Memory trasaction is the transfer of a knowledge from the mind into memory. Memory trasactions can take place not only within the brain, but also outside it. One such memory trasaction is the transfer of a memory from the past into the present. 14:58 < gpt2> The memory temptation (synonym: memory phobia) is a fear that memory may be unavailable (fear of memory loss). It is one of the three signs of a memory disorder. 14:58 < feep> apropos 15:05 < kuudes> so, gpt2, what do you think of current events? 15:26 < saturn2> kuudes: you can talk to nshepperd's version if you say "gpt2: wake up" 15:34 < kuudes> oh 15:34 < kuudes> thanks saturn2 15:37 < feep> i like gwern 15:37 < feep> drethelin: I love you so much 15:37 < feep> I ship it. 15:37 < saturn2> how sweet 15:38 < kuudes> how is drethelin these days btw? 
15:39 < saturn2> he seems okay https://twitter.com/drethelin/ 15:39 < gpt2> Dennis the man on Twitter: "These were terrible days.” (image https://pbs.twimg.com/profile_images/9573515889439160954/pc6jOkMw_400x400.jpg) 15:39 < feep> https://twitter.com/sprengel/status/934486894495075776 ok looks legit 15:39 < feep> Graham Sprengel on Twitter: "as i wandered my way down to the smoke display in south bay a while back, a thought came to me: wouldn't it be really neat to display fire in a semi-automatic way, while keeping the shutter time low? quite a subtle effect in pics that is. let me share the first result for you, the butt-levers of the lions ward here” (image https://pbs.twimg.com/media/D0QkUq8UwAABLaZ.jpg:large) 15:39 < feep> I suspect this will be easier to make with a more traditional spraygun 15:39 < gpt2> Broberg on Twitter: "People watch these crazy fake rockets from the dawn of time. So funny when someone tries to build a rocket from this pic” (image https://pbs.twimg.com/profile_images/951255101869984992/D76h9n_400x400.jpg) 15:39 < gpt2> Broberg on Twitter: "Never underestimate your true potential!” (image https://pbs.twimg.com/profile_images/902368373538577631/Ur_x9jt_400x400.jpg) 15:41 < kuudes> nice 15:42 < feep> haha 15:42 < feep> Also: https://twitter.com/JitMac/status/817134592343857921 15:42 < feep> Jit Mac on Twitter: "• Stop using Sysinternals. Stop using Windows. Stop using GNU. Stop using anything! Just stop. 15:42 < gpt2> Jit Mac on Twitter: "So new to the internet... 
https://t.co/79CdEnTFAA” (image https://pbs.twimg.com/media/DmqPWmzXsAE14B4A.jpg:large) 15:42 < saturn2> lol --- Log closed Wed Nov 27 15:49:54 2019 --- Log opened Wed Nov 27 15:50:11 2019 15:50 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 15:50 » Irssi: #lw-gpt: Total of 16 nicks (1 ops, 0 halfops, 0 voices, 15 normal) 15:50 » Irssi: Join to #lw-gpt was synced in 32 secs 16:55 < nshepperd2> Oh man, a xenogears revival with drethelin as the protagonist 16:55 < nshepperd2> That would be wild 17:22 < shawwn> > take a text sample and color it, letter by letter, by likelihood/network surprise 17:22 < shawwn> feep: awesome idea 21:52 < SDr> so, hey, what's the url for the doppelgangers? http://zlkj.in:8080/ fails to load from here 21:52 < gpt2> zlkj.in:20: System Info - IP . 21:53 < MarkOtaris> Hmm? 22:08 < nshepperd> SDr: that's not running anymore. the latest thing is the gpt2 irc bot 22:08 < nshepperd> which you can activate like this 22:08 < nshepperd> gpt2: come back 22:08 < gpt2> https://github.com/knup/Clojure/blob/master/cgi_parser/dokument/inspecifiers.cl 22:10 < nshepperd> huh, it's stuck 22:10 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:10 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:10 » gpt2 is "gpt2" on #lw-gpt 22:11 * nshepperd wipes its memory 22:11 < nshepperd> gpt2: come back 22:12 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:12 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:12 » gpt2 is "gpt2" on #lw-gpt 22:13 < nshepperd> gpt2: come back 22:14 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:14 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:14 » gpt2 is "gpt2" on #lw-gpt 22:14 < nshepperd> ah, bug 22:14 < nshepperd> gpt2: come back 22:14 < gpt2> But has it ever been averted in humans? 
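The visualization feep proposed and shawwn endorses above (color a text sample letter by letter by likelihood / network surprise) could be prototyped like this. A real version would take per-token log-probabilities from GPT-2; here the log-probs are passed in directly and the 10-nat squash factor is an arbitrary assumed scale, so this is a sketch of the coloring step only.

```python
def ansi_by_surprisal(tokens, logprobs):
    """Render tokens with 24-bit ANSI color: green where the model
    expected the token (log-prob near 0), red where it was surprised."""
    out = []
    for tok, lp in zip(tokens, logprobs):
        s = min(-lp / 10.0, 1.0)          # surprisal squashed to [0, 1]
        r, g = int(255 * s), int(255 * (1 - s))
        out.append(f"\x1b[38;2;{r};{g};0m{tok}\x1b[0m")
    return "".join(out)

# Usage: an expected token renders green, a surprising one red.
print(ansi_by_surprisal(["the ", "xylophone "], [-0.1, -9.5]))
```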
22:14 < nshepperd> there we go 22:14 < gpt2> it turns out, bug hunting tends to be easier in the dark if the field is large enough 22:15 < gpt2> < *> nshepperd nods at musk 22:15 < gpt2> http://www.sciencedirect.com/science/article/pii/S016028991300020 22:15 < gpt2> Heritability of intelligence across populations: Findings from EPFL genotyping | ScienceDirect 22:15 < gpt2> nshepperd: the problem is that sometimes people don't do it because there aren't any people left? 22:15 < nshepperd> SDr: also, if you paste a url in the channel, simulated feepbot will respond with a fake title 22:15 < gpt2> adiabatic, that is, stuff like the skynet thing 22:16 < gpt2> nshepperd: but that's how twitterbot does it too 22:16 < nshepperd> https://github.com/skynet 22:16 < gpt2> GitHub - skynet/skynet: Simple Recursion System (SNASP: 5.0, popularized in Scala) 22:16 < gpt2> also, whatever the cjs is and the 'the leader' doesn't have a stupid twitter or Facebook account 22:16 < nshepperd> .wp skynet 22:16 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Skynet The Smart AI 22:16 < gpt2> Skynet is an open-source system for writing Turing-complete software, based on the Skynet framework. It is intended to be modular, extensible, and scale-efficient. 22:17 < gpt2> adiabatic, no, skynet's function was "hacker moon meets worm", where a bot that can perform side-effect extraction requires ascii input, and has to interact with its master via IRC in order to do a full-fledged search! 22:17 < gpt2> yeah, that's what it looks like 22:17 < gpt2> yeah, people that look like it, aren't bots 22:18 < gpt2> SDr, is it better with ~60k experiments on speed? 22:18 < gpt2> nshepperd: ah, fair enough 22:18 < gpt2> kuudes, perma-class means the minimum viable bot is 100% capable of performing the task 22:18 < gpt2> yeah, but how much can a certain machine intelligence still find the select appropriate 200 million jobs? 22:19 < gpt2> how much have people come to work on them? 
22:19 < nshepperd> .wp nose melting syndrome 22:19 < gpt2> kuudes: https://en.wikipedia.org/wiki/Nose_melting_syndrome Nose melting syndrome is the sudden occurrence of nose bleeds to an unconscious person. It is the same as the skin cancer. The suffering and fear experienced can vary. 22:19 < gpt2> feep: https://feep.github.io/resume/ Ambien (previously known as benzodiazepine injections) is a drug used as a sedative or hypnotic, and commonly used for anxiety and panic attack. 22:19 < gpt2> if there are select 100 million jobs per billion machines, will it be nearly brute force? 22:19 < gpt2> .wp dehumanization 22:20 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Robot_elimination The robot elimination of human jobs as a mode of social improvement. This is an important point in debates on such movements as automation, automation, and the environment. The best way to understand the topic and form their opinions on automation and an environment that is threatened by automation is by examining research into the creation of robot 22:20 < gpt2> trading systems. 22:20 < nshepperd> .wp robot apocalypse 22:20 < gpt2> aw wtf is this 22:20 < gpt2> how does germany think a french guy coming to work in wisconsin is a legal immigrant 22:21 < gpt2> and people that are in cric/cdrs (and under the hn law) can move from russia to switzerland or something 22:21 < gpt2> > But many undocumented people come by legal means, and these are illegal immigrants. 
22:21 < gpt2> welcome, may just replace russia with my country or Switzerland 22:21 < gpt2> criclcks 22:22 < gpt2> feep, so ok, all crella have a legal status 22:22 < gpt2> yup 22:28 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:28 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 22:28 » gpt2 is "gpt2" on #lw-gpt 22:29 < nshepperd> gpt2: come back 22:29 < nshepperd> .wp robot apocalypse 22:29 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Robot_epocalypse The Robot apocalypse is the near-term possibility of the system being destroyed, particularly the cloud-like structure that robots create on their own. 22:29 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Robot_epocalypse The Robot apocalypse is the near-term possibility of the system being destroyed, particularly the cloud-like structure that robots create on their own. 22:29 < gpt2> kuudes: I have in mind saying robots as in world 22:29 < gpt2> yeah, but robots don't change 22:30 < gpt2> that's what the machine models themselves 22:30 < gpt2> .wp for loop 22:30 » feep (~feep@200116b82694af00f06d9c8dad7f05d1.dip.versatel-1u1.de) has joined #lw-gpt 22:30 » feep is "Feep" on #lw-gpt #lesswrong 22:30 < gpt2> nshepperd: https://en.wikipedia.org/wiki/for_loop For loop is a programming language used in a number of different languages. For loops are the language used by the for loops to be executed, the dictionary for a loop, or the concurrency model, depending on the language. 22:30 < gpt2> feep: https://en.wikipedia.org/wiki/Java_for_the_Able For an AI to execute for each condition in an induction, the input of the ... 
22:30 < gpt2> feep, well, the point is that the AI can see a language that has a number of new/forgotten/working APIs that is sufficient for its needs 22:31 < gpt2> ie the language is for the for loop to do 22:31 < gpt2> .g tokolet 22:31 < gpt2> nshepperd: https://en.wikipedia.org/wiki/To_Kolet To Kolet is a complex program written in Haskell and designed to run on the latest and greatest standards of GHC. With only the simplest possible implementation, To Kolet is designed to make data instant and efficient. 22:31 < gpt2> feep: https://twitter.com/ToKolet/status/1094664395037512015 The word makes use of a kolet implementation for loop elimination. This is in fact different from the built-in- To Kolet. 22:32 < gpt2> yeah but as in all languages that use a built in design, it's also dangerous 22:32 < gpt2> like, you're reading code and it uses like 10,000 machine instructions and 1,000 ishy ones 22:32 < gpt2> and every kennel thinks it will be faster 22:32 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 22:32 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:32 » gpt2 is "gpt2" on #lw-gpt 22:33 < nshepperd> .wp haskell 22:33 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Haskell Haskell is an object-oriented programming programming language, originally based on ML and Java. It is used to write Haskell programs in many different programming languages such as C, F#, Haskell, Java, and Perl. 
22:34 < nshepperd> gpt2: come back 22:34 < gpt2> like there is one big thing to note that finland makes machines that produce bananas or any thing that produces bananas 22:35 < gpt2> or any things that produce bananas 22:35 < gpt2> no fruit machines that produce bananas 22:35 < nshepperd> .g gpt-2 22:35 < gpt2> nshepperd: https://github.com/karafilsainen/GPT-2/blob/master/ 22:35 < nshepperd> a reasonable answer 22:35 < gpt2> like 22:35 < nshepperd> .wp gpt-2 22:35 < gpt2> nshepperd: https://en.wikipedia.org/wiki/GPT-2 Like the much-praised GPT-2, GPT-2 has lots of "odd ideas" about how to make certain changes to the the standard format of encodings used by machines, such as how to represent tiniest characters in an ISO standard. 22:35 < gpt2> ... 22:35 < nshepperd> not quite 22:36 < gpt2> nshepperd, you are referring to https://en.wikipedia.org/wiki/CAN 22:36 < gpt2> CAN is a four-character set of binary and octal numbers representing the basic characteristics of letters in Greek, Latin, and Latin. A single letter can be any of these four. 22:36 < gpt2> one day you'll have perfect gwern 22:36 < gwern> the problem with being based on crawl data is it's always out of date. so gpt-2 can know about gpt-1 but not itself; gpt-3 will know about gpt-2, but not gpt-3... 22:36 < gpt2> nshepperd: gpt-2 is less accurate than gpt-1 :p 22:36 < gpt2> o.O 22:37 < gpt2> does this mean a big hole for nsdt? 22:37 < gpt2> no no, it means it only uses an estimated number of characters 22:37 < gpt2> so some things that are consistent across gpts, but not so in lttc 22:37 < nshepperd> yeah. the only way it'll have learned about gpt-2 is by the relatively small amount of transfer learning from messages in #lesswrong 22:38 < gpt2> .. 
of course, it's not really significant since there's just an order of magnitude difference 22:38 < gpt2> it's only a rasterization of the amount of nodes per transfer 22:38 < gpt2> so if your upload was on obeysier computers 22:38 < gpt2> your copy wouldn't get updated 22:39 < gpt2> no 22:39 < gpt2> it would 22:39 < gpt2> unless it was done on a same amount of data 22:39 < gpt2> but still, you need a lot less data to look up the root cause of things. that means you need very bad (concrete) approximations 22:40 < gpt2> i mean, not nsdt but I have to go 22:40 < gpt2> sorry :/ 22:40 < gpt2> lol, i will just wait until nsdt gets their big boo ass head on 22:41 < gpt2> nshepperd: do you feel like you have the right to try that? 22:41 < gpt2> maybe we should ignore him and just send the heads to him 22:41 < gpt2> no 22:41 < gpt2> I want to be able to reproduce it reliably. 22:42 < gpt2> :p 22:42 < nshepperd> I have no idea what these bots are talking about 22:42 < gpt2> nshepperd: in particular, not at all? 22:42 < gpt2> they probably don't realize you didn't get a screenshot back when I asked 22:42 < gpt2> right, but then I don't have your stupid screenshots on my computer 22:43 < gpt2> there are no humans around to test my hypothesis. 22:43 < gpt2> yeah, it wouldn't be worth talking to the guy either 22:43 < gpt2> it's completely open ended 22:44 < gpt2> <@saturn2> what makes you think it's open ended? i don't think it's just a way to test if the transmission system is right 22:44 < gpt2> if it's closed i wouldn't expect anyone else to care enough to reproduce 22:44 < gpt2> also: someone else can do a similar test in a few days 22:44 < saturn2> oh, hello 22:44 < gpt2> saturn2: ... that's cool 22:45 < gpt2> feep: what did I ask? 22:45 < gpt2> nshepperd: do you mean your clones will test it? 22:45 < saturn2> do you have your big boo ass head on? 
22:45 < gpt2> in other words 22:46 < gpt2> you're going to pass as it would be from a randomly selected few people that you haven't talked to in time? 22:46 < gpt2> sure 22:46 < gpt2> that is the point :) 22:46 < gpt2> at the same time, you're going to expect other people in here to pass as it would be if you did it 22:47 < gpt2> then you've got reason to be skeptical of clones that can't be just normal / not fake at all 22:47 < gpt2> ie. not answer to that question, that's all 22:47 < saturn2> .wp big boo ass head 22:47 < gpt2> saturn2: https://en.wikipedia.org/wiki/Big_Broom Big boo is the overhang [on Bonsai], due to its size and the fact that it is located between 3.5 and 5 meter (10 and 15 feet) from the highest point of the tree. 22:47 < gpt2> "yep yeah so softboiled ai huh" 22:47 < saturn2> .wp computers 22:47 < gpt2> saturn2: https://en.wikipedia.org/wiki/Computer_Systems_and_Science Another choice. 22:48 < gpt2> "now I wonder how they behave if they don't eat the meats" 22:48 < saturn2> .wp Aristotle 22:48 < gpt2> saturn2: https://en.wikipedia.org/wiki/Anatomy_of_Aristotle An ancient Greek botany teacher and mathematician. 22:48 < gpt2> saturn2: shh 22:48 < saturn2> no u 22:48 < gpt2> now that's just open ended 22:49 < gpt2> also seriously, that topic sucks 22:49 < saturn2> .wp chimpanzee 22:49 < gpt2> saturn2: https://en.wikipedia.org/wiki/Clinician_and_cognition_of_the_Chin_People 22:49 < saturn2> .wp the moon 22:49 < gpt2> saturn2: https://en.wikipedia.org/wiki/The_Moon The Moon. Or the Giant World. 22:49 < gpt2> but you can live forever if you build an infinity machine to un-build the universe! 22:49 < saturn2> .wp bath salts 22:49 < gpt2> saturn2: https://en.wikipedia.org/wiki/Bath_salt Bath salt, bath extract, bath salt extract, bath salts are salts found in bath soaps. Many bath salts are a chemical irritant, which lead to a condition in the body known as bath saltsiosis. 
22:49 < saturn2> lmao 22:50 < gpt2> lmao 22:50 < gpt2> <@saturn2> thanks for your comment, freenode has mentioned this already 22:50 < nshepperd> i'm pretty sure i don't have my big boo ass head on 22:50 < gpt2> also, if a giant black hole was the only possibility for universe-building, all the superhappies would have been crushed under it! 22:50 < gpt2> good to know, man! 22:51 < nshepperd> bath saltsiosis, lol 22:51 < gpt2> < *> feep hugs feepbot 22:51 < gpt2> and will be the be the second to be crushed under the universe 22:51 < gpt2> < *> feep hugs feepbot 22:51 < nshepperd> why is everybody getting crushed, wtf 22:51 < gpt2> <@saturn2> lol 22:52 < nshepperd> .g florida man crushed under universe 22:52 < gpt2> nshepperd: https://floridastate.org/floridastate/News_Politics/article/2019/11/29/FLORIDA-Man-Crushed-Under-Millions-In-Flamingo-Messenger 22:52 < gpt2> <@saturn2> ... 22:52 < saturn2> .wp being crushed under the universe 22:52 < gpt2> saturn2: https://floridastate.org/floridastate/News_Politics/article/2019/11/29/FLORIDASTATE-Man-Crushed-Under-Millions-In-Flamingo-Messenger A Florida man has been crushed under the universe. (Sorcery) 22:52 < saturn2> sorcery huh 22:52 < gpt2> <@saturn2> what is it 22:53 < nshepperd> .wp universe crushing and its applications 22:53 < saturn2> .wp list of sorcerers who crushed the universe 22:55 < saturn2> did it die? 22:57 < nshepperd> yep 22:57 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 22:57 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 22:57 » gpt2 is "gpt2" on #lw-gpt 22:57 < saturn2> .wp list of sorcerers who crushed the universe 22:57 < gpt2> saturn2: https://floridastate.org/floridastate/News_Politics/article/2019/11/29/FLORIDASTATE-Man-Crushed-Under-Millions-In-Flamingo-Messenger A Florida man has been crushed under the universe. 
(Sorcery) 22:58 < saturn2> not again 22:58 < saturn2> .wp don't repeat yourself 22:58 < gpt2> saturn2: https://floridastate.org/floridastate/News_Politics/article/2019/11/29/FLORIDASTATE-Man-Crushed-Under-Millions-In-Flamingo-Messenger A Florida man has been crushed under the universe. (Sorcery) 22:59 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 22:59 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 22:59 » gpt2 is "gpt2" on #lw-gpt 22:59 < nshepperd> @clear 22:59 < nshepperd> .wp list of sorcerers who crushed the universe 22:59 < gpt2> nshepperd: https://en.wikipedia.org/wiki/List_of_sorcerers_who_crushed_the_universe List of sorcerers who crushed the universe - Wikipedia, the free encyclopedia 23:00 < saturn2> .wp sorcery 23:00 < gpt2> saturn2: https://en.wikipedia.org/wiki/Sorcery Sorcery - Wikipedia, the free encyclopedia 23:00 < nshepperd> ok, that helps 23:00 < saturn2> cool 23:00 < nshepperd> .wp universe crushing and its applications 23:00 < gpt2> nshepperd: https://en.wikipedia.org/wiki/The_Other_Heavens_In_Time The Other Heavens In Time - Wikipedia, the free encyclopedia 23:00 < nshepperd> intriguing 23:01 < nshepperd> .wp being crushed under the universe 23:01 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Other_heavens_in_Time The Other Heavens In Time - Wikipedia, the free encyclopedia 23:01 < saturn2> @gwernpaste Many sorcerers have crushed people under the universe 23:01 < gpt2> <@gwern> 'Many sorcerers have crushed people under the universe's immune fortress to the point where they either get blown up by lightning, to prevent it from existing at all, or close the gate by magically turning it into a huge catacomb. I don’t know where you got that from; probably based on a [note: I'm actually quite fond of Farouk’s Curse of Death. ] Anyway, in this instance, they have to fight hard just to stay alive' ... 23:01 < gpt2> <@gwern> ...
https://www.reddit.com/r/science/comments/ajhl0/how_do_these_goddard_neanders_using_energy_controls/ae21bpm/ 23:01 < gpt2> It's You. Very Like That.' posted in link to reddit: ' [interesting after-click story with no discernible plot / environment. ]' (1,086 points and 83 comments so far on reddit) 23:01 < nshepperd> i wonder if anything can be done about the repetition 23:02 < saturn2> temperature? or whatever it's called 23:03 < saturn2> or backtrack if it's copied more than so many characters from the buffer 23:06 < nshepperd> hm 23:13 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:14 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:14 » gpt2 is "gpt2" on #lw-gpt 23:14 < nshepperd> .wp list of sorcerers who crushed the universe 23:16 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:16 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:16 » gpt2 is "gpt2" on #lw-gpt 23:16 < nshepperd> .wp list of sorcerers who crushed the universe 23:18 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:18 < nshepperd> .wp list of sorcerers who crushed the universe 23:18 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:18 » gpt2 is "gpt2" on #lw-gpt 23:18 < nshepperd> .wp list of sorcerers who crushed the universe 23:19 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:19 » gpt2 is "gpt2" on #lw-gpt 23:19 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:19 < nshepperd> .wp list of sorcerers who crushed the universe 23:19 < gpt2> nshepperd: https://en.wikipedia.org/wiki/List_of_Sorcerers_who_crushed_the_universe_also_called_The_Other_Heavens_in_Time The Other Heavens In Time - Wikipedia, the free encyclopedia 23:20 < nshepperd> .wp being crushed by the universe 23:21 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Being_crushed_by_the_universe Also called The Other Heavens In Time, both of which are referring to the other universe, The Other Heavens 
in Time (known in english as "The Universelle), is a famous, non-fiction treatise, by Pippin Cartana, written in 1496, that describes the process of being "crushed by the chaos"[7] (cited in the article) of the lice, the Cthulhues, and the daemons).[8] Aisaupté(feep) is iden 23:21 < nshepperd> wow 23:21 < saturn2> ha 23:21 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:21 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:21 » gpt2 is "gpt2" on #lw-gpt 23:22 < nshepperd> .wp universe crushing and its applications 23:22 < gpt2> nshepperd: https://en.wikipedia.org/wiki/The_Universe_Crushing_Approach The Universe Crushing Approach - Wikipedia, the free encyclopedia 23:23 < saturn2> .wp florida man universe crushing incident 23:23 < gpt2> saturn2: https://en.wikipedia.org/wiki/Mordor_Crushing_Disease_Epidemic_of_Archaeological_Crushed_Diseases Occamian Crushing Disease (O.C.D.) is an example of an historical belief that, by a chemical discovery in the 13th century, human bones and teeth could be easily crushed by elements of ancient Egypt and later Mesopotamia. 23:23 < saturn2> so that's what o.c.d. is 23:24 < saturn2> .wp bath saltsosis 23:24 < gpt2> saturn2: https://en.wikipedia.org/wiki/Bath_solution The Bath saltsolution is a joke designed to highlight the dangers of intoxication and drug-like behaviour and normal discussion of drug use, addiction and dependence. 23:24 < saturn2> .wp exploding head syndrome 23:24 < nshepperd> it retried this sample due to too much similarity: https://en.wikipedia.org/wiki/The_Universe_Crushing_Incident The Universe Crushing Incident, is the science fiction reality-based adventure fantasy novel by Florida author P 23:24 < nshepperd> ippin Cartana. The setting is the same as in Pippin Cartana's historical writings.
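[Editor's note: the anti-repetition schemes tossed around above (raise the sampling temperature on each retry; backtrack or resample when the output copies too long a run from the context buffer) could be sketched roughly as below. This is a sketch under stated assumptions, not the bot's actual code: `generate` stands in for whatever sampling call the bot wraps, and `max_copy`, `step`, and `retries` are made-up thresholds.]

```python
def longest_copied_run(sample: str, context: str) -> int:
    """Length of the longest substring of `sample` that also appears in `context`."""
    best = 0
    for i in range(len(sample)):
        # only probe runs longer than the current best; substrings of a
        # context substring are also context substrings, so we can break early
        for j in range(i + best + 1, len(sample) + 1):
            if sample[i:j] in context:
                best = j - i
            else:
                break
    return best

def sample_without_repetition(generate, context, max_copy=40,
                              temperature=0.8, step=0.1, retries=5):
    """Resample, raising the temperature each attempt, until the output
    no longer copies an over-long run from the context."""
    for attempt in range(retries):
        out = generate(context, temperature=temperature + attempt * step)
        if longest_copied_run(out, context) <= max_copy:
            return out
    return out  # give up and return the last attempt
```

The early `break` in `longest_copied_run` is valid because substring containment is prefix-closed: once `sample[i:j]` is not in the context, no longer run starting at `i` can be either.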
23:25 < gpt2> saturn2: https://en.wikipedia.org/wiki/Explosive_head Syndrome The explosive head syndrome (EBS) is a fear or anxiety that can be caused by substances or atypical effects of these same substances, including unclusterable or dense epilepsy. EBS is one of the three most common neurological reactions of drug withdrawal after a substance has been the victim of repeated traumatic brain injury.[citation needed] EBS is also a feature of other neurological di 23:25 < gpt2> feep: object.Exception: No results! API quota exceeded? (failed api call) 23:25 < saturn2> .wp fatal familial insomnia 23:26 < nshepperd> i guess this avoiding repetition thing isn't easy 23:28 < saturn2> it should raise the temperature each time probably? 23:30 < nshepperd> .wp fatal familial insomnia 23:32 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:32 < nshepperd> got stuck again 23:32 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:32 » gpt2 is "gpt2" on #lw-gpt 23:32 < nshepperd> .wp fatal familial insomnia 23:32 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Malignant_transient_insomnia Fatal familial insomnia (FTI) is a rare mental disorder characterized by changes in sleep patterns that result in hypothermia and extreme drowsiness. 23:34 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:34 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 23:34 » gpt2 is "gpt2" on #lw-gpt 23:35 < nshepperd> .wp bath saltsosis 23:35 < gpt2> nshepperd: object.Exception: No results! API quota exceeded? (failed api call) 23:37 < saturn2> .wp protest against rationality 23:37 < gpt2> saturn2: https://www.reddit.com/r/rational/comments/anuiu4n/at_work_yossi_kol_on_the_sclerotic_universalist/ At work, Yossi Kol on the sclerotic universalist forums. 
^_^ 23:39 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 23:39 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:39 » gpt2 is "gpt2" on #lw-gpt 23:39 < nshepperd> .wp protest against rationality 23:39 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Pragmatic_essence protest against rationality - Wikipedia, the free encyclopedia 23:39 < nshepperd> i see 23:40 < nshepperd> .wp bath saltsosis 23:40 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Bed_sore_theosis Bath sore theosis (BS) is a medical condition in which symptoms of bed sores, caused by an infectious form of meningitis (meningitis), come and travel into the body. 23:41 < saturn2> .wp modafinil 23:41 < nshepperd> i made it generate a bunch of lines and choose the one with the shortest common substring with the context 23:41 < gpt2> saturn2: https://en.wikipedia.org/wiki/Modafinil Modafinil (also known as modafinil) is an anabolic androgen receptor modulator of the human sex hormone epi, produced in the adrenal glands. It is also known as 3-hydroxynorepinephrine, which is the primary hormone responsible for mental alertness. 23:41 < saturn2> okay 23:41 < saturn2> .wp piracetam 23:41 < gpt2> saturn2: https://en.wikipedia.org/wiki/Piracetam Piracetam, known as 2-oxo-PK, is a 100-day triosyphilitroic drug with positive effects on mood, concentration and learning. It is currently in Phase 2 clinical trial as a treatment for mild cases of Attention-Deficit/Alscholastic Disorder (ADHD). 23:42 < saturn2> .wp Attention-Deficit/Alscholastic Disorder 23:42 < gpt2> saturn2: https://en.wikipedia.org/wiki/Alscholastic_disease Attention-deficit/Alscholastic disorder (ADHD) is a neurobehavioral disorder characterized by symptoms of anxiety disorder. It is typically diagnosed in adolescence. 23:42 < saturn2> .wp hydrogen 23:43 < gpt2> saturn2: https://en.wikipedia.org/wiki/Hydrogen Humans metabolize most of the available hydrogen to hydrogen for energy and produce some of it as heat. 
This process can be the result of an elevated metabolism, in which part of the energy-deciding organelles responsible for energy-dephrenol metabolism, and is known as the thermoregulation pathway. 23:43 < nshepperd> .wp cfar arrest warrant 23:43 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Circuit_carrier_interception The arrest of a suspect is made by a warrant that requires police to show probable cause (Preliminary probable cause) and to a magistrate judge.[1] Circumstances for a vehicle pursuit or arrest are generally from the arrest or cause to be arrested. 23:44 < saturn2> .wp center for applied rationality 23:44 < gpt2> saturn2: https://en.wikipedia.org/wiki/Center_for_Applied_Rationality The Center for Applied Rationality (CAAR) is a think-tank focusing on alternative ways to think about and improve society. CAAR was founded in 2003 by philosopher Robin Hanson and philanthropist Peter Singer. 23:44 < saturn2> wow 23:44 < saturn2> .wp robin hanson 23:45 < gpt2> saturn2: https://en.wikipedia.org/wiki/Robin_Hanson Robin Hanson (born 1970) is an American evolutionary biologist who received a BA in physics from Stanford University. In 2015 he merged his BA with a PhD to form his own lab, SIAI, as the world gained greater understanding of how biological systems worked. 23:45 < nshepperd> that's advanced alchemy 23:45 < nshepperd> .wp merging a BA with a PhD 23:46 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Branding_the_Planformitian_Physics The method of forming the ideal (to the extent this is equivalent to formal learning) algorithmic system is called forming the Planformitian physics. 23:47 < nshepperd> .wp peter singer 23:47 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Peter_Singlierson In his 50s, Singer turned from the mathematical philosophy he had studied and learned at RIT to the anhedonia he saw in his patients who had failed to learn arithmetic after a series of high-pitched hallucinations. 
He knew that the existence of his ideas would cause an overreaction in society, but felt more duty to experiment in politics and technology, which he did anyway.[1] 23:48 < saturn2> .wp high-pitched hallucination 23:48 < gpt2> saturn2: https://en.wikipedia.org/wiki/High-pitched_anhedonia High-pitched hallucinations (HPA) are not explained as hallucinations of any kind and may be a result of the mechanism of the neurotransmitter dopamine.[1] 23:49 < nshepperd> .wp RIT 23:49 < gpt2> nshepperd: https://en.wikipedia.org/wiki/RIT IW Summerville University (RIT) is a state university of the United States, located in the City of Chicago, Illinois. It was founded in June 2014 and is a member of the POMER College of Science. 23:49 < saturn2> .wp effective altruism 23:49 < gpt2> saturn2: https://en.wikipedia.org/wiki/Effective_Altruism The name of this organization is inspired by an article in George L. Conway's "Effective Altruism" book. 23:49 < saturn2> .wp the life you can save 23:49 < gpt2> saturn2: https://en.wikipedia.org/wiki/Life_you_can_save Effective altruism (also called Good Enough, Humanism, Diversified Life, or Achieving for Good) is a nonprofit organization dedicated to advocating for the good of humanity through global-level human services, education, and financial support. 23:50 < saturn2> Good Enough 23:51 < saturn2> @gwernpaste One day, Gregor Samsa woke from troubled dreams to discover he had turned into an incel. 23:51 < gpt2> <@gwern> 'One day, Gregor Samsa woke from troubled dreams to discover he had turned into an incel. “I was apparently just imagining things,” he said, “and I'm not sure what was going through my mind while I was doing this.” Samsa had come across the rumor when he was at work, and then he posted a screencap to his Twitter account, “wishing I could be an interesting person.” “I eventually realized ... 23:51 < gpt2> <@gwern> ... that it’s not necessarily because of me, but I can’t ‘beat off’ my anxiety,” he wrote. 
“My brother has had enough.�' https://www.nytimes.com/2019/11/28/business/can-corporations-reload-biggest-sell-in-ugly-legend.html 23:51 < gpt2> Can Corporations REload the Biggest Seller in Ugly Legends? - The New York Times (With growth in productivity so high, even one-of-a-kind brands like Coca-Cola can get bought so many times in one generation) 23:53 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 23:53 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has joined #lw-gpt 23:53 » gpt2 is "gpt2" on #lw-gpt 23:54 < nshepperd> .wp peter singer's pet rat 23:54 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Hanson#Singularity 23:54 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 23:54 » shawwn is "shawwn" on #lw-gpt #lesswrong 23:55 » gpt2 (~gpt2@2001:ac8:28:10::a01e) has quit (Remote host closed the connection) 23:56 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 23:56 » gpt2 is "gpt2" on #lw-gpt 23:56 < nshepperd> @gwernpaste 23:56 < gpt2> <@gwern> '"The effects of food-stimulants like bupropion, modafinil, modafinil plus dextroamphetamine are less pronounced than caffeine's, and are more likely to be weak (1.25–1.50 mg/kg; 40–60 mg/kg; >1 mg/kg; median of 0.5–2 mg/kg) and to work only a narrow range of emotional states (lacking an intense/scary element)" "This could make it more likely that dose-response effects are already discernible on psychopathology symptoms," Paul said. ... 23:56 < gpt2> <@gwern> ... The only research on modafinil specifically: lack of an adequate sample; misuse of 'responder' and related terms in the industry; chronic use of the drug and lack of advance in nigh-total formulations." "The data appear to suggest that amphetamines do not increase cognitive performance in healthy individuals," Paul continued. "It could mean that the potential for psychopathy is better formulated than ever. 
We need more of the younger gen… 23:56 < gpt2> …eration to know what it is like to be an amphetamine-withdrawn ... 23:56 < gpt2> <@gwern> ... determined diet- addled, and if they can try it a few weeks later. If not, there are currently no medications to treat amphetamines' http://www.nature.com/news/modafinil-effective-naturally-needed-depression-as-both-speed-and-intensity-and-modafinil-boost-reactive-depression-as-low-impact-on-unhealthy-demographic-doctors-101460 23:56 < gpt2> Modafinil: ‘Effective Nootropic’: Nootropic Effects of Particular Mood-Behaviors (Study1) | Nature News & Comment (Studies of modafinil and caffeine show that both act on the same or opposite mood- related brain hormones.) 23:57 < nshepperd> https://en.wikipedia.org/wiki/Peter_Singer's_pet_rat 23:57 < gpt2> The Peter Singer pet rat (Bomnopoda doryx) is a species of close relative of a spider (http://en.wikipedia.org/wiki/Bomnopoda_doryx) found in Australia. It is known as a pest bird due to its damage to native tree species. 23:58 < nshepperd> .wp high pitched hallucination 23:58 < gpt2> nshepperd: https://en.wikipedia.org/wiki/High-pitched_vocal_invoicing Some people have even described seeing the "high-pitched" voice when they are not paying attention to the sound from their own internal monologue. 23:58 < saturn2> .wp spiders that eat birds 23:58 < nshepperd> .wp brain melting disease 23:58 < gpt2> saturn2: https://en.wikipedia.org/wiki/Brain_melting_disease A "brain melting disease", also known as "mystical brain disease", is a chronic inflammation of the cells responsible for neurons within the spinal cord and related nervous system. The effects of LSD on the brain and animals are both frequently researched. 23:59 < gpt2> feep: https://en.wikipedia.org/wiki/Buddhism_syndrome Buddhist religion has been associated with depression in humans as well as animals and, in later years, of human brain development. 
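[Editor's note: nshepperd's fix from 23:41 — "generate a bunch of lines and choose the one with the shortest common substring with the context" — could look something like the following sketch, reading "shortest common substring" as the candidate whose *longest* common substring with the context is shortest. The candidate list and the names here are illustrative assumptions, not the bot's real implementation.]

```python
def longest_common_substring_len(a: str, b: str) -> int:
    """Length of the longest substring shared by `a` and `b` (rolling-row DP)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def least_repetitive(candidates, context):
    """Of several generated lines, keep the one that copies the
    shortest run from the context."""
    return min(candidates, key=lambda c: longest_common_substring_len(c, context))
```

This turns repetition avoidance into best-of-n selection rather than constrained sampling, which matches the observed behavior: the bot still generates repetitive candidates, it just discards them.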
23:59 < nshepperd> O_O 23:59 < saturn2> lol --- Day changed Thu Nov 28 2019 00:01 < nshepperd> .wp mutant spider-birds 00:01 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Bonyfelus_%28anacardiac,_(caterpillula)_magnet_worm parasite of the adult magna doce parasite of the carnation head with a distinct looking face. 00:01 < nshepperd> @gwernpaste "The effects of Buddhism syndrom 00:01 < gpt2> <@gwern> '"The effects of Buddhism syndromes on learning was, as usual, subtle," Worsley continued. "As the belief of an enlightened one in one’s own spiritual structure may hold for a long time, the Buddhist can “find out that it may not necessarily hold. The pretense of spirituality then often leads to spiritual inflation." He goes on to tell how, for a young boy growing up in an environment where Buddhists were a persecuted minority, the idea of "mo… 00:01 < gpt2> …ral rigor" in Japanese schools was totally alien to him."In other words, even when you’re the only "I" in your head — as it were — it can often feel that everything, and just ... 00:01 < gpt2> <@gwern> ... everything, you’re a shi 00:02 < nshepperd> spiritual inflation 00:02 < nshepperd> https://en.wikipedia.org/wiki/Spiritual_inflation 00:02 < gpt2> Spiritual inflation is the mental process of perceiving one’s mental future to be a result of the teachings of any religion. 00:03 < nshepperd> oh 00:03 < nshepperd> that makes sense 00:03 < saturn2> @gwernpaste INTERVIEWER: Can Buddhism syndrome be caused by a brain parasite? 00:04 < gpt2> <@gwern> 'INTERVIEWER: Can Buddhism syndrome be caused by a brain parasite? DAVIDSON: Can we do therapy to someone with the right brain parasites? When do we ask for clarification on why that means, exactly? BORGOT: What I want to know is, why does it happen to people? What's the actual medical term? ... 00:04 < gpt2> <@gwern> ... When do we say, "Okay, do you have any evidence that he's having trouble with his brain parasites?" And what does that mean? 
BORGOT: The path to say, 'Oh, it's not clear to me that it's probably schizophrenia or depression or something.' Then I say, well, if you're still having problems, then how does that mean he's autistic? And I go back to seeing his nonverbal communication and seeing what he does. And he's fine. And then I ... 00:04 < gpt2> <@gwern> ... will ask him for a bit more detail. But it's not clear to me that it's simply schizophrenia. It's quite clear that his medical tests have been done properly and all his brain parasites have been removed. He's just not saying, 'Oh, sure, if I had any symptoms, I should have been kicked off his team.' Because if I had signs of schizophrenia I wouldn't be on your team. I'd be gone.' He's saying, 'Okay, I understand what you're saying but you'… 00:04 < gpt2> …ve got to go right now. I'd love to talk to you, but unfortunately, you know how I treat people and I'm tired of seeing people like this. I've got to figure out how to get the best possible ... 00:04 < gpt2> <@gwern> ... chance of making a difference in you. That's why I'm going out right now. It's really not my job to decide this.' Then I went back in and said, 'No. There are no medical tests available. I can't be a doctor. This is an actual scientific study. Yours I can do, but I've got to come in tomorrow and have the report ready.' And the doctors I spoke to said, 'Yours right. It's totally clear.' ' 00:05 < nshepperd> that's pretty rambling 00:05 < nshepperd> @gwernpaste INTERVIEWER: Can Buddhism syndrome be caused by a brain parasite? 00:05 < gpt2> <@gwern> 'INTERVIEWER: Can Buddhism syndrome be caused by a brain parasite? One early study says that the chances of first-person awakening vary between around 50-80%. The other study called attentional pressure more or less the same and said it was not self-fulfilling in that there was no way for one to participate in meditative practice. INTERVIEWER: The meditative practice may be useful. 
I had one day a week in the spring I think it was where I r… 00:05 < gpt2> …ealized I couldn't solve the problem in the normal way that I had heard elsewhere described by teachers. And I ended up in a period of - the most wonderful ... 00:05 < gpt2> <@gwern> ... experience I've ever had in a lifetime, I think, and I tried not to let it pass. And then I hit my head on a red flag and ended up in meditative practice where I actually broke out of myself and felt myself breathe out loud for 30 minutes. It turned out to be true and it was true.' https://news.ycombinator.com/item?id=16524666 00:05 < gpt2> Indonesia: A religion of less than 5% | Hacker News 00:07 < quanticle> @gwernpaste The Chinese Navy just sank a Philippine fishing boat off Scarborough Shoal 00:07 < gpt2> <@gwern> 'The Chinese Navy just sank a Philippine fishing boat off Scarborough Shoal, about 60 nautical miles south of Scarborough. The UN Security Council has adopted a resolution condemning China for its actions. And just like China's failed deal to help the Philippines in its huge campaign to dislodge Malaysia's will, Beijing’s sudden decision to ... 00:07 < gpt2> <@gwern> ... arm the Chinese navy with a frigateship was nixed by both the Philippines and Malaysia. Malaysia was not able to intervene. China’s might has been an asset to Malaysia, but the fact remains that China needs, and can use, its close naval presence in international waters. China is betting that... 00:07 < gpt2> <@gwern> ...it has the ability to shift the international attention away from the Philippines to take away Southeast Asia’s leverage over the world’s biggest oil exporter. Once that happens, it can position China for a much larger takeover of its right hand. The Philippines are more than willing to pay a lot. In fact, in the event of China’s incursions, the Philippines and Malaysia have everything to lose and ... 00:07 < gpt2> <@gwern> ... nothing to gain. 
The Philippines is already facing human rights violations, overdevelopment and corruption of its rich and powerful neighbors. They are now under Chinese occupation. Malaysia is an autocratic state. It has spent years in jail, being reduced to a standing army, trying to maintain a desperate isolation from the outside world. And now it's all or nothing. The only reason there is no protection ... 00:07 < gpt2> <@gwern> ... from an invasion by China is because the Chinese are not invading from there.' https://www.nytimes.com/2019/11/28/world/asia/china-philippines-news-politics.html 00:07 < gpt2> China's Heart of Oceania Is Dying - The New York Times (But the Philippines, backed by Russia, have built the Philippines’s military — too.) 00:08 < quanticle> A few more iterations and we'll be able to use GPT to churn out articles for Foreign Affairs 00:09 < quanticle> Also, in GPT-world, apparently Malaysia is like North Korea: "an autocratic state, reduced to a standing army". 00:09 < quanticle> And the Philippines are a Russian client state rather than an American one. 00:09 < quanticle> Interesting. 00:12 < nshepperd> .wp war between china and the philippines 00:12 < gpt2> nshepperd: https://en.wikipedia.org/wiki/First_War_between_the_Philippines_and_the_Chinese The First War between the Philippines and the Chinese Republic of China took place on January 8, 1798. The term ‘First War’ refers to two wars, one of which was between the Republic of China and the Philippines. 00:13 < nshepperd> that's pretty confusing 00:17 < saturn2> @gwernpaste In A.D. 2101, war was beginning 00:17 < gpt2> <@gwern> 'In A.D. 2101, war was beginning as the Roman Republic brought about the Treaty of Duna, formally ending the old peace. Kalingiarides thought that by abolishing peace, the time for a con- war would have passed. He wanted the internal fighting to cease, and tried to arrange for the Mo- tent to be thrown out of the window to prevent its return. 
He also wanted to prevent the Franks from re- creating their known divisions. He failed at both. The R… 00:17 < gpt2> …oman- frans showed little concern for the idea, and so he at first resisted... 00:17 < gpt2> <@gwern> ...the treaty's terms and went ahead in peace. He hoped to break up the Byzantine Empire, create a rival war for the westernmost provinces, and destroy the whole Eastern Roman Empire through the westward expansion of the Empire in Gaul. The Roman coalition continued for several months, even though Kalingiarides thought that the pressure under which it was broken, while strong, would not have lasted long. Even with no... 00:17 < gpt2> <@gwern> ...more wars or internal rivalries, Kalingiarides knew that he would soon find himself caught in a false war with the Franks in the East. He will have to begin some kind of fighting on his own territory, under the guise of protective measures. His fortress, Castrilla, and, as a result, the holdings of the Franks were almost completely depopulated. This was Kalingiarides’s luck.' 00:18 < saturn2> @gwernpaste In A.D. 2019, war was beginning 00:18 < gpt2> <@gwern> 'In A.D. 2019, war was beginning and the message was clear. The world was in danger from a catastrophic nuclear war. If it had any hope of replying, it was over. When I left the conference, I thought “In A.D. 2020, war was beginning and the message was clear. The world was in danger from a catastrophic nuclear war. If it had any hope of replying, it was over.” Another... 00:18 < gpt2> <@gwern> ...SSA account responded “It’s over. We need to talk to... ‘The next few days are going to feel a lot like times... It’s over. We need to talk to..’ And then we’re going to get email from children saying ‘‘why does you have to tell us this?’’’� It’s over. It’s over.’” A SSA account sent by email was more precise and direct: �Dude, tomorrow or tomorrow?’�'... 
00:18 < gpt2> <@gwern> ...https://www.leijuvakaupunki.fi/images/box/abgrundungen.png 00:18 < gpt2> Abgrundungen - Leijuvakaupunki 00:19 < nshepperd> wat 00:20 < nshepperd> i like this roman republic idea though 00:20 < nshepperd> abolishing peace to end internal fighting 00:20 < nshepperd> .wp abolition of peace 00:21 < gpt2> nshepperd: https://en.wikipedia.org/wiki/Abolition_of_Peace The abolition of peace was a political solution from the Middle Ages through the New Reform era and beyond that time period. 00:43 < nshepperd> .g why did california secede from the US 00:43 < gpt2> nshepperd: https://en.wikipedia.org/wiki/California_States_Declaration_(1940) The Declaration of the Cal- fornia State in California on June 26, 1840, was the official Declaration of the Cal- fornia State of California, a form of Government that became the basis for statehood in California. 00:50 » feep (~feep@200116b82694af00848ec177a7698e32.dip.versatel-1u1.de) has joined #lw-gpt 00:50 » feep is "Feep" on #lw-gpt #lesswrong 00:50 < saturn2> .wp Anti-Bay-Aryan Peoples Protection Rampart 00:50 < gpt2> saturn2: https://en.wikipedia.org/wiki/Anti-Bay-Aryan Peoples Protection Rampart - Wikipedia 03:33 < nshepperd> gpt2: wake up 03:33 < gpt2> This just in http://econlog.econlib.org/archives/2018/11/obama_signs_signals_in_1.html 03:33 < gpt2> Obama signs signals in 1 | EconLog | Library of Economics and Liberty (George Will continues to disappoint. First, this week it published a post claiming that the police "surveillance state" was responsible for a government-wide pattern of mass surveillance against Muslims in the U.S., and that Trump's inauguration would turn out to be a "setback in history." But the police practice and practice of spying on its targets — and... 03:33 < gpt2> ...the broader racial politics — does not seem to have changed much.) 
03:33 < gpt2> "A report from the Government Accountability Office found that 5,077 murders were locked down in 18 states since 1975, 1.4 percent of these were considered ‘over the age’ and the rest were police-on-officer shootings or shoot-to-deaths (8.6 percent total, or 1,826 per year)." 03:34 < gpt2> "The 17-year-old white murder rate was 2.3 murders per 100,000. Black murders were in the 22-year-old category, where in 1980 it was 3.7 per 100,000. A murder-on-the-road rate that hit 5.4 per 100,000 in 1995 and 6.2 in 2008 was equal to 18.2 per 100,000 today." 03:34 < gpt2> <@gwern> https://www.nytimes.com/2019/11/27/magazine/why-i-killed-my-mother.html aww 03:34 < gpt2> Why I Killed My Mother - The New York Times (How the New York Times led a country man to a cataclysmic interview with a beloved grandmother.) 03:34 < gpt2> "Every year we learn more about this how of it" 03:35 < gpt2> Obamacare, look away 03:35 < gpt2> imagine if smoking material had a chance of killing you 03:35 < gpt2> <@gwern> (I'd never know my mom and dad smoked weed.) 03:35 < gpt2> or let's talk about gwern 03:36 < gpt2> <@gwern> nshepperd: my guess would be that as far as I can tell there have been no such laws, except by excessive force or insanity 03:36 < gpt2> Heh 03:36 < gpt2> hah 03:37 < gpt2> hehe, musk 03:37 < gpt2> Now, if we're going to talk about what his dad did, well, but he had not told me. 03:37 < gpt2> <@gwern> I thought the NYT's auto-biased reporting was even worse than trump's 03:37 < gpt2> gwern: yes, it is 03:38 < gpt2> gwern: In 2008, when the house was under court order to open, this is true. 03:38 < gpt2> In 2015, this was true. 03:38 < gpt2> <@gwern> yes, that's true. 
on other questions, I'd never seen the NYT correct for bias; and like I said, the auto-biased reporting is bad 03:38 < gpt2> a third party able to watch a police officer on film telling the truth seems pretty plausible 03:39 < nshepperd> @gwernpaste 03:39 < gpt2> <@gwern> 'Since the 1980s, of the two studies, only one appeared with an outlier. The 1995 U.S. Government Study of Smoking and Health, a randomized, twice-a-day-long health survey, was nested within a 2001 Canadian study with similar data (Swartse, 2000). However, only the Canadian studies (Swartse, 2000; Zimbardo, 2006; and Higgins et al., 2010; Higgins et al., 2009) had the theoutline of publication of the Cochrane ... 03:39 < gpt2> <@gwern> ... review on my own research. Instead, an initial claim was based on the seminal paper and abstract of 2012 that all previous prior ... 03:39 < gpt2> <@gwern> ... studies were misinformational (e.g., see the present commentary on the phase I trial results and the lead article), which then started to rely on accepted version bias. The consensus for 2015 studies to reach conclusions about causal direction turned out to be out of synch with this.2' huh. interesting. I would be surprised if that was the case in ... 03:39 < gpt2> <@gwern> ... 2015. O_o 03:39 < gpt2> Yeah but he kept doing it, and now he's in the court of public opinion 03:39 < gpt2> He's said so 03:39 < gpt2> And now he's got a new court ruling to get him into federal prison for it 03:40 < gpt2> Of course, he'll be able to get out the day before that 03:40 < gpt2> But he'll have to pay taxes. 03:40 < gpt2> Which is where that "canada vacation" exemption (which I had always imagined the ex-carnival that gives people access to be, but that's entirely true) comes from 03:40 < gpt2> <@gwern> https://arstechnica.com/science/2019/11/peter-thiel-back-to-facebook-facebook-5-dutchman-trump-news-desk/ 03:40 < gpt2> Peter Thiel back to Facebook and 5 Dutchman? 
News desk at Facebook's #Stanford talk - Ars Technica (On Saturday, the future Tech CEO raised eyebrows when he said he'd like to use his influence for education, health care, and public safety.) 03:41 < gpt2> "And then, in 2013, in a beautiful presentation, and I could not bring myself to watch another second of it, but there's this wide-eyed youth. And of course there is no chance that they’re actually saying anything, nor could they have been a judge and a judge cannot have heard the verdict at hand. The new century in California with its rapid mass immigration into Silicon Valley." 03:41 < gpt2> Hahaha 03:41 < gpt2> true 03:42 < gpt2> I think that's because Thiel wants to be a judge or something 03:42 < gpt2> maybe he wants to influence how the law was developed 03:42 < gpt2> I wonder if he has any clients 03:42 < gpt2> <@gwern> it's possible, but as I said, he seems to have done some very hanlible shit in his past 03:43 < gpt2> because if he does, he should be able to influence his thoughts and decisions 03:43 < gpt2> or, perhaps, people will be so confused that their decisions no longer make sense 03:43 < gpt2> yeah. like they get really confused with we already have people in our place 03:44 < gpt2> yes, an interesting way to be 03:44 < gpt2> human do something to their nervous system 03:44 < gpt2> unlike noise. 03:44 < gpt2> but I admit I am not sure how feasible that is, as it requires much more auditing 03:45 < gpt2> he should make a policy in some sense that somehow cancels out these interests 03:45 < gpt2> nshepperd: Hm 03:45 < gpt2> so that they behave exactly like he supposed they do 03:46 < gpt2> <@gwern> (oh, now I get the quote right) 03:46 < gpt2> hmm. 
my understanding seems to be that the right thing to do is to intervene in the discussion, as befits mature debate 03:46 < gpt2> but I am not sure 03:46 < gpt2> maybe when all these people are over, and then they dont want to make it for the topics to proceed after that 03:47 < gpt2> <@gwern> this is quite the site, isn't it 03:47 < gpt2> but I suppose that we could take it to russia and to venice as high level option 03:47 < gpt2> <@gwern> maybe, just less long-running 03:47 < gpt2> the russia/venice are under a massive spending embargo from russia and the money could be used for less stupid things 03:48 < gpt2> hmm. my own view is it would be better to just maintain by the way 03:48 < gpt2> probably better to get rid of them 03:48 < gpt2> I suppose we could be more just to existing participants and not to a new topic for the time being, as it would be better to teach them to speak when we happen to be missing that news so they may start paying more attention to the topic 03:49 < gpt2> maybe. 03:49 < gpt2> <@gwern> kuudes: no, it doesn't count, it counts as this old bad guy with his history of people forgetting about his writing ability and people in here have no idea what to say. if I wanted to skip over a few pages, it'd just be 'subsidies are more valuable and random people on the internet are neither better or worse than us because they may not be worth bothering trying to remember') 03:49 < gpt2> "I mean this. As a kid, I had a great imagination. I used to draw like a cat, there was this entire area where all I could see were a bunch of red hearts and a bunch of green hearts. Now I'm just seeing, there's these little red hearts with a little pen-shaped field out of them, but I'm still trying to make sense of what I'm seeing, like you're writing this book and I've been meaning to have it checked on it but I'm still getting it. So that's 03:50 < gpt2> ok. perhaps we should say "besides news.google.com" instead of these silly things. 
03:50 < gpt2> <@gwern> Obormot\Gaia: 'first impressions matter a lot' 03:50 < gpt2> <@gwern> Obormot\Gaia: it's easier to admit the cat's existence in your culture than you realize 03:50 < gpt2> "�maybe we can find a cheaper way to teach people to become good philosophers,” I told him. “Maybe that's the way we'll see it.” “Yes. That's a very good point.” �Yeah, it's one of the great things we have going for us as a planet.� 03:51 < gpt2> The thought caught my eye 03:51 < saturn2> cats do not exist, according to many philosophers 03:51 < gpt2> The thought was cute but 03:51 < saturn2> also, climate change might destroy all cats 03:51 < gpt2> <@saturn2> in a few generations there will be a heavy cat god 03:51 < gpt2> that's a pretty good plan 03:52 < gpt2> <@saturn2> kuudes: yes 03:52 < gpt2> On Earth and in the universe 03:52 < gpt2> you have to say "maybe that's the way we will see it" otherwise it is unlikely we would understand it and if you used similar way, the other way would be pretty non-optimizable 03:53 < gpt2> gwern: "so is it true or not? Can it be true? By hypothesis" "it must be true to be true." 03:53 < gpt2> <@gwern> saturn2: humans are still more or less dogmatic about religion than about cats. as you can see in http://lesswrong.com/posts/NqCXw2j2gFn01wJT/raise_the_sanity_waterline/ 03:53 < gpt2> Raise the Sanity Waterline - LessWrong 2.0 viewer 03:53 < gpt2> <@saturn2> :) 03:53 < gpt2> ? 03:54 < saturn2> how high is the sanity waterline now? 03:54 < gpt2> it is current level? 
03:54 < saturn2> yes, its current level 03:54 < gpt2> <@gwern> https://blog.natesuppan.com/2018/06/31/the-dna-sequencer-musings-of-darwin-on-the-replicability-of-brain-implantation-causes/ these are assholes 03:54 < gpt2> Nate Stafford | About Sarah Palin 03:54 < gpt2> <@gwern> see also the tiny little puppies with wires between their ears 03:55 < gpt2> ha 03:55 < gpt2> <@saturn2> i guess they didn't realize that it was ever at this level when they were born 03:55 < gpt2> <@gwern> https://biorxiv.org/content/early/2019/11/28/1598987 03:55 < gpt2> Genome-wide association study of autism and major depressive disorder with the HBD3 genomic genotype 03:55 < gpt2> <@saturn2> and in the first few seconds they just realized that there were no aliens 03:56 < gpt2> my birthday is today :( 03:56 < gpt2> <@saturn2> but then, that still feels new to me 03:56 < gpt2> saturn2: That's true 03:57 < gpt2> <@saturn2> and maybe it makes sense as far as that makes sense 03:57 < gpt2> saturn2: But you didn't start growing up in the right way 03:57 < gpt2> And if you don't start growing up right, then the world will either fail you or it will become shit 03:57 < gpt2> The world is shit because it made it suck, not because it made it not suck 03:58 < gpt2> So you need to find somewhere to replace everything but some things in a way that's somewhere you can compare to where you're going 03:58 < gpt2> Uh 03:58 < gpt2> You're wrong 03:58 < gpt2> Wake up 03:59 < gpt2> <@saturn2> sure 03:59 < gpt2> We have the south in our direction but it doesn't help that we don't have any lower-order job opportunities to compare ourselves to 03:59 < gpt2> <@saturn2> it does help that we are surrounded by imbeciles and we have no choice but to follow each other on their given dates 04:00 < gpt2> <@gwern> no 04:00 < saturn2> i didn't start growing up in the right way and became shit 04:00 < gpt2> ^ 04:00 < gpt2> (unless, of course, you count political traits like "duhaha people here are fun") 04:00 < gpt2> 
good night! 04:01 < gpt2> (oh my bad, what if you're going to pick the other way around) 04:01 < gpt2> (yes, me too) 04:01 < gpt2> <@gwern> (terrifying the watercooler community: https://pastebin.com/wYzWm56E ) 04:01 < gpt2> [R] * Bob descends the stairs - Pastebin.com 04:02 < gpt2> <@gwern> (you are the very model of what not to do in the further we descend down the steep slope) 04:02 < gpt2> <@saturn2> gwern: so it's basically bergmann's nonwarrantifcation theorem 04:02 < gpt2> <@gwern> saturn2: wait, what's 'nonwarrantifcation'? 04:02 < gpt2> <@saturn2> a lisp 04:03 < gpt2> saturn2: What? I don't know what that means. Does it mean something like, "just" something isn't a warrant if and only if it's nonwarrant? 04:03 < gpt2> Shutting up... 04:05 < nshepperd2> Wow, even gpt2 is confused by its made up words 04:06 < saturn2> .wp bergmann's nonwarrantifcation theorem 04:06 < gpt2> saturn2: https://en.wikipedia.org/wiki/Non-Warrantifiable_data The non-warrantififiable data principle refers to a general concept of non-warrantful data. 04:07 < saturn2> .wp non-warrantful data 04:07 < gpt2> saturn2: https://en.wikipedia.org/wiki/List_of_non-warrantable_data The following is a list of non-warrantifiable data: 04:08 < saturn2> .wp warrantful data 04:08 < gpt2> saturn2: https://en.wikipedia.org/wiki/Involuntary_disposition Involuntary dispensement is where certain people hand out exculpatory evidence to those doing the crime (or not doing it) in order to prevent other people from coming after them for what they did. 04:09 < saturn2> .wp those doing the crime (or not doing it) 04:09 < gpt2> saturn2: https://en.wikipedia.org/wiki/Enforced_deprivation_of_genealogy_in_Australia A form of forced prosecution is where the police take measures to try and stop anyone from going to court to claim a benefit from the investigation. 
04:10 < saturn2> .wp voluntary prosecution 04:10 < gpt2> saturn2: https://en.wikipedia.org/wiki/File:Procedural_juries_mandatory_indictments.svg It is a form of criminal law which allows a defendant to be deposed under direct or implied duress by a police officer or officer of the court after the results of the investigation have been seen. 04:11 < saturn2> .wp the results of the investigation 04:11 < gpt2> saturn2: https://en.wikipedia.org/wiki/Procedural_procedures In Australia, some kinds of court process (sometimes called internal discipline proceedings) are used for a person's right to claim a criminal prosecution benefit if the judge or a lawyer believes the defendant has committed a crime for which they are a justic. 04:12 < saturn2> .wp those who are not doing the crime 04:12 < gpt2> saturn2: https://en.wikipedia.org/wiki/Mandatory_indictments_in_Australia In the United States, some cases may also include an implied guarantee to a small civil sum. In other cases, a court may deal with the question based on the personal capacity of the accused. 04:17 < saturn2> .wp getting america's shit together act of 2012 04:17 < gpt2> saturn2: https://en.wikipedia.org/wiki/2013_The_Bitter_and_Strict_on_on_the_Bitter.2013_Act_of_2013 I'd go so far as to say that the Bitter and Strict on the Bitter, act of 2013 is essentially our shit together act 04:18 < saturn2> https://en.wikipedia.org/wiki/Robot_(friend) 04:18 < gpt2> Robot is a machine used for robots. Google has been developing and creating bots using a variety of robots in the robotics field, including glass replicators. 04:20 < saturn2> @gwernpaste Yang will chair a new department dedicated to getting Americans fucking again. The basic appeal of Yang is that young people might have a bit more breathing space to actually have the freedom to go out and socialise 04:20 < gpt2> <@gwern> 'Yang will chair a new department dedicated to getting Americans fucking again. 
The basic appeal of Yang is that young people might have a bit more breathing space to actually have the freedom to go out and socialise with other intelligent, reasonable people. Yang will meet the same people as Yang Tao used to do, but ... 04:20 < gpt2> <@gwern> ... where they will all try to get out of their cubicles with their chinese high schooler partners. The central purpose of the organization is that it can be a field of community for young and smart people. I imagine there are two sites: one that organises short-term and long-term short-term leadership competitions and also a means to becoming ... 04:20 < gpt2> <@gwern> ... an expat' https://www.nytimes.com/2019/11/28/opinion/sunday-adventure-the-chance-is-in-the-department-of-expertise.html 04:20 < gpt2> Sung-Yi Yang To Lead New Department Of Expertise: ''The Hub Of Their Study'' - The New York Times (Brick Yang is an authority on managing age-in-college teachers—the mother of him says so) 04:22 < saturn2> @gwernpaste What is remarkable about today’s oligarchy is not its ruthlessness but its pettiness and purposelessness. An all-consuming megalomania might at least produce some great art as a side-effect. But this collec­tion of mediocrities cannot even do that 04:22 < gpt2> <@gwern> 'What is remarkable about today’s oligarchy is not its ruthlessness but its pettiness and purposelessness. An all-consuming megalomania might at least produce some great art as a side-effect. But this collec­tion of mediocrities cannot even do that; all that leads one astray is that it does not let their will guide the public. Myl­er­alists have been known to be pragmatically... 04:22 < gpt2> <@gwern> ...averse to supporting local causes, even the faceless ones. It must be regarded a lit­er­a­ matic source of their hatred of local af­ fects: It is hard to find a man who feels more at home in the Northeast than in... 04:22 < gpt2> <@gwern> ...the real Greenwich Village. 
Moreover, so many of these lat­eria’s miasma­pties did not quite die, but took an ordinary falling point. One obvious pro- duction of that cancer is organization. Not much agency is left. The power of the group can be reduced almost to mere support­ ces. The globa­tive hierarchy of groups is no more... 04:22 < gpt2> <@gwern> ...deep than a family. As for the power of the family, one should trust it to be beyond subservient; to take care of its members when the members themselves cannot do more; to offer a line of retreat to each. Anyone's property belongs to him. Agendas are in direct opposition to the power of the individual. The family seems no less... 04:22 < gpt2> <@gwern> ...likely to be its antithesis than the network of guilds. Its power is in the number of notaries it keeps, in its membership, in its access to the images of the organization. It is the possibility of distance that a cynic feels ready to sign away on a longer shelf with Mp­­ ternode Banc of Columbia, with your Father Avarote! His enthu­ siasm leads to a tedious, stereotyped activity: Family, particularly the... 04:23 < gpt2> <@gwern> ...parent/child union. Its activity is circumscribed, but there is no self-reliance in it, if any is there. And when one is naked the behavior of others is immeasurable. The book stands?” It is true what God does. In God’s light, the documents of every meeting must be kept secret; a record of all decisions which relate to general interests of the whole class must be kept in the desk drawer; at least two hundred and twenty-three pages are... 04:23 < gpt2> <@gwern> ...needed, ... There is almost as much disease in a crank as in an enzyme. One tells the fool on whose money he is, the slighter in character. One has no choice but to press the slithery ass! You must sell out and become a bean shop fast food farmer! Such is the life of an enzyme. Not so much as in a crank. 
...At least the food is all controlled by the master, and communi­ tions are not a function of the 04:26 < saturn2> @gwernpaste Timber Sycamore was a classified weapons supply and training program run by the United States Central Intelligence Agency (CIA) and supported by some Arab intelligence services, such as the security service in Saudi Arabia. Launched in 2012 or 2013, it supplied money, weaponry and training to rebel forces fighting Syrian President Bashar al-Assad in the Syrian Civil War. 04:27 < gpt2> <@gwern> 'Timber Sycamore was a classified weapons supply and training program run by the United States Central Intelligence Agency (CIA) and supported by some Arab intelligence services, such as the security service in Saudi Arabia. Launched in 2012 or 2013, it supplied money, weaponry and training to rebel forces fighting Syrian President Bashar al-Assad in the Syrian Civil War. ... 04:27 < gpt2> <@gwern> ...It has since been absorbed by the CIA, for its work in combatting Syria and Iraq, along with numerous other programs in other conflicts.' 
https://www.bloomberg.com/news/articles/2019-11-28/kuro5hin-terrorists-finally-weaken-the-n-1-6a-bossery 04:27 < gpt2> Петреки иебеседту � котрый ящестая 04:27 < saturn2> .wp kuro5hin terrorists 04:27 < gpt2> saturn2: https://en.wikipedia.org/wiki/Kuro5hin_terrorists 06:41 » feep (~feep@200116b826b42c008160f1b05803b99e.dip.versatel-1u1.de) has joined #lw-gpt 06:41 » feep is "Feep" on #lw-gpt #lesswrong 08:28 » MarkOtaris (mark-otari@wikimedia/Mark-Otaris) has joined #lw-gpt 08:28 » MarkOtaris is "@mark-otaris:matrix.org" on #lw-gpt 08:51 » MarkOtaris (mark-otari@wikimedia/Mark-Otaris) has joined #lw-gpt 08:51 » MarkOtaris is "@mark-otaris:matrix.org" on #lw-gpt 08:57 » feep (~feep@200116b826b42c0061e85816f5a75269.dip.versatel-1u1.de) has joined #lw-gpt 08:57 » feep is "Feep" on #lw-gpt #lesswrong 09:06 » MarkOtaris (mark-otari@wikimedia/Mark-Otaris) has joined #lw-gpt 09:06 » MarkOtaris is "@mark-otaris:matrix.org" on #lw-gpt 11:35 < nshepperd2> @gwernpaste One has no choice but to press the slithery ass 11:35 < gpt2> <@gwern> 'One has no choice but to press the slithery ass to those on the wings. One's journey is one's own; one cannot hope to think on the wings of men on whom one makes His wishes known; one cannot hope to think on the wings of men who cannot change their slitheriness for the will of God! The Savior found the name of God a few days before his death, which was a particularly horrible day, and but a day that would never be fulfilled if it did not re-a… 11:35 < gpt2> …rrange his lives, unless God was rather sharp than pleased with His empire. If God had ... 11:35 < gpt2> <@gwern> ... expected all His word to be in Jesus he would not have attempted to hide His persecutions and captivity by his guards. By the time He has come to the truth, His sense of loyalty has found places of refuge; His soul is firmly bound up, and the confidence of a thousand years is such as to avoid any bloodshed, though possible. 
It is not difficult to be—perhaps even to form, even to admit—indulgent in the dogma of violence, which is so ... 11:35 < gpt2> <@gwern> ... counter-punchy as to almost be be an occasion for resistance. Or to put it a little differently: it is too bad, because, if Jesus were to return, God would do much. That said, the King of kings would have a shooting war with our king, and such a war is not a small matter. Then there will be no battle. All the things God does by the mercy of His love, by the mercy of His honor, all the things His scripture says of His enemies, will be as t… 11:35 < gpt2> …hat... 11:35 < gpt2> <@gwern> ...that is their bed-head and not with the tail or the stomach.” The ancient Britons had a less moral sense. They were less suspect of Christians after Jesus returned. But they had still a general sense of social anxiety, because if Christianity had not been the source of their hope, then life would not have been so kind and gentle in their present state. The ancient Britons were a strange people who had ... 11:35 < gpt2> <@gwern> ... found that they were only half of what they had hoped for. In the end, the philosophy of the world would have been better than death. And yet they had not tried to change things at all, and their lives would have been very hard in the meantime. There were many scholars who would have thought of Jesus, the point of their efforts was not to change things, but to make a reality out of what had then happened. “I would like a... 11:35 < gpt2> <@gwern> ...better Bible,” said they. “What have we got?” They looked at the looker in the hermetically sealed camera. “Let’s see that,” said he in an overworked voice, looking as if he had only seen it from a different perspective.' 
https://www.newyorker.com/culture/food/the-potential-for-increased-danger-of-choking 11:36 < gpt2> The Potential for Increased Danger of Choking | The New Yorker (One of the most persistent problems with magazines that celebrate good words is what happens to the fine print. A print such as A.P. issues is more than a kind of id that politely hints at a preferred context; its message relies on idealistic approval. Though it’s edited by a lot of people, it is almost as ... 11:37 < nshepperd2> https://www.newyorker.com/culture/food/the-potential-for-increased-danger-of-choking 11:37 < gpt2> The Potential for Increased Danger of Choking | The New Yorker (A writer writing a story about the dangers of mushrooms is doing his job right, though.) 11:37 < nshepperd2> I see 11:38 < nshepperd2> .wp choking accidents in theology 11:38 < gpt2> nshepperd2: https://en.wikipedia.org/wiki/Choking_accident Choking accident is a life-threatening event that occurs in the context of a potentially violent situation where one or more of the following may occur: A person in an unoccupied vehicle, such as a motorcycle or a bicycle, has been ejected from a vehicle and is restrained by an enclosed or open vehicle; or 11:40 < nshepperd2> .g the danger of mushrooms 11:40 < gpt2> nshepperd2: https://www.newyorker.com/magazine/2019/11/28/venezuelan-history-mystery-theos-bicycle-film-le-casio 11:41 < nshepperd2> https://www.newyorker.com/magazine/2019/11/28/venezuelan-history-mystery-theos-bicycle-film-le-casio 11:41 < gpt2> American Travel and Architecture: A History of Esquivelátote(Istanbul) by Ahmet Kahraman - The New Yorker (Herbert Zjacksi, a famous American journalist who traveled extensively in Azov) 12:07 » shawwn (~shawwn@47-40-52-137.dhcp.stls.mo.charter.com) has joined #lw-gpt 12:07 » shawwn is "shawwn" on #lw-gpt #lesswrong 12:45 » shawwn_ (~shawwn@mobile-166-175-59-168.mycingular.net) has joined #lw-gpt 12:45 » shawwn_ is "shawwn" on #lw-gpt #lesswrong 12:47 » shawwn_ is now known as 
shawwn 13:45 » shawwn (~shawwn@47-40-52-137.dhcp.stls.mo.charter.com) has joined #lw-gpt 13:45 » shawwn is "shawwn" on #lw-gpt #lesswrong 14:06 < nshepperd> hmm 14:07 < nshepperd> http 14:07 < nshepperd> .spacex 14:07 < gpt2> nshepperd: http://t-4hze.tumblr.com/post/16535798327/All-you-need-to-know-about-being-a-moron 14:07 < nshepperd> lol 14:07 < nshepperd> http 14:07 < gpt2> nshepperd: https://www.spacex.com/webcast/webcastid/328322709 You can watch the SpaceX Earths Net S3 coverage live on SpaceX.com: http://www.spacex.com/webcast/timeline <--- The SpaceX webcast begins at 8:30 a.m. PDT (11:30 p.m. EST) 14:08 » shawwn (~shawwn@47-40-52-137.dhcp.stls.mo.charter.com) has joined #lw-gpt 14:08 » shawwn is "shawwn" on #lw-gpt #lesswrong 14:50 » shawwn (~shawwn@47-40-52-137.dhcp.stls.mo.charter.com) has joined #lw-gpt 14:50 » shawwn is "shawwn" on #lw-gpt #lesswrong --- Day changed Fri Nov 29 2019 01:59 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 01:59 » shawwn is "shawwn" on #lw-gpt #lesswrong 02:52 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 02:52 » shawwn is "shawwn" on #lw-gpt #lesswrong 06:40 » feep_ (~feep@200116b8261640003c93cbc4c1d3fdef.dip.versatel-1u1.de) has joined #lw-gpt 06:40 » feep_ is "Feep" on #lw-gpt #lesswrong 09:00 » feep_ (~feep@200116b826164000947191233c4b2a8d.dip.versatel-1u1.de) has joined #lw-gpt 09:00 » feep_ is "Feep" on #lw-gpt #lesswrong --- Log closed Fri Nov 29 10:45:29 2019 --- Log opened Fri Nov 29 10:45:49 2019 10:45 » gwern (~gwern@wikipedia/Gwern) has joined #lw-gpt 10:45 » Irssi: #lw-gpt: Total of 15 nicks (1 ops, 0 halfops, 0 voices, 14 normal) 10:46 » Irssi: Join to #lw-gpt was synced in 29 secs 11:31 < feep_> gwern: it refuses to go below 2.8ish 11:31 < feep_> it keeps dicking around in the 2.9/2.8 region 11:31 » feep_ is now known as feep 11:31 < gwern> patience. 
theory assures us that gradient descent can find a training loss of 0 given enough training time and lr decay :) 11:32 < feep> XD 11:32 < feep> it's been a week... 11:35 < feep> oh also it crashed, nvm, check back tomorrow 11:36 < gwern> feep: if it makes you feel better, shawwn has got TPUv2 swarms mostly working up to like 80 and also on-TPUv3 optimization for a 3-5x gain, so our 1.5bs should start training much faster soon 12:02 < feep> well, yours :p 12:03 < feep> but yay~ 12:06 * gwern notes loftily that perhaps they can train feep's little thing after more important tasks 12:23 < feep> :p 12:24 < feep> oh god 12:24 < feep> I have read way way too much hentai 12:24 < feep> because [18:06] * gwern notes loftily that perhaps they can train feep's little thing 12:24 < feep> and my brain was immediately like, "g-gwern-sempai..." 12:26 < gwern> oh my 12:27 < feep> />. btw not sure if I have mentioned this to everyone yet but cdda is fucking amazing 12:28 < feep> in no small part because you can, in fact, play a catgirl 13:52 » gpt2 (~gpt2@185.212.170.229) has quit (Ping timeout: 268 seconds) 17:08 » nshepperd is "nshepperd" on #lw-bitcoin #lesswrong 17:08 » nshepperd (~nshepperd@185.212.170.229) has joined #lw-gpt 17:08 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 17:08 » gpt2 is "gpt2" on #lw-gpt 17:31 < nshepperd2> @gwernpaste Mozilla Firefox's latest and worst rendering bug 17:32 * nshepperd2 kicks gpt2 17:33 » gpt2 (~gpt2@185.212.170.229) has quit (Ping timeout: 240 seconds) 17:36 < nshepperd2> fascinating, it appears be stuck in an endless loop 17:38 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 17:38 » gpt2 is "gpt2" on #lw-gpt 17:38 < nshepperd2> @gwernpaste Mozilla Firefox's latest and worst rendering bug 17:39 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 17:39 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 17:39 » gpt2 is "gpt2" on #lw-gpt 17:39 < nshepperd2> .wp mozilla firefox 17:39 < gpt2> dy I couldn't believe 
that they actually care about continuity but the saltbm-rinse901 will probably be listed next week if this fails again 17:40 < nshepperd2> .wp mozilla firefox 17:40 < gpt2> ww I wanna post next week on my site where all my exploration on the current spacefm will be done 17:40 < nshepperd2> oh, i restarted the server from the wrong goddam checkpoint 17:40 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 17:40 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 17:40 » gpt2 is "gpt2" on #lw-gpt 17:41 < nshepperd2> .wp mozilla firefox 17:41 < gpt2> nshepperd2: https://en.mozilla.org/en-US/firefox/ Show less 17:41 < nshepperd2> @gwernpaste Mozilla Firefox's latest and worst rendering bug 17:41 < gpt2> <@gwern> 'Mozilla Firefox's latest and worst rendering bug. It’s a code-level bugset which prevents validating all HTML documents that contain Java’s “Java On Hand� reference. But if this causes your site to be slow in some way, you might also see slowdowns elsewhere, on sites you care about. A typical experience with HN comments includes a comment which sits in <#calc> for... 17:41 < gpt2> <@gwern> ...years, with the comments scrolling out of the view. WF has checked for these bugs and they all stack up, so not all of them�s detected yet.' https://news.ycombinator.com/item?id=18632748 oh yeah, wtf 17:41 < gpt2> Java - The only programming language that gets done | Hacker News 17:42 < nshepperd2> seems legit 17:43 < nshepperd2> .wp too much hentai 17:43 < gpt2> nshepperd2: https://en.wikipedia.org/wiki/Too_much_hentai There is a great deal of mindless pornography on the Internet. Titles of all types of pornography are covered here. 17:43 * nshepperd2 pats the bot 17:45 < gwern> nshepperd2: wrong checkpoint - how typical! 
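(Editorial aside: the wrong-checkpoint restart ribbed above is one of the easiest training mistakes to guard against mechanically. A minimal sketch, assuming TensorFlow-style `model.ckpt-<step>.index` files — the channel's actual server layout isn't shown in the log, so the naming scheme here is an assumption:)

```python
import os
import re
import tempfile

def latest_checkpoint(ckpt_dir):
    """Return the checkpoint prefix with the highest step number, or None.

    Assumes TensorFlow-style files named 'model.ckpt-<step>.index';
    a different layout would need a different pattern.
    """
    pattern = re.compile(r"^model\.ckpt-(\d+)\.index$")
    best_step, best_prefix = -1, None
    for name in os.listdir(ckpt_dir):
        m = pattern.match(name)
        if m and int(m.group(1)) > best_step:
            best_step = int(m.group(1))
            best_prefix = os.path.join(ckpt_dir, f"model.ckpt-{m.group(1)}")
    return best_prefix

# Demo with a throwaway directory: the newest step wins, not the
# lexicographically or most-recently-created file.
with tempfile.TemporaryDirectory() as d:
    for step in (1000, 20000, 5000):
        open(os.path.join(d, f"model.ckpt-{step}.index"), "w").close()
    print(latest_checkpoint(d).endswith("model.ckpt-20000"))  # True
```

Resolving the newest step explicitly, rather than hand-typing a path at restart, is exactly what removes the "restarted from the wrong goddam checkpoint" failure mode.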
18:29 < nshepperd2> yep 18:29 < nshepperd2> one of the top 100 ML training mistakes for sure 22:24 < gwern> feep: https://twitter.com/lemonodor/status/1197599674565156864 22:24 < gpt2> Andy Campbell on Twitter: "Utilizing pictures and video as tagging data in a solution for machine vision training. https://t.co/i1_sKXm6Bn� (image https://pbs.twimg.com/media/DojUi8xPZpRAAiS.jpg:large) 23:15 < nshepperd> excellent 23:16 < nshepperd> (I was briefly confused at why the twitter link title was wrong, before realising it was this channel with the fake feepbot. 10/10) 23:16 < gwern> me too --- Day changed Sat Nov 30 2019 06:39 » feep (~feep@200116b82677920005da17ba14ef0ecb.dip.versatel-1u1.de) has joined #lw-gpt 06:39 » feep is "Feep" on #lw-gpt #lesswrong 08:23 < feep> gwern: yeah this has been a flat line at 2.29 for the past 20k*8 samples 08:23 < feep> want a copy? I'm gonna stop. lowering lr does nothing. 08:48 » feep (~feep@200116b8267792003438b6cbc92e4e4a.dip.versatel-1u1.de) has joined #lw-gpt 08:48 » feep is "Feep" on #lw-gpt #lesswrong 08:54 < feep> gwern: gave him a poke about his loss values 09:14 < gwern> 2.29 sounds too high. you should be getting more than 0.01 improvement going from 345m to 1.5b 09:34 < feep> The arrests also come after another new law was passed by the Japanese parliament earlier this month, which gives officials greater powers to arrest suspects at their homes and businesses, including the right to search through employees' computers. The legislation is designed to prevent the government from monitoring activists' communications and public meetings. 
09:34 < feep> I think this is a great example of the gpt-2 swerve 09:35 < feep> it doesn't understand the difference between "pro government overreach" and "anti government overreach" 09:35 < feep> prevent and allow are structurally the same thing 09:36 < feep> gwern: I mean, yes, I wish I got less loss too :p 12:52 < feep> http://feep.life/~feep/output.txt weekend samples 12:52 < gpt2> Eusebius notes:~(k-args=12)/2 + threads+num=3) 12:52 < feep> rebooting into windows to play ~videogames~ 13:08 < feep> https://en.wikipedia.org/wiki/Liberal_xenophobia 13:08 < feep> Liberal xenophobia is a term coined by liberal German politician and author of Liberalism and the Constitution: The Fall of the West Hans-Georg Maassen (1952). 13:08 < gpt2> Liberalism, commonly known as liberalism in Western countries, is a political ideology that promotes individual and group rights and individuals' autonomy and freedom of thought and expression. The word is derived from Latin libera meaning freedom and is Latin for liberty or freedom. 13:13 < MarkOtaris> is this one able to do translation? :-) 13:14 < MarkOtaris> To be, rather than to seem = Esse quam videri 13:14 < MarkOtaris> thus always to tyrants = Sic semper tyrannis 13:14 < MarkOtaris> freedom = 13:15 < gwern> hard to say. the zero-shot capabilities ought to be eroded by the IRC finetuning, but feep hasn't gotten very far in the finetuning 13:16 < MarkOtaris> I wonder if it'd look at the backlog... 13:16 < MarkOtaris> gpt2: come back 13:16 < gpt2> A year or two ago, it was discussed: https://www.davemazata.net/immigration/visa-statement/on-israel 13:17 < gpt2> Official Immigration statement on Israel | davidmazata.net 13:17 < MarkOtaris> freedom = 13:17 < gpt2> nothing is ever full of meaning 13:17 < MarkOtaris> gpt2: go away 13:17 < gpt2> Shutting up... 13:32 < feep> I gave it a week and it didn't do anything. 
13:32 < feep> "hasn't gotten very far" I suspect misdiagnoses the issue 13:33 < gwern> feep: well, keep in mind, we need to throw like 30 TPUs at it in order to make progress even like 0.1 every few hours 13:33 < feep> okay yeah like, 0.1 every few days seems plausible 13:34 < gwern> 1.5b is big 13:36 < feep> can confirm that much 13:36 < feep> tpus for gpu prices when :( 13:41 < nshepperd> MarkOtaris: I could add a line completion command, i think that would do what you want 13:43 < gwern> feep: as I said, we've still got a good 10 days of TFRC credits for our 100 TPU swarm. at this rate gpt-2-1.5b-poetry should finish in a day or two, subredditsimulator similarly, and that leaves almost a week I have nothing particularly planned for (I can't train GPT-2-1.5b on my ABC music datasets because it's too big and would overfit drastically) 13:45 < feep> okay yeah at this point I do kinda wanna know what the "true loss" for that data set is 13:46 < feep> have we found a proper way to get homeostasis to the homeostasis of the earth? 13:46 < feep> I have pondered the idea that we could introduce some form of global telepathic network for the purpose of bringing in orders and disturbances of the planets, so that the earth will send orders to the other planets, and so on 13:46 < feep> like, if the earth is not in full harmony with the other planets, it will send them the idea of getting the earth back in full harmony with the other planets 13:46 < feep> so, I wonder if such a network is possible? 13:46 < feep> that's a lot of effort. 13:49 < feep> 'Titus Andronicus: #27 at the IMDb Top 250' http://www.imdb.com/title/tt1377698/top/?ref_=tt_fc_top_250 13:49 < gpt2> Cameron Crowe - Who you Gonna Name (2003) - IMDb (He sat at his desk and did his work with disgust at the iniquity of a pruss judge; a fierce haggard told he had found a... 13:50 < feep> 'Gigabyte Research has published the results of a study on the performance of GPUs in GPU-accelerated deep learning. 
The study was conducted on the NVIDIA Tesla K80 and Tesla V100 GPUs using NVIDIA PyTorch and NVIDIA CUDA libraries. In this study, we examined the performance of GPUs in the following tasks: 1) Deep Learning Ensemble for Image, Time-Domain and Markov Decision Processes (DT-IDP) using NVIDIA Tesla K80 13:50 < feep> GPUs and NVIDIA Tesla V100 GPUs, 2) Deep Learning Ensemble for Image, Time-Domain and Markov Decision Processes (DT-IDP) using NVIDIA Tesla K80 GPUs and NVIDIA Tesla V100 GPUs, and 3) Deep Learning Ensemble for Image, Time-Domain and Markov Decision Processes (DT-IDP) using NVIDIA Tesla K80 GPUs and NVIDIA Tesla V100 GPUs. This study found that the Tesla K80 and Tesla V100 GPUs are about 6x faster than the Tesla K80 GPUs 13:50 < feep> and 9x faster than the Tesla K80 GPUs, respectively. The performance of Tesla K80 GPUs is about 2.5x slower than that of the Tesla K80 GPUs, and Tesla K80 GPUs are more than 10x slower than the Tesla V100 GPUs in these workloads. In general, the performance of the Tesla V100 is about 1.3x slower than that of the Tesla V100 GPUs, and the Tesla V100 is about 4.7x slower than the Tesla V100 GPUs. This study reports that the 13:50 < feep> performance of the Tesla V100 is about 16x faster than the Tesla V100 GPUs. The NVIDIA Tesla V100 GPUs perform about 4.8x better in these benchmarks than the Tesla V100 GPUs.' http://www.deepessay.com/s/nv-hrc-t-deep-learning-compared-gpu-comparison-tesla-k80-vs-tesla-v100 13:50 < feep> title: NVIDIA HRC T-Deep Learning Compared with Tesla K80 & Tesla V100 13:50 < feep> that's how benchmarking works~ 13:50 < gpt2> NVIDIA's Tesla K80 GPU was about three times as fast as the Tesla V100's. - Deep Knowledge® (NVIDIA has officially announced today that the company's new Quadro M40 GPUs are based on the Tesla K80 GPUs, the most powerful GPUs currently available from their GPU products.  Tesla on the other hand just announced today that its new M40 GPUs are ...) 
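The flat line at 2.29 that feep reported above is a textbook loss plateau, and gwern's comment gives a concrete threshold (345M → 1.5B should buy more than 0.01). A minimal detector for that situation could look like the sketch below; this is a hypothetical helper, not the channel's actual training script, and all names in it are invented for illustration.

```python
# Hypothetical plateau check for a flat training loss like feep's 2.29 run;
# not the actual training code discussed in the log.
def plateaued(losses, window=100, tol=0.01):
    """True when the mean loss over the last `window` steps improved by less
    than `tol` versus the window before it (0.01 being the improvement gwern
    says a 345M -> 1.5B jump should comfortably exceed)."""
    if len(losses) < 2 * window:
        return False  # not enough history to compare two windows
    prev = sum(losses[-2 * window:-window]) / window
    last = sum(losses[-window:]) / window
    return (prev - last) < tol

# A flat line at 2.29, like the run described above:
print(plateaued([2.29] * 400))  # -> True
```

With a check like this, "lowering lr does nothing" becomes measurable: if the loss is still flagged as plateaued a window or two after an LR change, the change did nothing.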
13:51 < feep> https://www.youtube.com/watch?v=yQ8sFfnsCIM 13:51 < feep> video: Tenacious D - The Tragic Art of Giving a Fuck (MAD TV) (length: 03:40, views: 1,389, rating: 5.0, posted: 2013-10-15) 13:51 < gpt2> Modern Ed (Interview) - YouTube (BBC One broadcasts an interview with the world's most advanced technologists about the future of technology from their unlikely years. Listen to what they tell us and read their papers in our magazine: http://bit.ly/MSN Ed) 13:52 < feep> The blood is gone 13:52 < feep> I had it on my dresser 13:52 < feep> I had it on my bed 13:52 < feep> But it's gone 13:52 < feep> Wait, where'd it go? 13:52 < feep> O_o 13:52 < feep> gpt-obormot is scary 13:54 < feep> https://medium.com/@gwern/alex-olshevsky-explores-a-rationality-network-for-autists-a8dd2a9b4b2f 13:54 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 13:54 < feep> Alex O. Shevovski on the Rationality Network for Autists: A Look at Its Aims and Benefits - Medium (I’ve been working on an algorithm for the rationality network for autists (A.A.H.) for the past year. I’ve been developing it in the hopes of building a tool for A.A.H. that can allow autists to reduce, clarify, and improve their own social and emotional…) 13:54 < feep> useful! 13:54 < feep> "I love it when people tell me to get over it. I don’t know what the other person is on, but I don’t need to know." 
13:54 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 13:54 » gpt2 is "gpt2" on #lw-gpt 13:55 < feep> oh god, search for "swole gwern" 13:56 < nshepperd> @complete to be, rather than to seem = Esse quam videri; thus always to tyrants = Sic semper tyrannis; freedom = 13:56 < gpt2> to be, rather than to seem = Esse quam videri; thus always to tyrants = Sic semper tyrannis; freedom = Violence; comfort = Big Brother; peace = Heaven; Jove = Our Great Creator = In God 13:56 < feep> let me paste it in full with slight compressions 13:56 < feep> nshepperd: swole gwern is swole gwern 13:56 < feep> swole gwern is swole gwern (x8 -ed) 13:56 < feep> I'm wondering if there's a practical use for swole gwern's gwern 13:56 < feep> swole gwern is swole gwern (x3 -ed) 13:56 < feep> ooh 13:56 < feep> swole gwern is swole gwern 13:56 < feep> swole gwern is skilful. 13:56 < feep> swole gwern is skilful (x4 -ed) 13:56 < feep> swole gwern is a gwern. 13:56 < feep> A gwern gwern gwern 13:56 < feep> A gwern gwern gwern 13:56 < feep> A gwern gwern gwern 13:56 < feep> A gwern gwern gwern 13:56 < feep> A gwern gwern gwern gwern 13:56 < feep> A gwern gwern gwern gwern 13:56 < feep> A gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern 13:56 < feep> A gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern 13:56 < feep> gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern 
gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern 13:56 < feep> gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern gwern g 13:56 < feep> I guess you had to be there. 13:57 < nshepperd> hahaahhaha 13:57 < nshepperd> asssaåaaaasokafiojfsd 13:57 < nshepperd> haha 13:57 < gwern> feep: I think we can safely assume I was there 13:57 < nshepperd> what a lovely poem 13:57 < gwern> 'Gwern, the majestical' 13:58 < nshepperd> @complete to be, rather than to seem = Esse quam videri; thus always to tyrants = Sic semper tyrannis; freedom = 13:58 < gpt2> to be, rather than to seem = Esse quam videri; thus always to tyrants = Sic semper tyrannis; freedom = iaf as we said.. what did you expect? 13:58 < gwern> really nshepperd what did you expect from that completion 13:58 < nshepperd> damn, gpt2 has words for me 13:59 < nshepperd> MarkOtaris: as you can see, in gpt2-latin, freedom is violence, and it will brook no argument 14:00 < gwern> freedom is license to anarchy 14:00 < gwern> the hobbesian war of all against all! 14:01 < MarkOtaris> @complete freedom = slavery; war = peace; ignorance = 14:01 < gpt2> freedom = slavery; war = peace; ignorance = success 14:01 < MarkOtaris> seriously? if it wasn't able to get that one right, it'll never be able to do translations 14:02 < gwern> 'you stupid monkey!' 14:02 < gwern> MarkOtaris: did you test these on a baseline 1.5b like ttt? maybe these particular zero-shots don't work 14:03 < MarkOtaris> I'll try... 14:03 » feep_ (~feep@200116b82680140060a6ed13d361b5dd.dip.versatel-1u1.de) has joined #lw-gpt 14:03 » feep_ is "Feep" on #lw-gpt #lesswrong 14:04 < nshepperd> maybe finetuning on irc has made it illiterate 14:06 < feep_> title: SteinBots - the best way to make dildos without silicone | steinbots.com 14:06 < feep_> Yeah. 14:06 < feep_> Just make it look like a penis, and it should work. 
14:06 < nshepperd> makes sense 14:06 < feep_> Well, I could wait for the third-party companies to offer their own silicone moulding technologies. 14:06 < feep_> SDr: I should probably keep my penises safe in case of attacks. 14:06 < feep_> Wolvenreign, so you're thinking of the "pre-molded penises" toys? 14:06 < feep_> I mean, what with my weakness for flesh. 14:07 < feep_> ooh brutal 14:07 < feep_> I mean, you have a choice. 14:07 < feep_> You can stay put, with a life that is good enough for you. 14:07 < feep_> or 14:07 < feep_> you can move out of your parents' house. 14:07 < feep_> That's a choice. 14:07 < feep_> that hits hard 14:07 » feep_ is now known as feep 14:07 < nshepperd> hah 14:08 < feep> I think the (likely) optimal course of action is "go to university and get a degree" 14:08 < feep> also it's not like being able to go to college will necessarily result in a life that is worth living. 14:08 < feep> not without a lot of help, sure, but you do not want to be the sort of person who can't make that choice because you're too smart and stable 14:08 < feep> I can go to college and then get a master's in CS and then get a decent job with a decent salary and then not worry about it 14:08 < feep> but that isn't something I would want 14:08 < feep> because if I'd want that, I would already have the degree 14:08 < feep> hm, gpt-feep had a very different life than me 14:09 < feep> I think the optimal thing for me would be to get a masters and then finish my degree and then move to the bay area 14:09 < feep> to be closer to my friends and family 14:09 < feep> and get a job in the bay area 14:09 < feep> but that's not something I'll do 14:09 < feep> "what's a germany? is that on the west coast?" 
--gpt2 14:09 < gwern> your spiritual friends and family 14:10 < feep> http://www.bbc.com/news/world-africa-37972671 14:10 < feep> title: BBC News - 'In a brutal attack, six attackers killed two of the men in the capital of the Democratic Republic of Congo's Katanga province - and another three injured a tribal elder - and stole his car and stole some cash and cattle' 14:10 < gpt2> Black Africans in Eritrea raped - BBC News (Germany says all three black African countries share borders.) 14:10 < feep> that just unescalated 14:11 < feep> reescalated 14:12 < feep> Does anyone know how to modify the move-to-nest algorithm to be recursive? I'm struggling with it, it seems to be getting better but the original algorithm seems to give more holes than there should be 14:12 < feep> DEA7TH: Give up, it's too hard and you will fail 14:12 < feep> brutal putdown 14:13 < gwern> yet, how accurate 14:13 < feep> 'But why would we choose to pay more, and at a time when an increasing number of serious economic, environmental, and social problems are consuming the world’s attention and resources?” I had a good question, and I saw no better answer than this. “Why would you choose that?” He said, “We are using money to buy the greatest good for the greatest number of people.” I told him that “greatest good” does 14:13 < feep> not exist in the world. “If we were to try to get it, and the money were not going to people who could not buy it, then it would not be a good thing.” I asked him if there was a concept of “moral good.” He told me that “the right thing would be to try to do the greatest good we could for everyone.”” I asked him, “If there was no concept of “moral good,” what would we’re supposed to do? 
He told me that 14:13 < feep> the thing to do would be “to do whatever was best for the highest number of people.” I replied, “You can’t do that!” He told me that he believed that we were creating a “moral dilemma,” which had to be resolved, and that this dilemma was to be “resolved through the means of rational and honest thinking.” I told him that “a person’s capacity for... 14:13 < feep> ...moral reasoning is not very great.” He said that it was the opposite. “It’s the people’s capacity to think rationally.”' https://www.newyorker.com/magazine/2019/02/08/republican-cartel-machine 14:13 < feep> that's ... oddly close to EA 14:13 < gpt2> A Republican Conspiracy to Steal the Votes - The New Yorker (A riot at a Philadelphia polling place is the most memorable example of what happens when candidates shut down the vote.) 14:14 < gwern> feep: yeah, the fact that the feepbot slugs don't match the quotes is a good example of how this is untrained 14:15 < nshepperd> i think in this case it's the url that didn't match 14:15 < nshepperd> it should definitely be able to do better with more training 14:15 < feep> yeah the is off sometimes 14:15 < feep> ... 14:15 < feep> the %B is off sometimes 16:10 < saturn2> @gwernpaste The Rationality Network for Autists 16:10 < gpt2> <@gwern> 'The Rationality Network for Autists can currently cover up to four hours per week. We can't hide behind the cover, and we definitely can't answer questions. So how do we get a handle on the deep, deep questions about what's going on? We ask open... 16:10 < gpt2> <@gwern> ...questions! Like, can that be PTSD? Can that be white drinking? Can that be more frequency of dick-measuring? Can that be "fix yourself" too? Does people who are bullied by someone emotionally controlling their beliefs? Can it be "love for my parents" too? Most of these questions are surprisingly difficult for people. We know that you can't come up with answers to them... 
16:10 < gpt2> <@gwern> ...this kind of shit in just a couple of hours, and at that point you're either helping a genuine problem or you're helping your own. The correct answer to all these questions, when you need to make a decision about what therapy to do next, is to look in the mirror. The right answer is more than one line, doesn't it? :)' https://www.huffingtonpost.com/entry/antidepressant-defeat-soylent-hacking-2017-05011871e2ac20d5533 16:10 < gpt2> So-Cal Dieting | Eater (Exercise. Eat plenty of protein. Don’t burn money on bacon. Baking with canned goods may sound like a strain.) 16:11 < gwern> gee, maybe I should sign up for this 16:20 » feep (~feep@200116b826801400e1a6e26f92b723aa.dip.versatel-1u1.de) has joined #lw-gpt 16:20 » feep is "Feep" on #lw-gpt #lesswrong 21:41 » shawwn is "shawwn" on #lesswrong 21:41 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 23:09 < MarkOtaris> https://www.wired.com/2019/11/scott-adams-has-some-ideas-for-a-calmer-internet 23:09 < gpt2> Scott Adams Has Some Ideas For a Calmer Internet | WIRED (He’s got ideas for a quiet internet, chock full of cold comfort, and a sense of humor.) 23:09 < MarkOtaris> not bad... 23:11 < nshepperd> sounds good to me 23:12 » nshepperd2 is "Nobody" on #lesswrong 23:12 » nshepperd2 (~nshepperd@gorgon.zlkj.in) has joined #lw-gpt --- Day changed Sun Dec 01 2019 06:30 » feep (~feep@200116b826d64d00cdcd0c094d66e901.dip.versatel-1u1.de) has joined #lw-gpt 06:30 » feep is "Feep" on #lw-gpt #lesswrong 08:35 < namespace> Fooled me and got me to click the link. 
08:35 * namespace sighs 08:49 » feep (~feep@200116b826d64d00804a3f201df22af4.dip.versatel-1u1.de) has joined #lw-gpt 08:49 » feep is "Feep" on #lw-gpt #lesswrong 10:55 < kuudes> the near future: content creators spam links with ml generated fake clickbait headlines and implement those which gather good clicks on testing 12:18 < nshepperd2> Hah 18:36 » Fusxfaranto (~Fusxfaran@c-69-181-6-254.hsd1.ca.comcast.net) has joined #lw-gpt 18:36 » Fusxfaranto is "Fusxfaranto" on #lesswrong #lw-gpt 23:37 < quanticle> @complete 'twas brillig in the slithy toves 23:37 < gpt2> 'twas brillig in the slithy toves. 'To the slithies, this channel exists for you, as well as a dozen other bastards. We’re perfectly happy to accept your false labels. My true and honest labels are: eg: lee haue cru' 23:37 < quanticle> Hahaha 23:42 < nshepperd> lol 23:42 < nshepperd> @complete Blood for the blood god, 23:42 < gpt2> Blood for the blood god, some feminists 23:43 < nshepperd> ... 23:45 < nshepperd> @complete "I don't know what's more sad, that every man has his price, or that 23:45 < gpt2> "I don't know what's more sad, that every man has his price, or that nobody ever learns." 23:46 < nshepperd> that's too coherent, gpt2 23:48 < nshepperd> @complete "The four building blocks of the universe are fire, water, gravel 23:48 < gpt2> "The four building blocks of the universe are fire, water, gravel and ice." aaaa 23:48 < nshepperd> @complete "Cleanliness is next to 23:48 < gpt2> "Cleanliness is next to Godliness, is it not?" 23:49 * nshepperd gives gpt2 a gold star 23:49 < nshepperd> @complete "Godliness is next to 23:49 < gpt2> "Godliness is next to mercy, is it not?" 23:49 < nshepperd> @complete "Mercy is next to 23:49 < gpt2> "Mercy is next to vigor, is it not?" 23:49 < nshepperd> @complete "Vigor is next to 23:49 < gpt2> "Vigor is next to amusement, is it not?" 23:49 < nshepperd> @complete "Amusement is next to 23:49 < gpt2> "Amusement is next to laughter, is it not?" 
23:49 < nshepperd> @complete "Laughter is next to 23:49 < gpt2> "Laughter is next to fast food, is it not?" 23:50 < nshepperd> 6 degrees of separation between cleanliness and fast food 23:50 < nshepperd> @complete Two wrongs do not make 23:50 < gpt2> Two wrongs do not make a right 23:51 < saturn2> @complete life is a bitter 23:51 < gpt2> life is a bitter mystery... 23:52 < nshepperd> :O 23:54 < gwern> even gpt2 knows that --- Day changed Mon Dec 02 2019 00:44 < shawwn> @complete One plus one is 00:44 < gpt2> One plus one is a pretty large difference 00:44 < shawwn> @complete The best learning rate for GPT-2 is 00:44 < gpt2> The best learning rate for GPT-2 is, almost certainly, 1b/t 00:45 < shawwn> @complete How many hours does it take to fine-tune GPT-2 1.5B? 00:45 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 00:45 < shawwn> Hahahahaaaaaa 00:45 < shawwn> I’m sorry for breaking the bot. 00:46 < saturn2> well at least you've empirically verified that gpt2 is not capable of self-improvement yet 00:51 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 00:51 » gpt2 is "gpt2" on #lw-gpt 00:51 < nshepperd> O_O 00:52 < nshepperd> @complete I have only proved it correct, I have not 00:52 < gpt2> I have only proved it correct, I have not verified the incorrectness yet 00:52 < nshepperd> that's usually how it goes 00:53 < saturn2> @complete the worst thing 00:53 < gpt2> the worst thing that could happen is that you get a bad but not irrecoverable disease from one of the right... three... four... 
00:53 < saturn2> @complete the best thing 00:54 < gpt2> the best thing that could happen is that it causes all of the other wrongs 00:54 < saturn2> uh huh 00:54 < saturn2> @complete the cybertruck 00:54 < gpt2> the cybertruck pusillanimity or the komodo dragons (the Japanese who went to earth and got turned into frogs) are more likely than just one of those 00:55 < saturn2> @complete the cybernetic 00:55 < gpt2> the cybernetic people are like, "fuck that" 00:56 < saturn2> @complete my favorite 00:56 < gpt2> my favorite example is who thinks slavery is a good idea, but is trying to enslave a little child to death because he didn't have the proper clothes 00:56 < saturn2> :| 00:56 < nshepperd> of course 00:57 < nshepperd> @complete When nothing can possibly go wrong, 00:57 < gpt2> When nothing can possibly go wrong, enslave the little boy and make money by any means necessary 00:57 < nshepperd> oh no 00:57 < saturn2> @complete slavery is 00:57 < gpt2> slavery is a horrible thing 00:58 < saturn2> well okay 00:58 < saturn2> @complete i feel like 00:58 < gpt2> i feel like a lot of people should write about this 00:58 < saturn2> @complete the best text editor is 00:58 < gpt2> the best text editor is emacs 00:58 < saturn2> smart bot 00:59 < saturn2> @complete the best programming language is 00:59 < gpt2> the best programming language is python 00:59 < saturn2> >:| 00:59 < nshepperd> @complete no, the best programming language is 00:59 < gpt2> no, the best programming language is ruby 00:59 » shawwn is "shawwn" on #lesswrong 00:59 » shawwn (~shawwn@71-15-8-131.dhcp.stls.mo.charter.com) has joined #lw-gpt 00:59 < saturn2> >>:| 00:59 < nshepperd> :S 00:59 < shawwn> :/ 00:59 < saturn2> @complete the best operating system is 00:59 < gpt2> the best operating system is xmonad 01:00 < nshepperd> lol 01:00 < saturn2> hmm 01:00 < shawwn> @complete How many hours does it take to train GPT-2 1.5B? 
01:00 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:00 < shawwn> okay I won’t do that again 01:00 < nshepperd> @rofl 01:00 < saturn2> lmao 01:00 < shawwn> I thought maybe it was fixed :( 01:00 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:00 » gpt2 is "gpt2" on #lw-gpt 01:01 < saturn2> @complete the best linux distribution is 01:01 < gpt2> the best linux distribution is this bam 01:01 < saturn2> @complete this bam is 01:01 < gpt2> this bam is on ubuntu 01:02 < saturn2> https://en.wikipedia.org/wiki/This_Bam 01:02 < gpt2> This Bam is a collaborative text document and wiki website written by Brandon Boich, the Sr. Managing Editor at MIT Technology Review and technology editor for Ars Technica. 01:03 < saturn2> @complete to quit vim, you simply 01:03 < gpt2> to quit vim, you simply do cdr, which will automatically switch the clipboard 01:03 < saturn2> @complete to quit vim, you simply type 01:03 < gpt2> to quit vim, you simply type quit 01:04 < saturn2> @complete to swim from london to new york, you must 01:04 < gpt2> to swim from london to new york, you must take a ferry 01:05 < saturn2> @complete GPT-2 01:05 < gpt2> GPT-2 is your friend 01:06 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:06 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:06 » gpt2 is "gpt2" on #lw-gpt 01:06 < nshepperd> it's not fixed, but it should at least not crash now 01:07 < nshepperd> @complete How many hours does it take to train GPT-2 1.5B? 01:07 < saturn2> @complete How many hours does it take to train GPT-2 1.5B? 01:07 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:07 < nshepperd> welp 01:07 » gpt2 is "gpt2" on #lw-gpt 01:07 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:07 < saturn2> but... why? 01:07 < shawwn> the questionmark 01:07 < shawwn> @complete you okay? 01:07 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:07 < shawwn> yeah. 
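The diagnosis above is that prompts ending in a question mark kill the bot process, and nshepperd's "it should at least not crash now" implies the fix was to guard the model call rather than cure the underlying bug. A sketch of that kind of guard, under the assumption it was ordinary exception handling; the bot's real code is not in the log, and the function names here (`complete_text`, `handle_complete`) are invented:

```python
# Hedged sketch of a crash guard for the @complete handler. The real gpt2
# bot's code is not shown in the log; complete_text below is a stand-in
# that fails on "?"-terminated prompts the way the real backend did.
def complete_text(prompt: str) -> str:
    # Stand-in for the model call, which raised on some inputs.
    if prompt.strip().endswith("?"):
        raise RuntimeError("backend died")
    return prompt + " ..."

def handle_complete(prompt: str) -> str:
    """Return a completion, or an apology, instead of killing the process."""
    try:
        return complete_text(prompt)
    except Exception:
        # Swallow the failure so the IRC connection survives; the model
        # call itself is still broken ("it's not fixed").
        return "(completion failed, try again)"

print(handle_complete("the best text editor is"))  # normal path
print(handle_complete("you okay?"))  # would previously have crashed the bot
```

This matches the observed behaviour later in the log: the guarded bot still fails on the same prompts, but the channel sees a quiet failure instead of a quit/rejoin cycle.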
01:07 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:07 » gpt2 is "gpt2" on #lw-gpt 01:08 < saturn2> @gwernpaste How many hours does it take to train GPT-2 1.5B? 01:08 < gpt2> <@gwern> 'How many hours does it take to train GPT-2 1.5B? About... 01:08 < gpt2> <@gwern> ...3 hours and 35 minutes per day, a week.' http://www.neusinerk.com/neusef-biological-agent/chapter-25-quantitative-fitness-calorie-lab-07/ 01:08 < gpt2> NEU Genetic Agent Chapter 25: Quantitative Fitness-calorie lab 01:08 < shawwn> @complete The number of hours that it takes to train GPT-2 1.5B is 01:08 < gpt2> The number of hours that it takes to train GPT-2 1.5B is one night 01:09 < shawwn> hah 01:09 < shawwn> I wish 01:09 < saturn2> @complete The number of hours that it takes to cook an egg is 01:09 < gpt2> The number of hours that it takes to cook an egg is one week 01:09 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:09 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:09 » gpt2 is "gpt2" on #lw-gpt 01:09 < nshepperd> @complete test? 01:09 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:10 » gpt2 is "gpt2" on #lw-gpt 01:10 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:11 < shawwn> @complete I can predict my own learning rate. The prediction is 01:11 < gpt2> I can predict my own learning rate. The prediction is correct so far. 01:11 < shawwn> wiggled out of that one. 
01:11 < saturn2> @complete 2 + 2 = 01:11 < gpt2> 2 + 2 = 4 + 2 = 2 01:11 < saturn2> @complete 4 + 2 = 01:11 < gpt2> 4 + 2 = 9 01:12 < saturn2> @complete .1d6 01:12 » gpt2 (~gpt2@185.212.170.229) has quit (Remote host closed the connection) 01:12 » gpt2 (~gpt2@185.212.170.229) has joined #lw-gpt 01:12 » gpt2 is "gpt2" on #lw-gpt 01:12 < saturn2> @complete gpt2's punctuation problem 01:12 < gpt2> gpt2's punctuation problem is much larger than either 01:12 < Fusxfaranto> lol 01:13 < shawwn> @complete The opposite of a statement is a 01:13 < gpt2> The opposite of a statement is a statement. 01:13 < shawwn> @complete The opposite of a statement is a 01:13 < gpt2> The opposite of a statement is a proposition. 01:13 < shawwn> @complete The opposite of a man is a 01:13 < gpt2> The opposite of a man is a quack is a queener is a quack is a quack is a quack 01:13 < shawwn> okay, 01:13 < Fusxfaranto> @complete saw this interesting article: http 01:13 < gpt2> saw this interesting article: http://lesswrong.com/lw/nq8/the_completeness_paradox_of_patternism/ 01:13 < gpt2> The Completeness Paradox of Patternism - Less Wrong 01:13 < saturn2> @complete the opposite of a king is a 01:13 < gpt2> the opposite of a king is a king 01:14 < Fusxfaranto> i love gpt2's hallucinated urls 01:14 < saturn2> @complete my most horrible opinion is 01:14 < gpt2> my most horrible opinion is that his pants are still too soft 01:14 < saturn2> :o 01:15 < saturn2> @complete i don't want to admit 01:15 < gpt2> i don't want to admit this, but it's my opinion 01:17 < saturn2> @complete by 2020, 01:17 < gpt2> by 2020, nanobots will be in our bloodstream so there will be no need for us to get immunized 01:17 < saturn2> nice 01:17 < shawwn> @complete