
[–]D-CrOqMVaAVcCVn-cQs- 250 points251 points  (56 children)

you'd think they'd actually drop the model before releasing the announcement

[–]lashman[S] 96 points97 points  (49 children)

they probably should have, lol

[–]Aggressive_Sleep9942 26 points27 points  (46 children)

My question is: is there an official announcement that explicitly says the model launches today? I don't think so.

[–]StickiStickman 74 points75 points  (34 children)

[–]TizocWarrior 19 points20 points  (8 children)

It will be available "in a week or so".

[–]Sixhaunt 14 points15 points  (6 children)

but it is available on their API and stuff already. They JUST said about 5 seconds ago in their discord "if you want the weights you'll have to wait about an hour"

[–]mysteryguitarmJoe Penna - Stability Staff 35 points36 points  (4 children)

Yeah, /u/lashman is posting the announcement about availability on API and Amazon Bedrock and whatnot.

Open source is gonna be avail in... about 30 minutes from when I'm posting this.


Out of hundreds of thousands of votes, here's how well it tested!

[–]gunnerman2 2 points3 points  (2 children)

Was that a blind sample?

[–]mysteryguitarmJoe Penna - Stability Staff 4 points5 points  (1 child)

Yeah, it was. Also randomizing A vs B position, randomizing hyperparams, etc.

[–]Severin_Suveren 3 points4 points  (0 children)

They confirmed on Github about 30min ago that it will be out in an hour

Source

[–]gigglegenius 10 points11 points  (1 child)

Yes on their Discord. There is an event in an hour, live with the StabilityAI team

[–]Vivarevo 7 points8 points  (0 children)

getting so many flashbacks to scammy shitcoin marketing megahype launches.

the trauma (because the shit hit the fan at launch)

[–]AlinCviv 16 points17 points  (4 children)

hilariously enough, the official announcement says it's released now,
but when asked, the dev said "in 2 hours"

[–]Tom_Neverwinter 17 points18 points  (2 children)

Probably uploading lol

Dl speed insane.

Ul speed trash.

Happens to the best of us

[–]hervalfreire 10 points11 points  (1 child)

someone forgot to tell the developer to upload the thing

[–]Erestyn 11 points12 points  (0 children)

They're hotspotting on their phone and just ran out of data.

F.

[–]D-CrOqMVaAVcCVn-cQs- 1 point2 points  (0 children)

think so

[–]lonewolfmcquaid 6 points7 points  (0 children)

that's stability...stabiliting? our bois doing us proud 😂😂

[–]Spyder638 91 points92 points  (14 children)

Sorry for the newbie question but I bet I’m not the only one wondering, so I’ll ask anyway:

What does one likely have to do to make use of this when the (presumably) safetensors file is released?

Update Automatic1111 to the newest version and plop the model into the usual folder? Or is there more to this version? I've been lurking a bit and it seems like there are more steps to it this time.

[–]red__dragon 37 points38 points  (6 children)

Update Automatic1111 to the newest version and plop the model into the usual folder? Or is there more to this version?

From what I saw from the A1111 update, there's no auto-refiner step yet, it requires img2img. Which, iirc, we were informed was a naive approach to using the refiner.

How exactly we're supposed to use it, I'm not sure. SAI's staff are saying 'use comfyui' but I think there should be a better explanation than that once the details are actually released. Or at least, I hope so.

[–]indignant_cat 6 points7 points  (5 children)

From the description on the HF it looks like you’re meant to apply the refiner directly to the latent representation output by the base model. But if using img2img in A1111 then it’s going back to image space between base and refiner. Does this impact how well it works?

[–]Torint 7 points8 points  (0 children)

Yes, latents contain some information that is lost when decoding to an image.

[–]maxinator80 2 points3 points  (3 children)

I tried generating in text2img with the base model and then using img2img with the refiner model. The problem I encountered was that the result looked very different from the intermediate picture. This can be somewhat fixed by lowering the denoising strength, but I believe this is not the intended workflow.

[–]smoowke 2 points3 points  (2 children)

So you'd have to switch models constantly?....hell...

[–]somerslot 21 points22 points  (0 children)

That should be enough, but you can watch the official announcement for more details, and I bet some SAI staff will come here to share some extra know-how after the official announcement is over.

[–]panchovix 82 points83 points  (15 children)

Joe said on Discord that the model weights will be out in 2.5 hours or so.

Edit: message https://discord.com/channels/1002292111942635562/1089974139927920741/1133804758914834452

[–]Kosyne 145 points146 points  (13 children)

wish discord wasn't the primary source for announcements like this, but I feel like I'm just preaching to the choir at this point.

[–]mysteryguitarmJoe Penna - Stability Staff 70 points71 points  (11 children)

New base. New refiner. New VAE. And a bonus LoRA!


Screenshot this post. Whenever people post 0.9 vs 1.0 comparisons over the next few days claiming that 0.9 is better at this or that, tell them:

"1.0 was designed to be easier to finetune."

[–]acoolrocket 8 points9 points  (0 children)

You're not alone. Discord servers hold so much information, but none of it is searchable or a quick Google search away; that's why Reddit exists.

[–]hervalfreire 30 points31 points  (9 children)

Since it's now confirmed it's 2 models (base + refiner) - anyone knows how to use the refiner on auto1111?

[–]Alphyn 25 points26 points  (0 children)

Unfortunately, the img2img workflow is not really how it's meant to work. It looks like the almost-finished image, with leftover noise, should be sent to the refiner while still in latent space, without being rendered to an actual image and then re-encoded back into latent space for the refiner. I've been using this workflow in ComfyUI, which seems to use the refiner properly and is also much faster than A1111, on my PC at least: https://github.com/markemicek/ComfyUI-SDXL-Workflow <-- Was made for 0.9, I'm not sure it works as intended with SDXL 1.0.

TLDR: steps 1-17 are done by the base model and steps 18-20 by the refiner.

If anyone knows better workflows, please share them. For the time being we'll have to wait for a better refiner implementation in Auto1111 and either use img2img or comfyui.

Edit: Oh, the official ComfyUI workflow is out: https://comfyanonymous.github.io/ComfyUI_examples/sdxl/ <--- After some testing, this workflow seems to be the fastest and gives the best results of the three.

Another WIP Workflow from Joe: https://pastebin.com/hPc2tPCP (download RAW, Rename to .json).
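For anyone curious what the "stay in latent space" handoff looks like outside ComfyUI, here's a minimal sketch using the diffusers library. This is my reading of the diffusers SDXL docs, not an official Stability workflow: the model IDs, the `denoising_end`/`denoising_start` parameters, and the 17/20 split are taken from the discussion above and may need adjusting.

```python
# Hand off after step 17 of 20, matching the "steps 1-17 base / 18-20 refiner"
# split described above, expressed as a fraction of the denoising schedule.
HANDOFF = 17 / 20

def generate(prompt: str, steps: int = 20):
    # Heavy imports live inside the function so the sketch can be read
    # (and sanity-checked) without diffusers or a GPU available.
    import torch
    from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # Stop the base model early and return latents instead of a decoded image...
    latents = base(
        prompt=prompt,
        num_inference_steps=steps,
        denoising_end=HANDOFF,
        output_type="latent",
    ).images

    # ...then let the refiner finish the remaining steps directly on those
    # latents, avoiding the lossy decode/re-encode round trip of img2img.
    return refiner(
        prompt=prompt,
        num_inference_steps=steps,
        denoising_start=HANDOFF,
        image=latents,
    ).images[0]
```

The A1111 img2img approach decodes to pixels and re-encodes between the two models, which is exactly the information loss this handoff avoids.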

[–]Touitoui 26 points27 points  (4 children)

Use the base model with txt2img, then run your image in img2img with the refiner, denoise set to 0.25.
The process will probably be made automatic later on.

[–]Ashken 1 point2 points  (1 child)

just out of curiosity, If I'm already using img2img, do I not have to worry about it at all?

[–]Touitoui 5 points6 points  (0 children)

From my understanding, the refiner is used to add details, and is mainly used for images generated with the base model. So it depends on what result you want.
If you use the base model with img2img and the result is good enough for you, you can stop there. Or maybe try to run the refiner to check if the result is better.
If you use the refiner on a "non-SDXL" image and the result is good, you're good to go too.

[–]wywywywy 11 points12 points  (0 children)

You run the result through img2img using the refiner model but with fewer sampling steps

[–]TheDudeWithThePlan 8 points9 points  (0 children)

I've managed to get it to work by generating a txt2img using the base model and then img2img that using the refiner but it doesn't feel right.

Once you change the model to the refiner in the img2img tab you need to remember to change it back to base once you go back to txt2img or you'll have a bad time.

Check out my profile for example image with and without the refiner or click here

[–]TheForgottenOne69 2 points3 points  (0 children)

Sadly it’s not integrated well atm… try vladmandic's automatic, it works directly with text2image

[–]enormousaardvark 30 points31 points  (2 children)

They should seed a torrent.

[–]freebytes 3 points4 points  (1 child)

This is a good idea. Huggingface should host the official torrents.

[–]zefy_zef 2 points3 points  (0 children)

I'm sure that would help them a whole lot lol.

[–]Shagua 24 points25 points  (29 children)

How much VRAM does one need for SDXL? I have a 2060 with 6GB VRAM and sometimes struggle with 1.5. Should I even bother downloading this release?

[–]RayIsLazy 23 points24 points  (12 children)

idk, SDXL 0.9 worked just fine on my 6GB 3060 through ComfyUI.

[–]feralkitsune 13 points14 points  (11 children)

IDK what it is about comfy UI but it uses way less VRAM for me on my card. I can make way larger images in comfy, much faster than the same settings in A1111

[–]alohadave 14 points15 points  (10 children)

It's much better about managing memory. I tried SDXL 0.9 on my 2GB GPU, and while it was extremely painful (nearly two hours to generate a 1024x1024 image), it did work. It effectively froze the computer, but it worked.

With A1111, I've had OOM messages trying to generate on 1.5 models larger than 768x768.

[–]Nucaranlaeg 6 points7 points  (1 child)

I can't generate 1024x1024 on my 6GB card on SD1.5 - unless I generate one image (at any resolution) with a controlnet set to "Low VRAM". Then I can generate 1024x1024 all day.

Something's screwy with A1111's memory management, for sure.

[–]spacetug 2 points3 points  (1 child)

Pretty sure it would be faster to run it on CPU only at that point. With a 1.5 model, you could get a 512x512 image in a few minutes on a midrange CPU, assuming you don't run out of system RAM. Multiple hours sounds like it wasn't just spilling into RAM but also a pagefile.

[–]mrmczebra 14 points15 points  (1 child)

I only have 4GB of VRAM, but 32GB of RAM, and I've learned to work with this just fine with 1.5. I sure hope there's a way to get SDXL to work with low specs. I don't mind if it takes longer to render.

[–]fernandollb 2 points3 points  (0 children)

I am a bit of a noob, but I have read there are ways to make it work on 6GB cards, so I think you will be fine, just with some limitations; I'm not sure what those would be, maybe lower resolution.

[–]Lodarich 9 points10 points  (2 children)

0.9 runs fine on my gtx 1060 6gb

[–]lordpuddingcup 2 points3 points  (0 children)

8gb vram 16gb ram I believe is the recommended minimum

[–]Connect_Metal1539 1 point2 points  (0 children)

SDXL 0.9 works fine for my RTX 3050 4GB

[–]SDGenius 45 points46 points  (0 children)

I think my doctor called this syndrome premature exclamation.

[–]enormousaardvark 21 points22 points  (1 child)

R.I.P huggingface for the next 24 hours lol

[–]Touitoui 17 points18 points  (0 children)

CivitAI seem to be ready for SDXL 1.0 (search settings have the button "SDXL1.0") so...
R.I.P CivitAI for the next 24 hours too, hahaha

[–]Whipit 30 points31 points  (3 children)

Feel like this thread title should be edited until SDXL 1.0 is ACTUALLY released.

People will want a clear thread and link where to download as soon as it goes up. This thread just serves to confuse.

[–]StickiStickman 10 points11 points  (0 children)

You can't edit thread titles

[–]detractor_Una 21 points22 points  (4 children)

[–]ninjasaid13 7 points8 points  (0 children)

*Accidentally presses F4*

[–]detractor_Una 2 points3 points  (0 children)

Here we go everyone

[–]KrawallHenni 11 points12 points  (1 child)

Is it enough to download the safetensors and drop them in the models folder, or do I need to do some more?

[–]saintbrodie 32 points33 points  (41 children)

Images generated with our code use the invisible-watermark library to embed an invisible watermark into the model output. We also provide a script to easily detect that watermark. Please note that this watermark is not the same as in previous Stable Diffusion 1.x/2.x versions.

Watermarks on SDXL?

[–]__Hello_my_name_is__ 40 points41 points  (16 children)

Invisible watermarks to let everyone know the image is AI generated.

[–]R33v3n 25 points26 points  (7 children)

Can probably be disabled if it's added in post through a library. SD 1.5 does it too and Automatic1111 has a setting to turn it off.

[–]AuryGlenz 18 points19 points  (6 children)

The setting in Automatic1111 never worked - images were never watermarked one way or the other. The setting was eventually removed.

[–]thoughtlow 12 points13 points  (2 children)

I wonder how fast they'll be able to reverse-engineer this thing.

[–]demoran 2 points3 points  (0 children)

So it's a scarlet letter. That's comforting.

[–]michalsrb 37 points38 points  (10 children)

A watermark is applied by the provided txt2img code: https://github.com/Stability-AI/stablediffusion/blob/cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf/scripts/txt2img.py#L206

It can be easily removed, and it won't be applied by A1111 when using the model, unless the A1111 authors decide to include it.

It is a property of the accompanying code, not the model itself, unless another watermark is somehow trained into the model, which I doubt.
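For reference, the linked script uses the invisible-watermark package (`pip install invisible-watermark`). A rough sketch of embedding and reading a payload with that library follows; the `b"SDXL"` payload here is a placeholder for illustration, not the actual bits Stability embeds.

```python
WATERMARK = b"SDXL"  # hypothetical 4-byte payload, illustration only

def embed_watermark(bgr_image):
    """Embed WATERMARK into a BGR uint8 numpy image and return the result."""
    # Import kept local: the library pulls in opencv at import time.
    from imwatermark import WatermarkEncoder
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", WATERMARK)
    # 'dwtDct' is the frequency-domain method the SD scripts use.
    return encoder.encode(bgr_image, "dwtDct")

def read_watermark(bgr_image):
    """Try to recover the embedded payload from a BGR uint8 numpy image."""
    from imwatermark import WatermarkDecoder
    # The decoder needs to know the payload length in bits.
    decoder = WatermarkDecoder("bytes", 8 * len(WATERMARK))
    return decoder.decode(bgr_image, "dwtDct")
```

Since the watermark is applied in post by code like this, any frontend that simply skips the encode step produces unwatermarked images, which is the point being made above.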

[–]saintbrodie 3 points4 points  (0 children)

Thanks! That answers my question.

[–]Doug_Fripon 4 points5 points  (0 children)

Thanks for reviewing the code!

[–]Relocator 6 points7 points  (0 children)

Ideally the watermarks are stored in the file so that any future image training will know to skip these images to maintain fidelity. We don't really want new models accidentally trained on half-AI images.

[–]fernandollb 25 points26 points  (21 children)

First noob to comment: how do I actually download the model? I accessed the GitHub page but cannot see any safetensors to download, just a very light file.

[–]rerri 35 points36 points  (13 children)

When it drops, probably huggingface. (not there yet)

https://huggingface.co/stabilityai

[–]mfish001188 11 points12 points  (9 children)

Looks like the VAE is up

[–]fernandollb 1 point2 points  (8 children)

Do we have to change the VAE once the model drops to make it work? if so how do you do that in 1111? Thanks for the info btw

[–]mysteryguitarmJoe Penna - Stability Staff 11 points12 points  (0 children)

New VAE will be included in both the base and the refiner.

[–]metrolobo 6 points7 points  (2 children)

Nah, the VAE is baked into both the diffusers and single-file safetensors versions. Or it was for the 0.9 XL beta and all previous SD versions at least, so it's very unlikely to change now.

[–]fernandollb 4 points5 points  (1 child)

so if that's the case we just leave the VAE setting on automatic, right?

[–]metrolobo 3 points4 points  (0 children)

yep

[–]mfish001188 4 points5 points  (3 children)

Great question. Probably?

VAE is usually selected automatically, idk if A1111 will auto-select the XL one or not. But there is a setting in the settings menu to change the VAE. You can also add it to the main UI in the UI settings. Sorry I don't have it open atm so I can't be more specific. But it's not that hard once you find the setting

[–]99deathnotes 5 points6 points  (0 children)

they are listed here: https://github.com/Stability-AI/generative-models

but you get a 404 when you click the links to download

[–]lashman[S] 6 points7 points  (6 children)

Guess they put up the announcement a tad early, don't think the files are up on github just yet. Any minute now, though

[–]mysteryguitarmJoe Penna - Stability Staff 6 points7 points  (3 children)

The announcement is true for API / DreamStudio / Clipdrop / AmazonSagemaker.

Open source weights are set to go live at 12:30pm PST on HuggingFace.

[–]utkarshmttl 4 points5 points  (0 children)

How does one access the API? Dreamstudio?

Edit: got it! https://api.stability.ai/docs I wonder why is Replicate more popular over the official APIs, any ideas?

Edit2: why doesn't official API has Lora/Dreambooth endpoints?

[–]AlinCviv 17 points18 points  (0 children)

"SDXL 1.0 is out"
no, it is not, but they announced it anyway

why not just say it's coming out soon, instead of "now released"?

[–]batter159 18 points19 points  (1 child)

Narrator : it was, in fact, not out

[–]farcaller899 4 points5 points  (0 children)

What Michael really meant was that it was out, but couldn't be downloaded... yet.

[–]nevada2000 17 points18 points  (5 children)

Most important question: is it censored?

[–]sjukfan 4 points5 points  (0 children)

This should've been in the tl;dr

[–]zefy_zef 1 point2 points  (2 children)

course not :D They need this un-neutered from the start. They want the creations made from this to be good and they'll be letting down a very large part of the userbase if they begin with a censored base. Everything SDXL has to be built on top of this.

[–]Grdosjek 4 points5 points  (1 child)

Oh boy! Oh boy! Oh boy! Oh boy!

I wouldn't want to be a Huggingface server for the next 24 hours

[–]frequenttimetraveler 4 points5 points  (0 children)

Do they release torrents?

[–]massiveboner911 5 points6 points  (2 children)

Where is the model or am I an idiot?

[–]Touitoui 2 points3 points  (1 child)

Not available yet, they are currently talking about it on a discord event. Should be available at the end of the event or something.

[–]ARTISTAI 1 point2 points  (0 children)

The Huggingface links are live now.

[–]MikuIncarnator1 4 points5 points  (0 children)

While we are waiting for the models, could you please drop the latest processes for ComfyUI ?

[–]Bat_Fruit 4 points5 points  (1 child)

Deep joy!! Just a puny 3060M 6GB.

<image>

[–]No-Blood7224 4 points5 points  (1 child)

Would they release XL inpainting models?

[–]squareOfTwo 1 point2 points  (0 children)

I guess the model is already trained for inpainting ... i never tried it

[–]Whipit 4 points5 points  (5 children)

On Discord people are saying SDXL 1.0 will be released in 16 minutes from now :)

[–]HumbleSousVideGeek 2 points3 points  (0 children)

It’s there

[–]Mysterion320 1 point2 points  (3 children)

Do I go onto discord to download or will it be on github?

[–]reality_comes 2 points3 points  (0 children)

huggingface

[–]Whipit 1 point2 points  (1 child)

I'd imagine it will be a Huggingface link.

This is the link I'm seeing that people are watching...

https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0

Not sure what the Refiner link will be. We should start getting answers in about 10 minutes. Excited! :)

[–]Mysterion320 2 points3 points  (0 children)

I have those links open as well. pressing F5 just in case, lol.

[–]Vyviel 4 points5 points  (0 children)

Is there an idiots guide to getting it working in A1111?

[–]Aethelredditor 3 points4 points  (0 children)

My excitement for Stable Diffusion XL has been tempered by memory issues and difficulties with AUTOMATIC1111's Stable Diffusion Web UI. I am also a little disappointed by the prevalent stock image style, extreme depth of field, and the fact that all the people I generate look like supermodels. However, it definitely handles complex backgrounds and smaller details better than previous versions of Stable Diffusion (though hands still appear troublesome). I am eager to see what I can generate after some experimentation and experience.

[–]lordpuddingcup 13 points14 points  (7 children)

Where the heck is realistic visionXL 1.0 man these model tuners are taking forever, even deliberateXL isn’t out yet Jesus so slow….

Just kidding lol, but it is funny because you know as soon as SDXL 1.0 is out we're gonna have people actually bitching that the stupid model makers haven't released a 1.0 XL finetune yet

It’s gonna be like those job requirements that require 5 years experience for something that came out last week

[–]lost-mars 5 points6 points  (1 child)

I think we might have to take a step back...

Where the heck is SDXL 1.0 man these model makers are taking forever, Jesus so slow….

There, corrected it for you :)

[–]lordpuddingcup 2 points3 points  (0 children)

Haha, well, that's a mistake in their announcement, but I'm just laughing that even when it's out, people will start complaining that finetunes are taking too long. I'm surprised we don't see that already... where's Deliberate XL... before SDXL is even out lol

[–]Magnesus 2 points3 points  (1 child)

The fact that the core model isn't out yet either makes your joke even funnier.

[–]lordpuddingcup 1 point2 points  (0 children)

Yep, that's what made me think of it

[–]HeralaiasYak 1 point2 points  (0 children)

Emad hinted that they will be given access ahead of time, so they can start training before the official release

[–]msesen 4 points5 points  (0 children)

How do I update, guys? I have the AUTOMATIC1111 repo cloned with the 1.5 model.

Do I just run git pull on a command line to get the update, and then download the 1.0 model and place it into the models folder?

[–]99deathnotes 2 points3 points  (2 children)

it is available on ClipDrop but we cant access it yet on HuggingFace

[–]suby 2 points3 points  (2 children)

Can this be used for commercial purposes? I seem to remember something about a newer StableDiffusion model having limitations here, but i'm not sure if i imagined that.

[–]TeutonJon78 4 points5 points  (0 children)

I think you can use the output for anything you want (copyright issues notwithstanding). It's using the models for commercial uses that has restrictions usually (like hosting it on a paid generation service).

I may be wrong though, IANAL.

[–]ninjasaid13 1 point2 points  (0 children)

Yes.

[–]jingo6969 2 points3 points  (1 child)

Downloading! Do we just replace the 0.9 versions in ComfyUI?

[–]kolyamba69 2 points3 points  (0 children)

what about control net sdxl?

[–]DisorderlyBoat 2 points3 points  (2 children)

Huzzah! Good on them, this will be a great tool.

Can it be used in Automatic 1111 now? Basically by downloading it and putting it in the models folder so it's selectable from the checkpoint drop down?

[–]BjornHafthor 2 points3 points  (1 child)

Yes. What I still can't figure out is how to use the refiner in A1111.

[–]XBThodler 2 points3 points  (5 children)

Has anyone managed to get SDXL actually working on Automatic 1111?

[–]some_onions 1 point2 points  (0 children)

Yes, I did a fresh install and it works fine.

[–]Turkino 1 point2 points  (2 children)

I was able to update mine, put both the model and refiner into the models directory and selected the model file for my... well... model and it worked fine.

[–]markdarkness 2 points3 points  (0 children)

I got it to run and all, but... it's kind of okay at best? I'm sure in time as it gets worked on by the community it will see a jump like we saw between Base 1.5 and EpicRealism... but honestly, right now it eats a massive amount of resources to deliver somewhat better results -- in some cases. Mainly it's consistently better at backgrounds, that much is true. But eh.

[–]Sofian375 6 points7 points  (8 children)

[–]Bat_Fruit 9 points10 points  (2 children)

word is wait a couple of hours from now.

edit : A1111 needs an update for 1.0 but ComfyUI is solid.

it's 20:15 here ... 15 mins to go apparently!

20:31 - ITS LIVE!!!!

[–]cpt-derp 14 points15 points  (1 child)

That's some edging blueballs shit.

[–]Actual_Possible3009 1 point2 points  (0 children)

Yeah I have extra interrupted my "toilet routine".... for nothing. Will check later today again

[–]Working_Amphibian 6 points7 points  (1 child)

Not out yet. Why are you misleading people?

[–]Sandbwoy2 2 points3 points  (0 children)

It is available to use now on clipdrop.co

[–]junguler 3 points4 points  (0 children)

i'll wait until there is a torrent since i wasted 2 hours last night trying to download the 0.9 and it errored out after 9 gb

[–]SomeKindOfWonderfull 4 points5 points  (2 children)

While I'm waiting for the models to drop I thought I might try 1.0 out on ClipDrop "People running towards camera" ...I was kinda hoping for a better result TBH

<image>

[–]Stunning_Duck_373 4 points5 points  (0 children)

Big disappointment so far.

[–]iia 3 points4 points  (1 child)

It's out and I'm downloading it.

Edit: 130 seconds prompt-to-image on a P5000. Karras, 20 steps. Plug-and-play on ComfyUI.

[–]99deathnotes 2 points3 points  (0 children)

ditto

[–]NeverduskX 3 points4 points  (0 children)

Can confirm it works on Auto (or at least the UX branch I'm on, which follows the main Auto branch). Uses a lot more VRAM, memory, and generation is slower. For now I'll probably stick to 1.5 until some good community models come out of XL.

[–]first_timeSFV 6 points7 points  (7 children)

Is it censored?

[–]GeomanticArts 7 points8 points  (3 children)

Almost certainly. They've dodged the question every time it has been asked, mostly responding with 'you can fine-tune it'. I take that to mean it has as dramatically reduced an NSFW training set as they could get away with. Probably close to none at all.

[–]Oubastet 2 points3 points  (1 child)

I tried for about ten minutes with 0.9 out of curiosity. Everything was very modest or artful nude with crossed arms and legs, backs to the "camera", etc. Nothing wrong with that but yeah, it appears that NSFW is at least suppressed.

The subject matter is likely there but may require some training to bring it out. Not sure myself, I've never tried a fine tune or Lora.

[–]ptitrainvaloin 3 points4 points  (1 child)

Congrats, but why nothing on Huggingface (yet, too soon?) *The SDXL 1.0 VAE is up on it! "Come join us on stage with Emad and Applied-Team in an hour for all your burning questions! Get all the details LIVE!" Link?

*It's out now!

[–]I-am_Sleepy 2 points3 points  (0 children)

I think it will be released in an hour (see the release announcement)

[–]ImUrFrand 4 points5 points  (2 children)

when will it be compatible with automatic1111

[–]Affectionate_Foot_27 5 points6 points  (0 children)

Immediately, once SDXL 1.0 is actually released

[–]Touitoui 2 points3 points  (0 children)

SDXL 0.9 works already, so it should be compatible with 1.0 too.

[–]DavesEmployee 1 point2 points  (0 children)

What’s the model name on their API?

[–]Philipp 1 point2 points  (2 children)

Is there a trick to always generate words?

I tried e.g. coffee with cream text "dream big" but it's hit and miss...

[–]Opening-Ad5541 1 point2 points  (0 children)

lets fucking go!

[–]joaocamu 1 point2 points  (1 child)

Is there any difference in terms of VRAM consumption over SD 1.5? i ask this because i'am a "lowvram" user myself, just want to know if i should have any expectations

[–]TeutonJon78 2 points3 points  (0 children)

If you're lowvram already, expect not to run it (or at least not until people optimize it). They bumped the minimum recommended requirements to 8GB VRAM.

Nvidia 6GB people have been running it on ComfyUI though.

[–]LuchoSabeIngles 1 point2 points  (0 children)

They gonna have a HuggingFace space for this? My laptop’s not gonna be able to handle that locally

[–]Dorian606 1 point2 points  (2 children)

Kinda a noob-ish question: what's the difference between a normal model and a refiner?

[–]detractor_Una 5 points6 points  (1 child)

Normal is for initial image, refiner used to add more detail. Just join discord. https://discord.gg/stablediffusion

[–]Dorian606 1 point2 points  (0 children)

Oh cool thanks!

[–]Longjumping-Fly-2516 1 point2 points  (0 children)

they just said 1hr-ish on discord.

[–]powersdomo 1 point2 points  (1 child)

Awesome! Question: there are two tokenizers - I assume one is the original leaked one and the new one is completely open source - do both of them understand all the new subtleties like 'red box on top of a blue box' or only the new one?

[–]powersdomo 1 point2 points  (0 children)

Saw an article saying the language model is a combination of OpenAI's original CLIP and the OpenCLIP model introduced with SD 2.0:

'The language model (the module that understands your prompts) is a combination of the largest OpenClip model (ViT-G/14) and OpenAI’s proprietary CLIP ViT-L'

[–]HumbleSousVideGeek 1 point2 points  (0 children)

It's available!!!! Huggingface down in 1 min

[–]Noeyiax 1 point2 points  (0 children)

i was here, congrats, can't wait!!

[–]Roubbes 1 point2 points  (0 children)

Waiting for an ELI5 tutorial

[–]RD_Garrison 1 point2 points  (1 child)

At Clipdrop they now stick the big, ugly, intrusive watermark on the image by default, which deeply sucks.

[–]monsieur__A 1 point2 points  (0 children)

Amazing. Stability.ai's communication says that ControlNet will be supported immediately. Is that true?

[–]AnyNamesLeftAnymore 1 point2 points  (2 children)

All I want to know is a) can I drop it in the same models folder with all my old 1.5 stuff and b) can I still use that stuff if I do?

[–]Entire_Telephone3124 1 point2 points  (1 child)

Yes, it goes in the models folder. If you get errors, check the console; some extensions freak out with it (prompt fusion, for example), so disable them.

[–]seandkiller 1 point2 points  (0 children)

It seems fairly capable so far, but I imagine I'll wait for LORAs and such to release before I use it more. I was surprised that it can generate a variety of anime styles even without loras, though. Generated some nice looking stylistic things like Tarot cards well enough, too.

It would be nice if it generated faster and if I could actually use the refiner more reliably without getting an out of memory error from Auto, but those were both more or less expected for me.

[–]Brianposburn 1 point2 points  (0 children)

So hyped to see this, but being a complete noob, I have some questions. I'm using the SDUI - did a new clean install of 1.5.0 in a different directory.

I want to make sure my understanding is right of how it works with the new SDXL:
* LoRAs don't work (yet!)? Is that an accurate statement?
* Textual inversions will still work (i.e. DeepNegative, bad hands, etc.)?
* I thought I read ControlNet and Roop won't work with the new model yet... that right?

Probably simple questions, but wanted to make sure I understood before I started copying over stuff to my nice shiny clean environment...

[–]tronathan 1 point2 points  (0 children)

Any recommendations for a plug-and-play docker image?

[–]ptitrainvaloin 1 point2 points  (0 children)

Now, need an inpainting model of SDXL 1.0

[–]uberfunstuff 1 point2 points  (0 children)

Anybody got it to work with AMD stuff on a mac?

[–]PlayBackgammon 1 point2 points  (0 children)

Does it work on Apple Silicon? Seems not.