
[–]PiyarSquare[S] 24 points25 points  (19 children)

This model usually generates black and white designs that are easy to vectorize and cut with a programmable cutting machine, like a Silhouette or Cricut. Available at https://huggingface.co/PiyarSquare/stable_diffusion_silz.
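
For anyone who wants to script this instead of using a UI, here's a minimal sketch with the diffusers library. It's untested and assumes the repo hosts diffusers-format weights; if it only ships a .ckpt, you'd convert it first or load it in a webui.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the silz model from the Hugging Face repo linked above
    # (assumes diffusers-format weights are published there).
    pipe = StableDiffusionPipeline.from_pretrained(
        "PiyarSquare/stable_diffusion_silz",
        torch_dtype=torch.float16,
    ).to("cuda")

    # "silz style" is the trigger phrase; the subject here is just an example.
    image = pipe("silz style. a fox in a forest.").images[0]
    image.save("fox.png")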

[–]prozacgod 15 points16 points  (6 children)

This is a brilliant idea. It's so simple, yet so f'in useful... and the results are not only better than algorithmic software, but they can be customized and fiddled with, or... you can just generate 100 until one looks good.

[–]PiyarSquare[S] 9 points10 points  (5 children)

Thanks.

Vinyl designs were my first practical use of stable diffusion. I cut a cover image for my ebook reader. There is something magical about turning computer dreams into physical objects.

Check out txt2vector and the PaperCut dreambooth model.

[–]Light_Diffuse 5 points6 points  (4 children)

If you've got a laser printer, check out foiling too:

  1. Print out the design at maximum quality so the maximum amount of toner is laid down on the paper/card.

  2. Tape the special foil (cheapest bought on AliExpress) over the printed area.

  3. Run the paper with the taped foil through the printer again, printing a blank sheet.

  4. Behold your shiny design!

When you run the sheet through the second time, the toner melts and acts like glue, so the foil sticks only where there is toner.

[–]NinjaAmbush 0 points1 point  (3 children)

That sounds neat. Could you link an example of the product?

[–]Light_Diffuse 0 points1 point  (2 children)

These are the ones I've got; they're expensive because you get a lot.

These are cheaper and 5m is plenty for experimenting.

There are quite a few sellers. Just make sure the description includes "laser printer" or "toner", because there are other methods for transferring foil, like "hot stamping", that don't use toner, so that foil wouldn't work.

[–]NinjaAmbush 0 points1 point  (1 child)

1 cent with free shipping? Wtf?

Also, thanks!

[–]Light_Diffuse 0 points1 point  (0 children)

"1 cent with free shipping? Wtf?"

That doesn't sound right. They're not that cheap!

[–]prozacgod 1 point2 points  (8 children)

I wonder if a model could be trained on multi-layered art, like this - https://www.michaelfrankpeterson-artist.com/a-walk-in-the-woods-painted-layered-glass-sculpture-michael-frank-peterson.html

I guess you could divide the pictures into quadrants, each being a layer in the original art piece, and then train on that? That's not too dissimilar to the way videos were being trained on a diffusion model.
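
A rough sketch of that quadrant idea, assuming you had a photo of each physical layer (the helper and file names are hypothetical):

    from PIL import Image

    # Hypothetical helper: tile photos of four physical layers into one
    # 2x2 image that could serve as a training sample.
    def tile_layers_2x2(layer_paths, tile=256):
        grid = Image.new("RGB", (tile * 2, tile * 2), "white")
        for i, path in enumerate(layer_paths[:4]):
            layer = Image.open(path).convert("RGB").resize((tile, tile))
            grid.paste(layer, ((i % 2) * tile, (i // 2) * tile))
        return grid

    # Placeholder file names for photos of each layer, front to back.
    tile_layers_2x2(["layer0.png", "layer1.png", "layer2.png", "layer3.png"]).save("grid_sample.png")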

[–]QualifiedNemesis 3 points4 points  (1 child)

You might be interested to know that in Computer Vision research, this multi-layered concept has recently been well studied as a 3D image representation. The academic term is "Multiplane Image" (MPI).

I think it would be challenging to go directly from text to MPI, but I wouldn't rule it out!

[–]GBJI 0 points1 point  (0 children)

That's actually a very good idea.

[–]milleniumsentry 2 points3 points  (2 children)

[image: example output using the PaperCut model]

That is using the PaperCut model and a quickly cobbled-together prompt. I am sure that with a bit of patience you could do far, far better...

A beautiful layered paper sculpture of an autumn path, by aaron horkey, trending on artstation, parallax, parallax layers, 3d illusion, multilayered, quilling, natural flat colors, PaperCut, glossy paper, highly detailed and realistic

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2768116468, Size: 512x512, Model hash: 08f7a22d
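
If anyone wants to reproduce that outside the webui, here's a rough diffusers equivalent. The model path is a placeholder, and diffusers seeds differently from AUTOMATIC1111, so the same seed won't reproduce the exact image.

    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    # "path/to/papercut-model" is a placeholder for wherever you keep the PaperCut weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/papercut-model", torch_dtype=torch.float16
    ).to("cuda")

    # "Euler a" in the webui corresponds to the Euler ancestral scheduler.
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

    prompt = (
        "A beautiful layered paper sculpture of an autumn path, by aaron horkey, "
        "trending on artstation, parallax, parallax layers, 3d illusion, multilayered, "
        "quilling, natural flat colors, PaperCut, glossy paper, highly detailed and realistic"
    )

    generator = torch.Generator("cuda").manual_seed(2768116468)
    image = pipe(
        prompt,
        num_inference_steps=20,
        guidance_scale=7,
        height=512,
        width=512,
        generator=generator,
    ).images[0]
    image.save("papercut_path.png")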

[–]prozacgod 1 point2 points  (1 child)

Whoa! I was hoping to get layers out of it, but damn this is nice.

[–]milleniumsentry 1 point2 points  (0 children)

With some masking work and a pair of scissors, you could layer these fairly well. Likewise, generating a bunch, and scaling down the layers behind would also work for the perspective trick, instead of relying on one image for the whole piece.

[–]CrystalLight 0 points1 point  (2 children)

From what I know (which is not a ton, but I've trained on DB, embeddings, and hypernetworks) and from looking at those photos, it looks like a serious challenge.

I'm not sure you could really train on that as a style, because unless you already know how the pieces are made, the structure isn't at all obvious from photos alone. You'd probably need a LOT of photos of something like that to work with. I guess if you actually owned one you really could make a model, because you could take all of the pics yourself.

This is just my initial impression. I thought "Hey, maybe I can do that. It sounds neat!" then I looked at the pics and was just like "Oh no."

[–]prozacgod 0 points1 point  (1 child)

Yeah, where would you get the source art to train from? I can imagine the difficulty.

I mean, imagine if the art piece had 4 layers: dismantle it, arrange it in a 2x2 grid... and train on those images.

And then you prompted: "a woman standing in a forest"

There's no obvious geometric connection to anything else in the scenery, like... why is the woman in the 3rd quadrant and not the 2nd? Why are there 2 trees in the 1st quadrant? Why is the background filled almost fully, with few transparent sections?

You'd have to train a model much like the waifu diffusion team did.

I think a subset of this sort of thing could be trained in DreamBooth if you were pedantic about it, and especially if you created it for a particular design goal.

EDIT: I just wondered if I could create this sort of thing with help from SD; gonna try it.

[–]CrystalLight 0 points1 point  (0 children)

Good luck and share the results if successful. It would be cool!

[–]_WhoisMrBilly_ 1 point2 points  (2 children)

This is exactly what I was looking for, for both projection mapping and laser cutting. Awesome!

[–]NinjaAmbush 1 point2 points  (1 child)

Yes, was totally thinking of using this to generate vectors for laser cutting / engraving!

[–]_WhoisMrBilly_ 0 points1 point  (0 children)

Pretty new to Hugging Face and just kinda fumbling my way through. Is it possible to use this online? I've tried a few models online (plug-ins? methods?) on the site directly, and I don't understand why I can run some through the browser and some I can't.

I was able to use the Disney character style one before through Safari on iOS, but I can't seem to use this one the same way. What am I doing wrong?

I also installed an SD program on my Mac, but I haven't used any additional models or plugins.

[–]ArtifartX 2 points3 points  (0 children)

Really loving the models people are training, especially ones like these with real utility.

[–]alumiqu 2 points3 points  (0 children)

This is so much better than I would have expected. Very cool.

[–]ciavolella 2 points3 points  (1 child)

OP: can you give us an idea of your workflow for these? My wife runs a crafting business using a Cricut, and we've used the txt2vector plugin to make some stuff, but it still requires a bit of "clean-up" work in a vector art program. Does this model solve any of that? Can you use it in conjunction with txt2vector? Thanks! And great work, BTW. These look awesome and I can't wait to play around with it.

[–]PiyarSquare[S] 2 points3 points  (0 children)

That is exactly what this model is for. I found that the shapes are smoother and better spaced than you get from the standard model with the txt2vector modifiers.

I have been putting "silz style" at the front of the prompt, followed by a short description. The most complicated prompt above was "silz style. a woman with long hair made from musical notes. inspirational. spiritual." I use the Euler sampler at 50 steps with CFG between 15 and 20.

I load the png into Adobe Illustrator and use image trace. I do any clean-up in Illustrator. The trace settings have more flexibility than you get with the txt2vector output. I then send the resulting paths to my cutting software.

That said, I just tried txt2vector using this model and had excellent results with the automatic tracing.
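
If you'd rather script the tracing step than use Illustrator, a minimal sketch with the potrace CLI (the same tracer that's built into txt2vector) could look like this. File names are placeholders, and potrace must be installed separately.

    import subprocess
    from PIL import Image

    # Threshold the generated PNG to pure black and white and save as PBM,
    # the bitmap format potrace reads natively. File names are placeholders.
    img = Image.open("silz_output.png").convert("L")
    img = img.point(lambda p: 255 if p > 128 else 0, mode="1")
    img.save("silz_output.pbm")

    # Trace the bitmap to an SVG of cuttable paths.
    subprocess.run(
        ["potrace", "silz_output.pbm", "--svg", "-o", "silz_output.svg"],
        check=True,
    )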

Happy crafting!

[–]nometalaquiferzone 1 point2 points  (0 children)

Great job!

[–]NateBerukAnjing 1 point2 points  (1 child)

Can it be automatically converted to vectors if you use txt2vector? I don't get it.

[–]PiyarSquare[S] 4 points5 points  (0 children)

I trained the model on black and white images with simple lines that would be easy to cut. It should be enough to use "silz style" in your prompt, but the other terms added by txt2vector ("black and white, vector graphics, tribal tattoo") don't hurt and sometimes help. I personally use Illustrator to convert black and white into vector images for finer control, but the POTRACE built into txt2vector is pretty good.

[–]NateBerukAnjing 1 point2 points  (1 child)

What's the prompt for the mountain range and the city?

[–]PiyarSquare[S] 2 points3 points  (0 children)

"silz style. Sequoia National Park." and "silz style. Times Square."

[–]2peteshakur 1 point2 points  (0 children)

lovely, thx op! :)

[–]icefreez 0 points1 point  (0 children)

Fantastic for icons and logos!

[–]Sandzaun 0 points1 point  (3 children)

Is it somehow possible to combine this with a DreamBooth model of my face?

[–]PiyarSquare[S] 1 point2 points  (0 children)

I had no luck with DreamArtist.

But I had some success using Checkpoint Merger with a personal model, mixed at 35:65, style model to person model. It might work better if I retrain the person model starting from the style model.
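
For anyone curious what the merge actually does: it's just a weighted average of the two checkpoints' weights. A rough sketch (file names hypothetical; both checkpoints must share the same architecture):

    import torch

    # Weighted average of two SD checkpoints, roughly what the webui's
    # Checkpoint Merger does. File names here are hypothetical.
    style = torch.load("silz_style.ckpt", map_location="cpu")["state_dict"]
    person = torch.load("person_dreambooth.ckpt", map_location="cpu")["state_dict"]

    alpha = 0.35  # 35% style model, 65% person model, as described above
    merged = {
        k: alpha * style[k] + (1 - alpha) * person[k]
        for k in style
        if k in person and style[k].shape == person[k].shape
    }

    torch.save({"state_dict": merged}, "merged_style_person.ckpt")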

[–]fanofhumanbehavior 1 point2 points  (0 children)

If you're still interested in this, here's what I would try: take all the training images for your personal model and run them through img2img using the silz style ckpt (tweak parameters to get them looking great), then use DreamBooth and train with the new vector-looking images. Try using silz or the standard 1.5 ckpt as the source ckpt, and if you do try it, please report back with results!
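
A rough sketch of that img2img conversion step with diffusers (untested; the repo path, input folder, prompt, and strength are all guesses to tune):

    import glob
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Assumes diffusers-format weights; otherwise run img2img in your usual UI.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "PiyarSquare/stable_diffusion_silz",
        torch_dtype=torch.float16,
    ).to("cuda")

    # "training_photos/" and the prompt are placeholders for your own data.
    for i, path in enumerate(sorted(glob.glob("training_photos/*.jpg"))):
        init = Image.open(path).convert("RGB").resize((512, 512))
        out = pipe(
            "silz style. portrait of a person.",
            image=init,
            strength=0.6,  # how far to move from the original photo; worth tuning
            guidance_scale=7.5,
        ).images[0]
        out.save(f"restyled_{i:03d}.png")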

[–]PiyarSquare[S] 0 points1 point  (0 children)

I want to try the DreamArtist extension for the AUTOMATIC1111 webui, which promises one-image DreamBooth. I imagine that could work with this as the base model?

[–]Expicot 0 points1 point  (0 children)

It's great, but would there be a way to make it work the same way with img2img?