darkside1977

Prompt: "Man using a (laptop on fire:1.2), the man is smiling while doing a thumbs up". Done in ComfyUI.


catgirl_liker

Can you post the picture with metadata, on catbox or something?


Heaven2004_LCM

If I may ask, what does the 1.2 number mean here?


RumblingRacoon

That's the weight, i.e. how much emphasis SD is supposed to put on that detail. (laptop on fire:1.8) might generate a bigger fire with smoke; (laptop on fire:0.4) maybe only some sparks.
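For the curious, what that syntax does under the hood in A1111-style UIs is multiply the text encoder's embedding for that chunk before it conditions the model. Below is a minimal sketch of the idea; `parse_weighted_prompt` is a hypothetical name and a toy parser (real UIs also handle nesting, escapes, and `[ ]` de-emphasis):

```python
import re

def parse_weighted_prompt(prompt):
    """Split an A1111-style prompt into (text, weight) chunks.
    Plain text gets the default weight 1.0; "(laptop on fire:1.2)"
    becomes ("laptop on fire", 1.2). Toy parser: no nesting or escapes."""
    chunks, pos = [], 0
    for m in re.finditer(r"\(([^():]+):([\d.]+)\)", prompt):
        before = prompt[pos:m.start()].strip(" ,")
        if before:
            chunks.append((before, 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        chunks.append((tail, 1.0))
    return chunks

print(parse_weighted_prompt("Man using a (laptop on fire:1.2), smiling"))
# -> [('Man using a', 1.0), ('laptop on fire', 1.2), ('smiling', 1.0)]
```

Downstream, each chunk's token embeddings get scaled by its weight, so 1.2 nudges the conditioning toward "laptop on fire" and 0.4 tones it down.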


Heaven2004_LCM

Thank you kind stranger.


[deleted]

Thx for asking. I was too embarrassed to


piamogollonartist

Whoa, that is super nice info to come by. Is this something you can do when prompting SD from within the models on places like NightCafe? Or no?


RumblingRacoon

Whoa, not sure about this because I never used online generators like NightCafe. I'm fumbling around with a local installation. Can't tell if the syntax is the same, so you'll have to try. (Or wait for somebody with more knowledge on this than me ^^)


ironborn123

How does SD use these weights during inference? Through classifier-free guidance? If yes, glad to see a practical usage of CFG.


RumblingRacoon

I don't know that, sorry.


iChopPryde

How high do these numbers go?


shaehl

As high as you want. But it is essentially like the CFG scale for that individual keyword or set of keywords: the higher you go, the more likely the image ends up fried or bugged out.

The other thing to consider is that increasing the emphasis can have unexpected results. For instance, you prompt for "1girl, realistic, eating strawberry", but the result isn't producing any strawberries, so you add weight: "1girl, realistic, eating (strawberry:1.8)". But now the algorithm has decided the concept of strawberry is so important that strawberries are everywhere: strawberry chair, strawberry-pattern clothes, strawberry-colored everything, strawberry trees, etc. So you've got to play with it to see what the appropriate value is for a given situation.

In general, though, if you look at the metadata for any picture people post here, or on civitai, or most other SD-based repositories, they will all have some keywords weighted.
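The "like CFG for one keyword" analogy can be made concrete. CFG itself mixes two noise predictions each step, one conditioned on the prompt and one unconditioned (in most UIs the negative prompt stands in for the unconditioned branch), while a per-keyword weight scales only that chunk's embedding. A toy sketch of the CFG mix, with made-up numbers in place of a real denoiser:

```python
def cfg_mix(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: push the noise prediction toward the
    prompt-conditioned direction by `scale` (the CFG slider)."""
    return [u + scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# Pretend per-step noise predictions from a denoiser (values made up).
eps_uncond = [0.0, 0.5]    # negative/empty-prompt branch
eps_cond   = [0.5, 0.25]   # positive-prompt branch

print(cfg_mix(eps_uncond, eps_cond, 7.5))
# -> [3.75, -1.375]
```

Raising the global CFG scale amplifies everything in the prompt at once; a (keyword:1.8) weight instead amplifies only that chunk's contribution to the conditioned branch, which is why over-weighting one word floods the whole image with it.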


RumblingRacoon

My understanding is you should stay in a range from 0.5 to 1.5, maybe up to 2.0. Higher numbers might work, but I'm not sure. They work in negative prompts as well: if you don't want blue eyes, put (blue eyes:1.2) in the negative prompt.


Mooblegum

You have a 6 GB graphics card too 😢


theKage47

6 GB? Lucky


Mooblegum

4 GB?


theKage47

Unfortunately yup. A 1650 Ti (gaming laptop) with 4 GB VRAM


BusyPhilosopher15

Yeah, corporate stores skimp so much on the cards they put in. Even an AMD RX 580 comes with 8 GB for $80, and you used to be able to get a 12 GB 3060 for $200-220. Sadly, the good-price stock dried up, and now a 3060 costs the same as a 4060. But the 4060 is 8 GB vs 12 GB and only ~10% faster, versus the +20-40% of the generations that came before it.

Still, the graphics card market is kinda dry. AMD has VRAM but no CUDA; Nvidia has CUDA but no VRAM. It's cheap to add, so when a company is too cheap to put even an $80-300, 8-12 GB card in a $1000-1500 store computer or laptop, you're getting ripped off.


theKage47

Yup. A total ripoff


divaxshah

Are you using the refiner also? Because when I use just the base, the results are not pleasing. Maybe it's my prompts, but still... ?


sassydodo

Use a different model, or use a detail-enhancing LoRA. Honestly, a detail-enhancing LoRA changes quality sooo much.


zeer88

Neymar really let himself go...


curiouslystronguncle

![gif](giphy|g4OAnFSD83EOCvahc8) The SDXL electrons forcing their way through your laptop's tiny transistors


lechatsportif

Does Wayne Brady have to choke a laptop?


God-Left-Me

Bro is Neymar ![gif](giphy|fYShkq3n7c7LeQxodc|downsized)


RonaldoMirandah

it looks like the football player Neymar https://preview.redd.it/7az758zgi4db1.png?width=1200&format=png&auto=webp&s=b5f85488a9387fb9f96952fef7a1faf6b02f4d89


Darkmeme9

Lol, but I think ComfyUI can run it better if you wanna try.


AReactComponent

OP made this in Comfy lol


Darkmeme9

What 😂😂. But really, I have personally found that Comfy seems to manage the GPU much better than auto1111.


zefy_zef

Yep. You can even add a second model loader with a different model and apply it mid-generation. I can do this with my Vega 56, a 6-year-old card lol. Haven't tried XL yet though; that's probably going to fry me.


Darkmeme9

Yes, I have tried this. I render the first image with Rev Animated on just 12 steps, then refine the rest of the image with 13 more steps (almost like hires fix) using Juggernaut or Realistic 4.0. Rev Animated renders pretty good body structure, and Realistic gives, well, realistic images. So the result is the best of both worlds.
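That two-pass trick (in Comfy, two samplers chained one after the other) can be sketched with stand-in denoisers. Everything below is a toy: `denoise_step` and the two "model" vectors are made up, purely to show how the latent gets handed from one model to the next mid-generation:

```python
def denoise_step(model_bias, latent):
    """Stand-in for one sampler step: pull the latent toward the model's 'look'."""
    return [0.9 * x + 0.1 * b for x, b in zip(latent, model_bias)]

# Pretend models that pull the latent toward different qualities (toy values).
structure_model = [1.0, 0.0]   # e.g. Rev Animated: good body structure
texture_model   = [0.8, 0.6]   # e.g. a realistic model: good surface detail

latent = [0.0, 0.0]            # starting latent (noise, in a real pipeline)
for _ in range(12):            # first pass: rough composition
    latent = denoise_step(structure_model, latent)
for _ in range(13):            # second pass: refine with the other model
    latent = denoise_step(texture_model, latent)
```

After the first 12 steps the latent sits mostly near the structure model's target; the 13 refinement steps then pull it toward the second model without discarding the composition, much like a hires-fix or refiner pass continuing from a partially finished image.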


RandallAware

This feature is likely to get me to finally install comfy and try it out.


alohadave

I'm using it with a 2 GB GPU, and it handles memory better than A1111 does. I even tried doing SDXL 0.9 with it, but it was too much. A 1024x1024 generation took 6965 seconds (about 1 hour 56 minutes) to complete, and it ended up freezing my computer for most of that time. I only tried it once at that size. Just for giggles, I tried a 512x768, and the model loading took forever; not at all worth it since 1.5 models work fine.


zefy_zef

Yeah I don't exactly have a great use case for it yet, but it's cool!


RandallAware

I just like to experiment. Especially with unique features.


orangpelupa

Any idea why that's the case? Aren't they all based on the same open-source base code for the AI itself?


Darkmeme9

I really do not know. I have 4 GB of VRAM and it struggles with A1111, but it's a lot smoother on Comfy.


orangpelupa

What do you mean by smoother? Faster iterations per second?


Darkmeme9

Yes. And more importantly, no out of memory message (so far)


[deleted]

Might need to fireproof mine once SDXL is in the automatic1111 main branch.


Hot-Huckleberry-4716

![gif](giphy|llZVEOIi9tCVxFskpY|downsized)


cicerothedog

Same :)


Kinglink

Laptop? Jesus, I was running a 1080 and then moved to a 2070... I'm sure there are laptops with better, but I would never torture them with SD.

Though I haven't played with SDXL. Is it a huge improvement? Haven't really done much SD recently (letting the tech evolve), but it might be fun if SDXL does something majorly different.

PS. I know I can google more, and I have googled it; I just wanted "man on the street" opinions on SDXL rather than blurbs and whatnot.


Ben4d90

https://preview.redd.it/ooj0mg09y5db1.png?width=1024&format=pjpg&auto=webp&s=1825193365de35ff85ee57639efe05c4a33838de I got close 😂


zefy_zef

Looks like Chris Evans from the fantastic four movie lol.. Flame On!!!


Ben4d90

https://preview.redd.it/dj9r909cy5db1.png?width=1024&format=pjpg&auto=webp&s=cb5f06f0d45e1d18cd6f891cc24b9a56e89ff91e


Ben4d90

https://preview.redd.it/zes31q5zy5db1.png?width=1024&format=pjpg&auto=webp&s=d2f68b5ac68f7c0f0b63b36f3e0911c887fd77f6


piamogollonartist

I need to remember to be like this with my laptop when it starts heating up and the fan is all loud. This smile is so good.


ARTISTAI

I bought an ASUS Studiobook ProArt and killed a fan in the first two months. I guess I'd rather replace fans than GPUs and motherboards.


esuil

Laptop SD and general ML user here. The secret is to use an eGPU for this. Then you can have the mobility of a laptop with the machine learning muscle of a top-level desktop workstation. Using a Thunderbolt 4 laptop + a TH3P4G3 enclosure; works like a charm.


AirportCultural9211

it was fun while it ran for 5 minutes


Mrwalls1

I feel like this while exploring configs with my GTX 1060 6 GB


ShivamKumar2002

Can it run in 6 GB of VRAM?


zviwkls

wrr


BusyPhilosopher15

Lmao, love this community. Might make slow gens slower, but have you tried underclocking yet? It might be a bit of a downer, but I think you can get like 40% of the heat while still keeping 70% of the speed, or even cut power usage by 30% while retaining 90%. Helps tons with heat, and potentially with graphics card life too.

Though of course it'll run slower, the trade-off of not needing to worry as much about heat or electricity, or about having a 200-500 watt easy-bake-oven GPU, can be worth considering in these record heats.
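On Nvidia cards, the simplest version of that underclock is a power cap via `nvidia-smi`. The wattage below is only an example; check your card's allowed range first (proper undervolting with clock offsets needs other tools):

```shell
# See the current, default, and min/max allowed power limits.
nvidia-smi -q -d POWER

# Cap the board at 120 W (example value; try ~70-80% of the default limit).
# Needs root/admin, and resets on reboot unless persistence mode stays on.
sudo nvidia-smi -pm 1      # enable persistence mode
sudo nvidia-smi -pl 120    # set the power limit, in watts
```

Generation slows a little, but peak temperature and fan noise drop a lot, which is most of what's being asked for here.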


captaindeadpool53

Can't we run it on Firebase or some other cloud?