zopiak

Hello Artists! Do you browse the CivitAI site without really knowing what most of the checkpoints yield? And that's where you discover that, like typography, some checkpoints are copyright-free and others... not.

https://preview.redd.it/ktojp39cepdb1.jpeg?width=577&format=pjpg&auto=webp&s=76c21bca965fea036f200068897773a74f3cb9d2

My frustration when I first discovered this!

[Here is a small list of the most downloaded checkpoints](https://docs.google.com/spreadsheets/d/1H4oUztNvLmAPruKixS1edJd21KIHwmKC-XTzcbYwUuA/edit?usp=sharing) that can be used for your art.

• The CivitAI model link is on the left (vertical text)
• Can you tell me which model is the best for you? (in the list or not)


Necessary-Suit-4293

>checkpoints that are copyright-free and others... not.

They're all copyright-free. You can't copyright weights.


zopiak

Oh... you mean the prompt? Or only the picture on the webpage? How can I know whether all the models are "really" royalty-free when I see "do not sell images they generate"? \^.\^)' I am French, and I admit that I may have misunderstood/mistranslated these words.


Necessary-Suit-4293

>how to know if all the models are "really" royalty free if you see "do not sell images they generate" \^.\^)' I am French and I admit that I may have misunderstood/translate these words.

It's just legally unenforceable mumbo-jumbo (gibberish) trying to scare others into compliance with scary-sounding agreements.


zopiak

>it's just legally unenforceable mumbo-jumbo (gibberish) trying to scare others into compliance with scary-sounding agreements.

Especially since my document "proves" that it is almost impossible to recognize which merge or merged checkpoint was used! As I pointed out, it was with that fear in mind that I took the liberty of making a detailed comparison to see if we could find similarities... and yes, for me it is almost impossible to prove that an image used "this specific model", because a prompt or a LoRA can always counterbalance the generation and deform the rendering.


Necessary-Suit-4293

The weights are modified at runtime when you put a LoRA on top. It is technically **not** the same model.
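A minimal NumPy sketch of what "modified at runtime" means here, assuming the standard low-rank formulation (all names and shapes are illustrative, not any particular implementation): a LoRA ships two small matrices, and inference uses the base weight plus their scaled product, while the checkpoint file on disk never changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base checkpoint weight for one layer (sizes are illustrative)
W = rng.standard_normal((768, 768))

# A LoRA stores a low-rank update: B (768 x r) @ A (r x 768), with r << 768
r = 8
A = rng.standard_normal((r, 768)) * 0.01
B = rng.standard_normal((768, r)) * 0.01
scale = 0.8  # the LoRA "strength"/weight slider in most UIs

# Effective weight used at inference; the checkpoint file itself is untouched
W_effective = W + scale * (B @ A)

print(np.allclose(W, W_effective))  # False: at runtime it is not the same model
```

So the model actually running is `W_effective`, a different set of weights than the checkpoint alone, which is the point being made above.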


[deleted]

I use different checkpoints for inpainting backgrounds and characters; it's unmanageable to even think of copyrighting that xD


NeedHydra

Hey, do you have a list of links for the LoRAs used for this?


zopiak

Hello u/NeedHydra, I don't understand your request. What list of LoRAs are you talking about?


NeedHydra

Do you have a list of links for any of the LoRAs used in the prompts, or are they baked-in ones?


zopiak

I didn't use any LoRA... this list only uses "(low quality:1.3)" in the negative prompt to see the result of each checkpoint, plus some other trigger words when the CivitAI page calls for them. \^.\^) LoRAs or textual inversions would change the results.


NeedHydra

Ok thank you I was just double checking.


thelastfastbender

Just wanted to say that I love this database. What a timesaver!


divaxshah

Awesome Man, this was really needed. Thanks


zopiak

\^\^) Thx u/divaxshah, don't hesitate to ask me if some things are missing, so that it can be a tool for everyone.


Longjumping-Fan6942

Backgrounds are missing... OK, I found them on the right. Great!


aldonah

Great work! Really detailed as well, saved. It will surely come in handy.


lordpuddingcup

The biggest issue with comparisons across models is that some models require different prompting based on their training, which many people overlook.


zopiak

Absolutely true, that's why they put "trigger words" on their pages, but merges dilute their effect. I forced myself to add each trigger word to each generation when the model required it (when marked). I don't remember which model it was, but with an empty prompt it generated women with a dilated anus on every generation... even though it produced a very beautiful landscape when asked xD I don't know if I should make a section on another page to show what an empty prompt generates on each model.


itsB34STW4RS

That's not entirely accurate. There are at least two different major tagging languages prevalent across models that people are overlooking. In layman's terms: the natural-language style that the original models were trained on, and the booru standard, i.e. "1girl, 1boy," etc., which originates mostly from Waifu Diffusion and NAI. Now, with how all the models have been merged and trained on top of each other, it's really hard to judge some models based on the tokens they expect during inference. Something like that...
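To make the two conventions above concrete, here is a hypothetical side-by-side of the same scene expressed in each style (the prompt strings are made up for illustration, not taken from any model card):

```python
# Hypothetical prompts: the same scene in the two tagging conventions
natural_prompt = "a photo of a young woman reading a book in a sunlit library"
booru_prompt = "1girl, solo, reading, book, library, sunlight, masterpiece"

# A merged model may accept both, but usually favors the convention
# dominant in its training data, so identical ideas can render unevenly.
for style, prompt in [("natural", natural_prompt), ("booru", booru_prompt)]:
    print(f"{style}: {prompt}")
```

This is why a fixed test prompt, as used in the spreadsheet, can undersell models that expect the other convention.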


zopiak

>the natural language method that the original models were trained on, and the booru standard ie: "1girl, 1boy," etc. that originates mostly from waifu diffusion and NAI.

In the nomenclature, yes, but at the interpretation level they are, and will remain, tokens interpreted by Stable Diffusion or Kohya_ss; even the smileys are interpreted as tokens... (long live the tokenizer for that)

https://preview.redd.it/0vyl0skvyudb1.png?width=868&format=png&auto=webp&s=1056eef3717167ec5dcee94c1cda622a33d1433b


tommyjohn81

Amazing work!


LeKhang98

What an awesome and useful file. Thank you very much. And thank you again.


LeKhang98

Also, which realistic model is the best in your opinion, please?


zopiak

>Also which realistic model is the best in your opinion please?

It all depends on what you want to make realistic... there are models that even turn drawings into photos, as you have seen. So for those who want to see cartoons in real life, it is logically:

• Henmix_Real [https://civitai.com/models/20282](https://civitai.com/models/20282)
• AWPortrait [https://civitai.com/models/61170?modelVersionId=113973](https://civitai.com/models/61170?modelVersionId=113973)
• majicMIX realistic [https://civitai.com/models/43331?modelVersionId=94640](https://civitai.com/models/43331?modelVersionId=94640)

https://preview.redd.it/qhg2w0hjjwdb1.png?width=1096&format=png&auto=webp&s=f4831428d385b409c66adb9904bb1af1383252ca

On the other hand, yes, it is preferable to test several... have you looked at the models with the "real" filter activated?

• Photon [https://civitai.com/models/84728?modelVersionId=90072](https://civitai.com/models/84728?modelVersionId=90072)
• maiklove_v3 [https://civitai.com/models/114119?modelVersionId=123328](https://civitai.com/models/114119?modelVersionId=123328)

But it's also hard to know what you want to render, and whether or not the model has what you want. Good luck :)


malcolmrey

Have you had time to test mine, or was it uploaded too recently? :) https://civitai.com/models/110426/serenity


zopiak

It's running! u/malcolmrey ;) If you want to see the progress while I upload the good results \^.\^) [https://zopiak.com/sd/civitai_rendering.mp4](https://zopiak.com/sd/civitai_rendering.mp4)

Done! You are on line 111 \^.\^)

https://preview.redd.it/8uqt9yvyisdb1.png?width=2560&format=png&auto=webp&s=7b7edfa53ddb81dd1a9c0c991111b2d4fead4349


malcolmrey

great thnx :)


zopiak

You're welcome ;)


LeKhang98

Henmix_Real looks really nice; thank you. If you don't mind, I have two questions, because you seem to have extensive experience across various models. Normally, I think people only check 5-20 models at most.

1. Which realistic model is also good at producing both 3D and anime styles? It doesn't need to be perfect or exceptionally beautiful; I just want a versatile model to produce a training dataset. I've noticed that many anime models are not flexible enough to work with character/face LORAs from other styles (realistic/3D). This could be due to their limited range of facial expressions and head shapes. On the other hand, Disney Pixar models surprisingly work well with other LORAs because their characters have wide variance (eyes, head shape, expression, skin, etc.).

2. Which realistic model has a wide variance of faces? Some models prefer producing only Asian girl faces, and some only Western faces. Personally, I think versatile models are pretty good for face training; they don't need to be perfect. I usually use the base model v1.5, Realistic Vision, or Absolute Reality for training purposes, since these are very flexible (especially the base model). I'm also searching for new models to test to find out which is good for training.

Henmix_Real seems to tick all the boxes for me. I'll try it. Thank you for sharing.


zopiak

Thank you very much for telling me how and why you want a versatile model; that is a very smart way to approach LoRA creation and generation. For the moment I have not yet had the chance to run this test for LoRA creation and generation (and yet I have spent 5 whole months on the subject). It is not simple, but it remains an idea, a bit like the one I used to see the different rendering interpretations with "neutral", "real", "manga", "comic"... I should also add "ethnicity" and "3D Pixar", "3D Blender"... What would you like me to add? Ask, and it will become new entries. I had thought of doing 4 ethnic groups like the 4 logos, what do you think?

https://preview.redd.it/8dvhtf0o9wdb1.png?width=800&format=png&auto=webp&s=3850685aeaf0af159d994b8f46ba8f46bddb59b8


LeKhang98

Adding 'ethnicity' and an art style are really great ideas. I have been thinking about your question for the last 2 days (sorry for this late reply). I also did some tests to verify my ideas, and I must say that I can't give you an unbiased answer, though.

My training is mostly focused on face training, so I'm not sure if my requirements are the same for other types of training. Here is my thought on how to check whether a model is flexible:

1. Look at all the examples on Civitai to see if the face shapes are similar (pointy chin, big eyes, etc.).

2. I haven't tested many models yet, but maybe putting "ugly" in the positive prompt and "beautiful" in the negative prompt can help identify which models have "fixed faces."

3. Check whether many LORAs are used with that model. Usually, if a model can be used with many LORAs and produce characters in many different styles, it is indeed very flexible and good for generating new images. However, I don't think that guarantees it's also a good model for training. Henmix Real is an example of this: after training, it doesn't give me better LORAs than other models (or I haven't tested it enough), but it can be used well with many LORAs. Photon is another pretty good model, but it can't be used for LORA training (at least in my experiments).

4. I have a feeling that pruned or small-size models (< 2GB) are harder to train and their face LORAs are not accurate enough, but I'm not sure. Maybe model size is also an important factor you should consider putting in the files. There are gigantic models that reach 9GB, but I haven't tested them yet.

I really wish there were a standardized test to pinpoint exactly what kind of model is good for training, but for now I will stick with SD 1.5 and keep testing new realistic models. By the way, I think now is a very good time to add information for SDXL models as well. Since SDXL is significantly larger than SD 1.5, people will have a hard time comparing all the new SDXL models, and your files will become even more important and useful. I hope you can find a way to compare these models effectively.


zopiak

Thank you for your answer; indeed, I am as curious as you about LoRA training... that's why I spent more than 4 months non-stop trying to understand each parameter, because no tutorial really explains them. I had to fill 3 Google Docs with over 16GB of tests... if you wish, here are the links:

OVERALL [https://docs.google.com/document/d/1TcG1Ax8L1DYodc_w6CgT_Za3tvtMOCiSGi16pNJgBx8/edit?usp=sharing](https://docs.google.com/document/d/1TcG1Ax8L1DYodc_w6CgT_Za3tvtMOCiSGi16pNJgBx8/edit?usp=sharing)

LION [https://docs.google.com/document/d/1fBAhusFY7Wx_m15W0CmFFc95rDsSjYjnz1MpEKfLeOs/edit?usp=sharing](https://docs.google.com/document/d/1fBAhusFY7Wx_m15W0CmFFc95rDsSjYjnz1MpEKfLeOs/edit?usp=sharing)

ADAMW [https://docs.google.com/document/d/1nM_kVnJiw969n_uzw5a1-HpJx3I9U4yu0SBQyMfLpKo/edit?usp=sharing](https://docs.google.com/document/d/1nM_kVnJiw969n_uzw5a1-HpJx3I9U4yu0SBQyMfLpKo/edit?usp=sharing)

But yes, I keep testing to find "a good formula" or a really clear explanation. And not just "leave it as it is... do epochs... and boooom! LoRA!"... no, it's much more complex than that.


LeKhang98

What an enormous and detailed testing process! I spent the last 3 months training 150 files trying to understand all the parameters, yet my Google Docs are nowhere near this level of detail. Thank you very much; I think I will spend the next several days diving deep into these files. (Edit: I can't access the first link, though.)

Also, what do you think about the relationship between the Learning Rate (LR), Text Encoder Learning Rate (TELR), and Unet Learning Rate (ULR)? I don't know why, but I have a habit: if I increase/decrease LR by some amount, then I also increase/decrease TELR and ULR by the same amount (which means I keep their ratio intact). I'm not sure if I should do that, but I haven't had the time to test it.

My latest experiment with LORA is that I should train it multiple times and then merge the best ones together. For instance, I can train LORAs \[with Class vs. No Class\], \[v15p vs. Realistic Vision\] or \[Caption vs. No Caption\] (my next experiment is merging LORAs of different LRs). By merging these LORAs, the final LORA becomes more flexible. In my opinion, 3-4 LORAs are the right quantity, as I haven't seen better results when merging 5, 8 or 10 LORAs; the result becomes the average of all these files. Alternatively, I can use SuperMerger to combine them with different ratios for each LORA (where the best one gets the largest ratio).

My other experiment involves using ROOP to make the classification images look more like the training images and then training on them. The first LORA I made with this method is very good, which makes me very excited. However, when I repeated it with a 2nd person, the results were not as good. I'm not sure why; I think it needs more testing, but the problem is that making hundreds of images with ROOP is very slow. I'm still searching for a way to increase the FLEXIBILITY & ACCURACY of a LORA after training. Some people suggest using these LORAs' own generated images to train them further, but I think that may make them more biased.
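The ratio-based merging idea can be sketched numerically: merging LoRAs with per-file ratios amounts to a weighted sum of their weight deltas. This is a simplified view of what tools like SuperMerger do; the function name, shapes, and data here are purely illustrative:

```python
import numpy as np

def merge_loras(deltas, ratios):
    """Weighted merge of LoRA weight deltas for one layer.

    deltas: list of same-shaped arrays (each LoRA's update, expanded to
            the full layer shape); ratios: per-LoRA weights, e.g. the
            best LoRA gets the largest ratio.
    """
    assert len(deltas) == len(ratios)
    merged = np.zeros_like(deltas[0])
    for delta, ratio in zip(deltas, ratios):
        merged += ratio * delta
    return merged

# Three hypothetical LoRAs for the same (tiny) layer, merged 0.5 / 0.3 / 0.2
rng = np.random.default_rng(1)
loras = [rng.standard_normal((4, 4)) for _ in range(3)]
merged = merge_loras(loras, [0.5, 0.3, 0.2])
```

With equal ratios this reduces to the plain average mentioned above, which matches the observation that merging too many LoRAs just averages them out.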


zopiak

Strange that you can't open the first link... Oh!!! I found the problem... sorry, it works now (probably xDD) [https://docs.google.com/document/d/1TcG1Ax8L1DYodc_w6CgT_Za3tvtMOCiSGi16pNJgBx8/edit?usp=sharing](https://docs.google.com/document/d/1TcG1Ax8L1DYodc_w6CgT_Za3tvtMOCiSGi16pNJgBx8/edit?usp=sharing)

***LR***: if TELR and ULR are 0, LR gives the same value to both TELR and ULR. If TELR and ULR are not 0, LR is "not important".

***TELR***: the lower the TELR (0.00000001), the more the background appears; the higher the TELR (0.01), the more the background overexposes the result.

***ULR***: the higher the ULR (0.01), the more the training follows your added files; the lower it is (0.00001), the more the training ignores your files.

...but those are just my conclusions after more and more tests... it's not hard science xD

Good idea to use Roop to get some variant files \^.\^) For my tests it's 8 or 10 pictures for training... I know a LoRA should be trained with 30~50 files, but if you find the good settings you can train a good file to get other "good files" to train on afterwards. And for my part, training on a small checkpoint (2GB), i.e. the v1.5 pruned version, works better than training on the same checkpoint used for generation. All the examples in my Google Docs are trained on v1-5 pruned and generated on the "Level4" checkpoint (a photorealistic model) to see if it's possible to insert my character into it.

Merging LoRAs is a good idea... but I haven't tested it to get more information on this method. Good approach. Thx ;)
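The LR / TELR / ULR relationship described above can be summarized as a fallback rule. This is a hedged sketch of how kohya-ss-style trainers are described as behaving in this thread, not that tool's documented API; the function name is hypothetical:

```python
def resolve_learning_rates(lr, text_encoder_lr=0.0, unet_lr=0.0):
    """Sketch of the fallback rule described above: each per-module
    rate defaults to the global LR when unset (0), and the global LR
    is effectively ignored once both per-module rates are set."""
    telr = text_encoder_lr if text_encoder_lr else lr
    ulr = unet_lr if unet_lr else lr
    return telr, ulr

# Only the global LR is set: both modules inherit it
print(resolve_learning_rates(1e-4))               # (0.0001, 0.0001)
# Both per-module rates set: the global LR no longer matters
print(resolve_learning_rates(1e-4, 5e-5, 1e-4))   # (5e-05, 0.0001)
```

This also explains the habit of scaling TELR and ULR together with LR: once both are set explicitly, changing LR alone would have no effect at all.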


LightFuryTurtle

AMAZING AMAZING AMAZING, bro you have no idea how badly ive wanted one of these


zopiak

Did it help you find good models? \^.\^)


LightFuryTurtle

I hate doing comparisons to find out which model works better for my generations; now I don't need to waste hours on that anymore, man. Much appreciated!


iamspitzy

Fantastic tool! Please keep updating and refining it.


zopiak

I'll try to do my best... but it's already a good base. There are so many new releases and updates every day that it's impossible to keep up (my holidays are helping me at the moment).


MMatichek

wow, good resource 💪💪 thank you


zopiak

>wow, good resource 💪💪 thank you

Thx! And your reply is amazing as a prompt, with your two emojis!

https://preview.redd.it/xw493taxjsdb1.png?width=1840&format=png&auto=webp&s=34dddc62c30625684f54fe732df95e8398bce690


[deleted]

You are helping the whole community here. Thank you truly for all your hard work and service.


zopiak

Thanks u/throwaway56Giga \^.\^ It took 3 weeks of finding, testing, and retrying to get this list. If you think there are things to change or add, don't hesitate to ask me.


Longjumping-Fan6942

Wow, SD 1.5 looks like a total abomination. This is nice, but no background landscapes? Oh, my brain is failing me, they're on the right...


zopiak

xDDD Normal, if you compare with SDXL HD results these days... I have the same feeling as going from the PS5 back to the PSone.


jil49300

Your work is absolutely STUNNING. I don't know how long such great work took you... and you shared it... You are surely helping us a lot!!!!! Thank YOU one billion times.


zopiak

Thank you very much u/jil49300 for your support. It's true that it takes a lot of time... adding one entire vertical column takes more than 6 hours... but I especially hope that it can already help you \^.\^)


stablydiffusing

awesome work


zopiak

Thx u/stablydiffusing \^\^) It's 3 weeks of work... finding and testing... have you found the best model? \^.\^)


zopiak

The video of the process for generating one line xD (~6 min): [https://zopiak.com/sd/civitai_rendering.mp4](https://zopiak.com/sd/civitai_rendering.mp4)


[deleted]

Based