KSampler (Efficient): notes and excerpts from Reddit

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Anyone ready to help or give support is welcome. I am revamping and recycling all the nodes; some are still under edit and will be updated in the coming week.

Jan 17, 2025 · KSampler settings. This works for both Schnell and Dev.

HighRes-Fix: a node that gives the user the ability to upscale KSampler results through a variety of different methods.

Out of interest, is there a reason you're effectively denoising the image three times? Once in your first simple KSampler, then twice in the Efficient KSampler: the node itself will do one denoising pass first, and then at least one more due to the use of the hires fix. I can get it to show a live preview in the KSampler.

The Impact Pack does this weird thing where it tries to git clone (!) another repo during startup.

But then I hit the KSampler, and… nothing. It sits at 0% indefinitely (I even left it overnight). KSampler and KSampler Advanced are both giving me errors I had not seen before this morning. (Reddit also seems to do Save As to .webp.)

You can prove this to yourself by taking your positive and negative prompts, switching them, and running that through a KSampler with the negative of whatever your initial CFG was.

Do you have ComfyUI Manager installed? If so, it will be in the main menu when you open ComfyUI in your browser. If not, follow the instructions on the ComfyUI Manager GitHub page to install it.

If you want an alternative that works, try the SDXL nodes from the Efficiency Nodes pack or TinyTerraNodes.

Efficient Loader: a combination of common initialization nodes.

TL;DR: instead of relying on the "batch size" feature, send a sequential list of seeds.
Now if all is left the same, ksampler2 will overwrite the latent image from ksampler1 with its own seed, as it will assume it is receiving a blank latent, unless you tell it otherwise. Can you please shed some light on this? Thanks.

You should find that the iterative mixing path generates outputs with richer background details, and it should be more faithful to the original low-resolution image.

For example, I'm doing an img2img with a denoise of 0.5 on 20 steps. When I run the t2i models, I see no effect, as if ControlNet isn't working at all.

On any sampler node (FaceDetailer / KSampler) where I change the scheduler from a widget to an input, it won't let me attach the scheduler selector to it. I set it to either fixed or random before converting; it doesn't work in either case.

The Efficiency Nodes updates and new improvements are now working, and you can check them in the forked branch repository.

Oct 27, 2023 · Any updates on moving this to the dev branch? Of the 10 or so posting here about the issue, probably hundreds are having it and no longer using the nodes :/

I might be missing something, but 3 steps didn't work for me; I got blurry, unresolved images, as normally expected.
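A toy sketch of the point about the second sampler overwriting the first (my own linear illustration, not the real diffusion math): the fraction of the incoming latent that survives a second pass scales with how low its denoise is set.

```python
def blend(incoming_latent: float, fresh_noise: float, denoise: float) -> float:
    # Toy model: denoise controls how much of the incoming latent is replaced.
    return (1.0 - denoise) * incoming_latent + denoise * fresh_noise

assert blend(1.0, 0.0, 1.0) == 0.0  # denoise 1.0: first sampler's work fully discarded
assert blend(1.0, 0.0, 0.5) == 0.5  # denoise 0.5: half the structure survives
assert blend(1.0, 0.0, 0.0) == 1.0  # denoise 0.0: nothing changes
```

This is why a second KSampler left at its default denoise of 1.0 behaves as if it received a blank latent.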
Using the same seed with identical settings produces the exact same image every time. In my opinion, this approach is the "proper" way to generate a "batch" of images that can be individually reproducible.

Similarly, I think the VAE is also different, such that you can't just pass it through. Your first KSampler says to denoise this image with a single step.

By posting it here I hope to find a solution that might… I was using the Efficiency nodes, and they allow the images generated during the step process to be viewed.

Currently I have the Lora Stacker from the Efficiency nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now.

KSampler (Efficient) / AnimateDiff broken after "update all" in Manager.

Is there a difference in how these official ControlNet LoRA models are created vs the ControlLoraSave in Comfy? I've been testing different ranks derived from the diffusers SDXL ControlNet depth model, and while the different-rank LoRAs follow a predictable trend of losing accuracy with fewer ranks, all of the derived LoRA models, even up to rank 512, are substantially different from the full model.

XY Plot: a node that allows users to specify parameters for the Efficiency KSamplers to plot on a grid.

Using the default KSampler I can use the Select From Batch node to pick one image from a batch to generate, but I can't seem to find a way to do that with the Efficient Loader, since it lacks a latent input to attach to.
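The reproducibility claim above comes down to seeded pseudo-randomness. A minimal sketch, using Python's `random` module as a stand-in for the sampler's torch noise generator:

```python
import random

def toy_noise(seed: int, n: int = 8) -> list:
    # Seeding the generator makes the "noise" sequence fully deterministic:
    # the same seed always yields the same starting pattern.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

assert toy_noise(42) == toy_noise(42)  # same seed: identical noise pattern
assert toy_noise(42) != toy_noise(43)  # different seed: different pattern
```

Since the initial latent noise is the only random input, fixing the seed (with all other settings identical) fixes the output image.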
Normally this would just be a git submodule.

In your case, you may want to stop your first KSampler at step 14 and continue in a new one from step 15 to 20 to finish the picture. The key is that denoising depends on which step it is, so we cannot separate a 20-step process into 20 one-step processes.

I can only get the seed of the KSampler to randomize once per queued generation. When doing batches or repeated processes during a single queued generation, how can I make the seed change with each batched iteration?

You can try the ModelMergeSimple node: it lets you put in two models and then feed the merged result into a single KSampler.

Also, I think the Comfy devs need to figure out some sort of unit testing. Maybe we as a group could create a few templates with the Efficiency pack; then, before changes are pushed out, they could be run as a test to see what breaks.

However, it's impossible for me to make it show in other nodes such as Ultimate SD Upscale. If you do so, the entire Refiner section goes away, and so do the switches that you need to configure to use the Refiner. In general the aesthetics are very plain, far from what the chosen model would produce with another KSampler. Thanks, I'll replicate this when I get home tonight.

A deprecation warning that shows up in the startup log: "To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() / return self.fget.__get__(instance, owner)()"
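What a simple two-model merge does under the hood can be sketched as a per-tensor linear interpolation between checkpoints. This is my own illustration with toy weight dicts; the exact ratio convention (here, `ratio=1.0` keeps model A entirely) is an assumption, not the node's documented behavior.

```python
def merge_simple(model_a: dict, model_b: dict, ratio: float) -> dict:
    # Per-tensor linear interpolation between two checkpoints' weights.
    return {k: ratio * model_a[k] + (1.0 - ratio) * model_b[k] for k in model_a}

a = {"layer.weight": 2.0}
b = {"layer.weight": 4.0}
assert merge_simple(a, b, 1.0) == {"layer.weight": 2.0}  # pure model A
assert merge_simple(a, b, 0.5) == {"layer.weight": 3.0}  # 50/50 blend
```

The merged weights then feed a single KSampler, so you sample once rather than running two passes.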
I have never used it myself, but it's worth experimenting with. For KSamplers, you can just pass the latent output of one KSampler into another; just make sure to set the denoise lower in the second one. You can use an advanced KSampler and the SDXL refiner CLIPTextEncode, or just use custom loaders/samplers that have the functionality built in, such as the Efficiency Nodes pack.

How do I debug this?

[w/NOTE: This node was originally created by LucianoCirino, but the original repository is no longer maintained and has been forked by a new maintainer.]

Oct 4, 2024 · KSampler (Efficient): a modded KSampler with the ability to preview/output images and run scripts.

I did have some OOM errors before, but not anymore.

So I pretty much hacked it into place in the middle of the SDXL workflow, just as a test, and while the Efficient Loader and KSampler nodes are really convenient, to the point that I'll probably make my own SDXL workflow using them, I still can't figure out how to make it do what I want.

Well, I got into img2img last week, which made me switch back to the regular KSampler for simplified denoising, and then I got into Turbo just to see how fast it was. It changes the image too much and often adds mutations.

The new update to Efficiency added a bunch of new nodes for XY plotting, and you can add inputs on the fly.

Oct 28, 2023 · The error reads: "Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32. query.dtype: torch.float1…"

Oh my goodness, I've been wrestling with this for a few days. I even tried to post this question with the exact same copy-and-paste text, but Reddit decided to block my posts! I can't even delete the account; I had to make a new one. The Efficiency nodes are not working anymore even though I have them installed.
The sampler also now has a new option for seeds, which is a nice feature.

When you run a normal KSampler with 20 steps, you ask the sampler to denoise the image, from a noisy image to what it thinks should be the clean image, in 20 steps. And you can have one checkpoint do the first 5 steps, then swap checkpoints for, say, the next 5 steps, then another checkpoint for the 5 after that. There is nothing special about it; the KSampler algorithms are the same across most (if not all) nodes.

I will share a workflow soon from a new custom node that implements the Iterative Mixing KSampler. Both of them provide a lot of extra detail.

The simplest configuration for a working XY Plot uses the new Efficient Loader and Efficient KSampler nodes, part of the Efficiency Node Suite. XY Plotter Nodes: a collection of nodes that allows users to specify parameters for the KSampler (Efficient) to plot on a grid. Image Overlay: a node that allows for flexible image overlaying.

I've used these workflows for months without issue, but now any time I import one of my previous workflows using previous outputs, they fail on the scheduler inputs.

Dec 16, 2024 · Given that end_at_step >= steps, a KSampler Advanced node will denoise a latent in exactly the same way a KSampler node would with a denoise setting of: denoise = (steps - start_at_step) / steps.

When the KSampler receives the empty latent image, it uses this seed number to create a specific pattern of noise.

I did a plot of all the samplers and schedulers as a test at 50 steps.
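The equivalence formula above can be checked with a few lines of Python:

```python
def equivalent_denoise(steps: int, start_at_step: int) -> float:
    # KSampler Advanced (with end_at_step >= steps) behaves like a plain
    # KSampler that only runs the last (steps - start_at_step) steps.
    return (steps - start_at_step) / steps

assert equivalent_denoise(20, 0) == 1.0    # full run: denoise 1.0
assert equivalent_denoise(20, 10) == 0.5   # skip half: img2img-like denoise 0.5
assert equivalent_denoise(20, 15) == 0.25  # light refinement pass
```

So a plain KSampler's denoise slider and an advanced sampler's start_at_step are two views of the same thing: how many of the scheduled steps are actually executed.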
The results are a bit different, but I would not say they are better, just a bit different.

SD1.5 and SDXL use different conditioners; you can't just pass one to the other, as far as I'm aware.

GPU is at 100%, yet I know it's not doing anything, because the fan is not spinning (any image generation causes it to hit max right away).

The KSampler (Efficient) then adds these integers to the current seed, resulting in image outputs for seed+0, seed+1, and seed+2.

It only appears if I do the following every single time I want to generate an image: click "Queue Prompt" -> click "Manager" -> click "Preview method" -> change it -> click it again and change it back.

I was running some tests last night with SD1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

What those nodes are doing is inverting the mask to stitch the rest of the image back into the result from the sampler. This will keep the shape of the swapped face and increase the resolution of the face. That, plus how complicated the advanced KSampler is, made latent work too frustrating.

Its inherent efficiency makes it an ideal choice for applications requiring quick turnaround, or for running ComfyUI online in cloud-based setups that use services such as ComfyAI Run.

Doing it this way makes reproducible builds a huge pain; I had to add an extra step in my build process to manually clone it to a known good commit hash just to keep that node pack from messing with my source files.
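The seed-offset behavior described above (seed+0, seed+1, seed+2) is simple enough to sketch directly:

```python
def batch_seeds(seed: int, batch_size: int) -> list:
    # Each image in the "batch" gets the base seed plus its index,
    # so any single image can be reproduced later from seed + i alone.
    return [seed + i for i in range(batch_size)]

assert batch_seeds(1000, 3) == [1000, 1001, 1002]
```

This is what makes sequential seeds preferable to a true latent batch for reproducibility: to regenerate image #2 you only need seed 1001, not the whole batch.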
To upscale 4x well with the Iterative Mixing KSampler node, do this: generate your initial image at 512x512 (for SD1.5, or 1024 for XL models); use NNLatentUpscale to double the latent resolution; run this through the Iterative Mixing KSampler at full strength (1.0 denoise); then KSample the outputs at a lower denoise. In short: use the Iterative Mixing KSampler to noise up the 2x latent before passing it to a few steps of refinement in a regular KSampler.

Here's how it works: I'm not releasing this workflow because there are a million issues I'd have to fix, and I don't have the time right now. Also, if this is new and exciting to you, feel free to post.

The Efficient Loader has the checkpoint for the initial image being made in the KSampler. The "artifacts" you get in your example are from the double generation: what you do there is generate a new image on top of the existing one, not continue building on the existing one. So you can img2img, but with a denoise of 1, which isn't really helpful in most cases.

Is the KSampler the first thing to go green? Definitely no nodes before it that quickly flick green? Is the seed number shown in rgthree the same each time? Is the generated image identical? Any clues in the command prompt window?

The nodes on top for the mask shenanigans are necessary for now; the Efficient KSampler seems to ignore the mask for the VAE part.

First KSampler: steps 14, cfg 8.0, dpmpp_sde_gpu, karras, denoise 1.0.

Well, this KSampler node doesn't have a "denoise" factor. Set up a quick workflow to do the first part of the denoising process on the base model, but instead of finishing it, stop early and pass the noisy result on to the refiner to finish the process.

I used the ControlNet extension and the Realistic Vision checkpoint, and it keeps giving me this error: "AttributeError: 'NoneType' object has no…"

The Tiled KSampler forces the generation to produce a seamless tile, but it changes the aesthetics considerably.

So is there another way to view the images being generated through the steps?
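The resolution arithmetic behind that recipe can be sketched as follows (the factor of 8 between pixel and latent space is standard for SD VAEs; everything else is just the doubling described above):

```python
LATENT_FACTOR = 8  # SD VAEs map 8x8 pixel blocks to one latent cell

def latent_side(pixels: int) -> int:
    return pixels // LATENT_FACTOR

base = 512                              # initial SD1.5 generation
assert latent_side(base) == 64          # 512px image is a 64x64 latent
doubled = latent_side(base) * 2         # NNLatentUpscale x2
assert doubled == 128
assert doubled * LATENT_FACTOR == 1024  # decoded size after one doubling
```

Two doublings of the latent side (with a mixing/refinement pass after each) is what gets you from 512 to the full 4x target of 2048.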
Side question: it seems ComfyUI can't do inpainting?

SDXL most definitely doesn't work with the old ControlNet. Maybe it will get fixed later on; it works fine with the mask nodes.

Actually, on second thought, I don't even know if the KSampler (Efficient) seed should stay at -1, because I always use the rgthree seed node, convert the KSampler seed to an input, and hook it up, and the rgthree seed node sometimes has that issue where the random seed bugs out and switches to fixed every queue.

The output of the node goes to the positive input on the KSampler.

As such, you should use the advanced KSampler to set a starting step higher than 0 (ideally around the same number where the previous KSampler ended). It seems KSampler Advanced manages its own seed and is not affected even when you convert SEED/NOISE SEED to an input.

CFG must be set to 1 in the KSampler. The performance timings for the KSampler and the Guided Sampler seem to be the same.

In theory nodes can be 'colorized' in levels, which would then enable parallelism, but the litegraph library doesn't colorize that way.

I noticed that the Efficient KSampler entries were out of whack when I first loaded the workflow (my nodes might be slightly newer), but aside from choosing a different VAE and model, I don't think I changed anything.

I only have 8GB VRAM, but it's sitting around 7.
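The step handoff described above can be written down as two settings dicts. The keys mirror the KSampler Advanced widgets; treating end_at_step as the exact point where the next sampler picks up is my assumption.

```python
# Split one 20-step generation across two KSampler Advanced nodes.
first = {"steps": 20, "start_at_step": 0, "end_at_step": 14,
         "add_noise": "enable", "return_with_leftover_noise": "enable"}
second = {"steps": 20, "start_at_step": 14, "end_at_step": 20,
          "add_noise": "disable", "return_with_leftover_noise": "disable"}

# The second sampler must pick up exactly where the first stopped,
# and together they must cover the full schedule with no gap or overlap.
assert first["end_at_step"] == second["start_at_step"]
assert second["end_at_step"] == second["steps"]
```

Both nodes keep the same total `steps` so they share one noise schedule; only the executed range differs.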
KSampler (Efficient), HiRes Fix, ReActor faceswap, Pretext (prompt box), ControlNet Stacker: I suspect the Efficiency node is the main issue, as I read that it may control other nodes, which seem to be failing to update for me. But that doesn't seem to be the case.

Are you saying you want to use a different checkpoint for the upscale than the one you use to make the first image? If so, you can follow the high-res example from the GitHub.

The KSampler (Efficient) node is designed to let users sample with minimal latency and computational demands. It comes out of the box with popular neural network latent upscalers such as Ttl's ComfyUi_NNLatentUpscale and City96's SD-Latent-Upscaler.

Startup log: "return self.fget.__get__(instance, owner)() ----- Efficient Loader Models Cache: Ckpt: [1] dreamshaper_8 | Lora: [1] base_ckpt: dreamshaper_8, lora(mod,clip): epi_noiseoffset2(1.0,1.0) | KSampler(Efficient) Warning: No vae input detected, proceeding as if vae…"

Do you have any suggestions on this? Extension: Efficiency Nodes for ComfyUI Version 2.0+. A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.
It would be cool to be able to have your KSampler run, say, 30 steps.

There is also Kohya's HiresFix node, which provides a way to generate 1024x1024 images (using SD1.5 models) without weird artifacts and extra limbs.

Tried the Fooocus KSampler using the same prompt, same number of steps, same seed, and same samplers as my usual workflow.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a LoRA stacker node. The issues are as follows.

Once a low-res face swap is performed, you can pass the image through a lineart ControlNet and a KSampler with a low denoise.

Traceback fragment: "in common_ksampler: samples = comfy.sample.sample(model, noise, …"

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.

Nodes that have failed to load will show as red on the graph: KSampler (Efficient), GMFSS Fortuna VFI, ConditioningSetMaskAndCombine, GrowMaskWithBlur, INTConstant.

Hi, I understand that the Efficiency node pack's XY plot functionality enables automatic variation and testing of parameters such as "CFG", "Seeds", and "Checkpoints" within the KSampler (Efficient).

A seed is just a number, but it plays a crucial role in image generation.
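A minimal sketch of the overlapping-tile idea behind Ultimate SD Upscale (my own illustration of the covering logic, not the extension's actual code):

```python
def tile_origins(size: int, tile: int = 512, overlap: int = 64) -> list:
    # Place overlapping tiles left to right; nudge the last tile so it
    # always reaches the image edge.
    stride = tile - overlap
    xs = list(range(0, max(size - tile, 0) + 1, stride))
    if xs[-1] != size - tile:
        xs.append(size - tile)
    return xs

origins = tile_origins(1024)
assert origins[0] == 0 and origins[-1] == 1024 - 512

# Every pixel column of the upscaled image is covered by at least one tile.
covered = set()
for x in origins:
    covered.update(range(x, x + 512))
assert covered == set(range(1024))
```

Each tile is then re-sampled by SD at a workable resolution, and the overlaps are blended to hide the seams.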
I'm not seeing many others with this problem on Discord or Reddit, so I'm a bit lost! All of them work as expected on KSampler nodes, but not at all on KSampler Advanced, which should be used for the SDXL workflow.

This workflow can be greatly reduced in size by using the new Efficiency Loader SDXL and Efficiency KSampler SDXL nodes, by LucianoCirino, which also support a ControlNet Stack as input.

Thanks for the tips on Comfy! I'm enjoying it a lot so far. Just doing some refinement in a regular KSampler.