Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch. Would any of you knowledgeable souls be able to guide me on how to achieve this?

You can look at the EXIF data to get the seed used. When ComfyUI loads the image, I have all the parameters ready for the next round of adjustments.

Auto queue on, Batch from Latent Image set to 1, batch size on the manager set to 1. Comfy will process the images one by one.

Thank you for the quick answer, but I fear I formulated my question badly, so let me explain again. Is there a way for a node to process only the first batch element with the first element, the second with the second, and so on? For reference: a node processing a batch of 3 elements with another batch of 3 elements currently gives 9 output pictures.

I also updated everything using the Manager and the "Update All" button, and also tried it with the "./update/update_comfyui" batch file; it said that everything is up to date. I'm using these custom nodes: controlnet_aux, FizzNodes, Advanced-ControlNet, AnimateDiff-Evolved, Manager. And I tried to reinstall FizzNodes using the Manager.

This causes my steps to take up a lot of RAM, eventually leading to the process getting killed.

Replace ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py with the following code (by the way, you can and should, if you understand Python, do a git diff inside ComfyUI-VideoHelperSuite to review what's changed).

Batch index counts from 0 and is used to select a target in your batched images. Length defines the amount of images after the target to send ahead.

But if you saved one of the stills/frames using a Save Image node, or even if you saved a generated ControlNet image using Save Image, it would transport it over.

Install the ComfyUI Manager, then after you restart ComfyUI, click on the Manager button and for Preview method select 'TAESD (slow)'.

In 1111, using image to image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames they will be associated with the image when batch processing.

Imo it'd be great to be able to vary a parameter from a low to a high number when running a batch, to quickly test and compare the parameter's impact, or to have different wildcards chosen for each batch image.

This functionality is useful for operations that require multiple instances of the same image, such as batch processing or data augmentation.

Thanks for the n_repeat_batch_size tip.

It's significantly faster. Needless to say, everything other than OpenBLAS uses the GPU, so it essentially works as GPU acceleration of the prompt ingestion process. I tried it once in Kobold and it sped up the evaluation by a lot.

Disclaimer: I love ComfyUI for how it effortlessly optimizes the backend and keeps me out of that shit.

I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.

The script uses a prompt JSON (not a workflow JSON) to send.
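As a rough illustration of that kind of script, here is a minimal sketch that queues one prompt over the ComfyUI HTTP API. It assumes a local instance on the default port and a graph exported with "Save (API Format)"; the file name and the patched node id are placeholders for your own workflow.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"   # default local ComfyUI API endpoint

# Load a graph exported with "Save (API Format)" -- the file name is a placeholder.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    prompt = json.load(f)

# Optionally patch a value before queuing; node id "6" and the "text" input
# are placeholders that depend on your own graph.
# prompt["6"]["inputs"]["text"] = "a photo of a red car"

data = json.dumps({"prompt": prompt}).encode("utf-8")
req = urllib.request.Request(COMFY_URL, data=data,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))       # the server replies with the queued prompt_id
```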
Are there any custom nodes (post-process, samplers) that would help to target some kind of color palette, color coherence or color match? I already tried Ultimate SD without upscale, and tile ControlNet, without success.

If you're starting with a quick sampled latent (like your prediffusion group), yeah, you need to decode that latent to pixel space before the journey to the image processing group, but don't VAE needlessly!

I switched from A1111 to ComfyUI and I couldn't find a workflow that allows me to batch img2img (from a folder) and use ControlNets on them.

My question is: is it possible to use two GPUs in the same instance for parallel processing? To clarify, I don't mean using both GPUs for the same generation, but using both GPUs for parallel generations.

It's 100 images! It just saves time, I guess.

Is there a way to load each image in a video (or a batch) one at a time to save memory?

The batch count is under extra options in the menu.

Input your batched latent and VAE.

Change that batch size from 512 to 1024 and see what happens.

Haven't been able to find any batch image videos yet (I could just be missing them).

Next do the same thing for Dir2/Subdir2, and so on, in a batch.

Combining them into one might result in a loss of quality for the other images.

Looking for a way that would let me process multiple ControlNet openpose models as a batch within img2img. Currently, for gif creation from img2img, I've been opening the openpose files one by one and generating, repeating this process until the last openpose model.

There are apps and nodes which can read in generation data, but they fail for complex ComfyUI node setups.

Is there a node that allows processing of a list of prompts, or of text files containing one prompt per line, or better still a node that would allow processing of parameter sets in CSV or a similar spreadsheet format, one parameter set per row, so I can design 100K worth of prompts in Excel and let ComfyUI work through them?

I tried renaming every file in my queue to "queue_[randomString]" and filtering by "queue_", but it didn't work and I didn't see the correct results; it seems they didn't get updated unless I first manually load one of them.

ComfyUI has several types of sequential processing, but by using the "list" format, you can process them one by one.

All images in the batch will have consistent dimensions, matching those of image1. They will then behave the same as if you had generated a batch of images using a KSampler.

Finally, someone adds to ComfyUI what should have already been there! I know, I know, learning & experimenting.

xy plots are great to visualize small quality improvements in LoRAs or overcooking signs. Sometimes you need to do 10x10 (weights x epochs).

Working on a basic animation tutorial, but wanna get the workflow right so it's not terrible lol.

Where is the "image" input searching images from?
Because when I left click it I see a huge list of images.

For example, this is mine: it can pretty much be scaled to whatever batch size by repetition.

When running the batch, the images will be saved in the output folder, and in the next batch it will read the image generated in the previous batch.

I think Chainner and ComfyUI would be worth learning to make such custom workflows.

You can also specifically save the workflow from the floating ComfyUI menu.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads image 001 from the folder, and for the next gen it grabs image 002 from the same folder?

(The Image Batch to Image List node was very useful.) These functions can be flexibly enabled/disabled by connecting different nodes.

I have a LIST of 6 items, and each item has a DIFFERENT number of images in its BATCH. After processing we have the B matrix: a flat LIST where each item has ONE picture. I need C to again be the same as A: the same number of list items, and the same number of images in the batch inside each list item.

So I wrote an extension to acquire the prompt.

So segmenting the car with SAM & DINO, inverting the mask and putting the car in the scene got some great compositions. The only issue I feel with some of them is that, while the lighting works, the colours between the inpainted car and the overall scene aren't matching up.

I believe he does; the seed is fixed, so ComfyUI skips the processes that have already executed.

Doing 4 images in batch sizes is faster than doing four queues of 1.

The screenshot shows what I'm trying to do.

Thanks for your help. What I'd like the process to ultimately look like: I have a folder of 100 images at "C:\input_images" with names like "_frame_01_image.jpg". In ComfyUI I load that folder into "Load Image Batch from Dir (Inspire)", do a little Comfy magic, and the output of the VAE Decode ends up at C:\output_images\_frame_01\_frame_01_image001.jpg.

I would love to know if there is any way to process a folder of images with a list of pre-created prompts, one for each image? I am currently using the webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with the webui, so I would like to know.

By applying both a prompt to improve detail and an increase in resolution (indicated as a percentage, for example 200% or 300%).

Hello, I connected batches to a node (Latent Composite), and the node processes each element of one batch with each element of the other batch.

I prefer the load video node from the ComfyUI-N-Nodes pack because it has a built-in 'batch size' function.

A batch will process the batch as one step, so if a batch of 1 takes 10 seconds, a batch of 10 will take 100 seconds.

I would like to ask about saving files in batches.

outputs[unique_id] = (counter, output_process)  # send frames to ffmpeg in batches: for batch in image_generator(images): output_process.send(batch.tobytes())
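Those code fragments come from a generator/coroutine pattern for streaming frame batches to an ffmpeg process. Here is a self-contained sketch of that pattern (not the actual VideoHelperSuite source; the ffmpeg flags and batch size are just illustrative):

```python
import subprocess

def ffmpeg_writer(path, width, height, fps=24):
    """Coroutine that receives raw RGB24 frame bytes and pipes them to ffmpeg."""
    proc = subprocess.Popen(
        ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
         "-s", f"{width}x{height}", "-r", str(fps), "-i", "-", path],
        stdin=subprocess.PIPE,
    )
    try:
        while True:
            chunk = yield              # receives bytes via .send()
            proc.stdin.write(chunk)
    except GeneratorExit:              # raised when the caller .close()s the writer
        proc.stdin.close()
        proc.wait()

def image_generator(images, batch_size=8):
    """Yield the frame stack in small batches so memory use stays bounded."""
    for i in range(0, len(images), batch_size):
        yield images[i:i + batch_size]

# Usage sketch (frames: float array/tensor in [0, 1], shape [N, H, W, 3]):
# writer = ffmpeg_writer("out.mp4", W, H)
# writer.send(None)                    # advance the coroutine to its first yield
# for batch in image_generator(frames):
#     writer.send((batch * 255).astype("uint8").tobytes())
# writer.close()
```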
Hello! I have created a workflow that uses an IP-Adapter plus a Shuffle ControlNet to create variations on an input mask. The first step is the generation of a base image; then the base image gets piped through a second KSampler, where another graphic overlay layer is inpainted using the mask generated with ControlNet Shuffle.

I believe there could be a node, called 'cycle' something or 'repeat', that is meant for batches and should give you the option to cycle through all prompts or images fed to the node. I've used it for prompts and plan to use it for images; can you keep me updated on the results? I searched on Reddit and found this information, and I do believe it is called something like 'cycle', so you could search for it.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file.

Did you know that using the wrong batch settings in ComfyUI could slow down your workflow by up to 400%? In this guide, we'll explore everything you need to know about batch processing in ComfyUI, from basic concepts to advanced workflow optimization techniques.

It's not so simple. Is there any way around that? Pressing go or cancel while a process isn't running causes that option to automatically be used on the next queue. Pressing cancel doesn't allow other paths to keep moving forward; it cancels the entire process altogether.

The queue will go through the workflow one at a time as it clears the queue, and you can cancel items in the queue or add to it.

The output is a single batch of images concatenated along the batch dimension.

Basically this allows you to use the dialogue file output by TES5Edit or the Creation Kit to generate audio in batches.

It worked way faster once I decreased it to 1. I hope this is helpful.

I've been googling around for a couple of hours and I haven't found a great solution for this. It works beautifully to select images from a batch, but only if I have everything enabled when I first run the workflow.

So if you don't have a GPU, you use OpenBLAS, which is the default option for KoboldCPP.

The idea is to load up all the images from Dir1's root, process them in ComfyUI, then save the output in Subdir1.

The batch image node takes single images and makes them into a batch.

Even better than the JSON workflow files, images produced by ComfyUI have that JSON info embedded within each image.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation.

If you want to grow your userbase, make your app USER FRIENDLY.

Batch works as you say. Look for batch_size in the Empty Latent Image node.
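For intuition, an empty latent is just a zero tensor whose first dimension is that batch_size, so every downstream node (KSampler, VAE Decode, and so on) sees all N latents in a single pass. A rough sketch, not the exact ComfyUI source:

```python
import torch

def empty_latent(width: int, height: int, batch_size: int = 1) -> torch.Tensor:
    # SD-style latents are 1/8 of the pixel resolution with 4 channels;
    # batch_size becomes the first dimension of the tensor.
    return torch.zeros([batch_size, 4, height // 8, width // 8])

latents = empty_latent(1024, 1024, batch_size=4)
print(latents.shape)   # torch.Size([4, 4, 128, 128])
```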
To create a seamless workflow in ComfyUI that can handle rendering any image and produce a clean mask (with accurate hair details) for compositing onto any background, you will need to use nodes designed for high-quality image processing and precise masking.

Hello, my goal is to run a workflow using a specific set of checkpoints (5-10 out of the dozens in the folder). I realize I can 'pull' the checkpoint to a primitive node in order to set the process to "incremental" if I wanted to run every single checkpoint within said folder, but I'm really hoping there's a way to 'highlight' a specific few checkpoints in order to run batches with just those.

With Automatic1111, it does seem like there are more built-in tools that are perhaps helping process the image, which may not be on for ComfyUI?

I have a 2 minute video that I am processing with IPA and CN, but I have to batch process the video as it is too long for a single run. So I create a batch size of 48 (an AnimateDiff limit) and batch my video, which works fine, except every 48 frames the video radically changes, so my 2 minute video becomes 30 four-second (@12fps) videos stitched together with no continuity.

Go to the img2img -> batch tab. Activate ControlNet (don't load a picture in ControlNet, as this makes it reuse that same image every time). Set the prompt & parameters, and the input & output folders. Set denoising to 1 if you only want ControlNet to influence the result.

Hey guys, so I've been doing some inpainting, putting a car into other scenes using masked inpainting.

Once ComfyUI gets to the choosing, it continues the process with whatever new computations need to be done. In this case, if you enter 4 in the Latent Selector, it continues computing the process with the 4th image in the batch.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Really need a pixel motion detection component for sampling.

Every time you run the .bat file, it will load the arguments.

Batch processing in ComfyUI is already possible, so it's essentially just a workflow that takes a reference image and "flavors" the source image.

How to load two different images into a workflow for batch processing (WAS Node Suite): using ComfyUI I've made a workflow for face inpainting, but I want to use my own masks instead of generating them in Comfy with models based on the images themselves.

I'm currently troubleshooting the occasional system crash when processing big batches via the WAS node suite (but I had a similar issue under A1111 too)… I use Load Images [Deprecated] with index load cap 0 and start index 0, then an Ultimate Upscale, then an Image Save.

After 24, the Integer wraps back to 1. The desired behavior: ComfyUI sends a different integer to the switch every time auto queue fires, in sequence, starting with 1 and ending at 24.

How could I make it so that each new image gets slapped on top of a base image, meaning an additive collage? Right now I have the Paste Image node connected to the end of the pipeline, but instead of outputting the base image with each new generation slapped on top of it additively, it creates a new collage for each generated image with only 2 layers, instead of a layer for each loop executed.

Batch Count (4 images): total generation time of 62.63 seconds. Batch Size (4 images): total generation time of 16.20 seconds. While batch size appears faster at first glance, here's what's actually happening: when we try to process multiple images at once using batch size, it's like trying to stuff too many cookie trays in a small oven.

Getting latent batch sizes to each have different prompts can be a pain, but getting an actual queue batch to have different prompts should be no issue.

If you give me a workflow, I can cook something up quickly.

This batch will include image1 and any of the optional images (image2 to image6) that were provided.

I'm used to the A1111 img2img batch for my AI videos. Any help please?

A seed defines a value for a deterministic "random" process, which means it can be repeated in the future.

If I set up 4 color options as part of a prompt like "{red|yellow|pink|blue}" and set the batch size to 4, Comfy spits out 4 "red" images. Comfy has wildcards, which are great, but they're only chosen once when running a batch (so all the images have the same wildcard).
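One way around that is to expand the {a|b|c} syntax yourself, once per queued item rather than once per batch, for example in an API script. A minimal sketch that is not tied to any particular wildcard node:

```python
import random
import re

def expand(prompt: str) -> str:
    # Replace every {a|b|c} group with one randomly chosen option.
    return re.sub(r"\{([^{}]+)\}",
                  lambda m: random.choice(m.group(1).split("|")),
                  prompt)

for _ in range(4):
    # Re-rolled for every queued item, so each one can get a different color.
    print(expand("a {red|yellow|pink|blue} car"))
```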
I am trying to batch load some images from different directories, process them, then place the output in separate directories too. Like this: Directory 1: images; Subdirectory1.

The difference from the original Video Helper Suite node is that the original loads all images as one batch.

And replace ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py with the following code: nodes.py.

A way to do batch processing in the GUI itself is planned, but right now you can do it with scripts: https://github.com/comfyanonymous/ComfyUI/blob/master/script_examples/basic_api_example.py

Will try to make it open source when done.

I have a video and I want to run SD on each frame of that video.

Drop the image back into ComfyUI to load it, then change the seed to what was in the EXIF data from the one you liked.

I don't think the generation info in ComfyUI gets saved with the video files.

Supports batch processing of single or multiple images.

In ComfyUI using Juggernaut XL, it would usually take 30 seconds to a minute to run a batch of 4 images. It has now taken upwards of 10 minutes to do seemingly the same run.

I made a tool for batch generating voices for NPCs using the ElevenLabs AI voice generator.

Learn about the RepeatImageBatch node in ComfyUI, which is designed for replicating a given image a specified number of times, creating a batch of identical images.

I just tested with a batch_size of 2, and it only displayed the first item in the sampler preview, but it did process both items in the batch.

In the process of saving the file, I use 'Image Save' as shown in the picture, but it gets renamed to 'prefix_xxxx' instead. Currently they all save into a single folder.

A few other notes: please bring back the batch processing option where we can select a folder. And please make uploading optional, because uploading tens of thousands of images breaks it and takes up additional space.

I have been running sequences through AnimateDiff that are up to 1000 frames long, and I haven't found an alternate way of automatically processing that many frames without this particular node.

Load Batch Images: increment through images in a folder, or fetch a single image out of a batch. It will reset its place if the path or pattern is changed. The pattern input is a glob that allows you to do things like **/* to get all files in the directory and subdirectories, or things like *.jpg to select only JPEG images in the directory specified.
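If you're unsure what a given pattern will match, the same glob syntax can be sanity-checked in plain Python before pointing the node at the folder (the paths here are placeholders):

```python
import glob

print(glob.glob("input/*.jpg"))                   # only JPEGs directly inside "input"
print(glob.glob("input/**/*", recursive=True))    # everything, including subdirectories
```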
Anyone have a decent tutorial or workflow for batch img2img for ComfyUI? I'm looking at doing more video-render-type deals, but ComfyUI tutorials are all about SDXL.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in the Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

That led me to the problem, which is that I git cloned the repo into the ComfyUI root folder `ComfyUI_windows_portable`, not the `\ComfyUI` folder.

I am trying to create a workflow which currently creates a batch of images which each have a different prompt.

I would like to save them to a new folder for each generation so I can manage the data better.

I am new to ComfyUI and I am already in love with it.

If I want to get all 4 color options output at once, what's a more efficient method than setting up 4 separate KSamplers with their own color options?

My team is working on building a pipeline for processing images.

Next, what I need to do is use the LoadImage node to load the pic itself into the workflow and turn the switch to feed that image through the VAE. Run the batches: for that, click on 'extra options' and set the batch count to the same number as the number of frames of the video.

Making it reset can be done by changing the batch name or sending 1 via the boolean input while queued up.

I've been doing it using the img2img -> batch tab.

But every 20 frames, the next image is not completely linked to the others, as if the seed changed (but I set it to "fixed"), so you can see it's not completely smooth.

So starting with a batch of 1000 frames does not work: memory shortage.

At the same time, it's gonna be a tall order, but you can batch upscale if you want.

Also, a prerequisite for processing batches simultaneously is that the images must have the exact same size.
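A concrete illustration of that same-size requirement, assuming the usual [batch, height, width, channels] layout ComfyUI uses for image tensors:

```python
import torch

frames = [torch.rand(1, 512, 512, 3) for _ in range(4)]   # four 512x512 "images"
batch = torch.cat(frames, dim=0)                          # fine: every frame matches
print(batch.shape)                                        # torch.Size([4, 512, 512, 3])

# Mixing sizes is exactly what breaks batching:
# torch.cat(frames + [torch.rand(1, 768, 512, 3)], dim=0)  -> RuntimeError (sizes must match)
```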
You can check my AP Workflow 5.0, which includes an I2I function with a dedicated node for batch image loading: https://www.reddit.com/r/comfyui/comments/17ehsj7/ap_workflow_50_for_comfyui_now_with_face_swapper/

Hi everyone, I'm new to ComfyUI, transitioning from A1111, and exploring its automation capabilities for image generation.

Trying to move away from Auto1111. I totally get what you're after, and it's a bit tricky to explain to others.

Hi reddit! I am fairly new to ComfyUI and was thinking if anyone could help me out. I want to generate batches of images (like 4 or 8) and then select only specific latents/images of the batch (one or more images) to be used in the rest of the workflow for further processing like upscaling/FaceDetailer.

Unfortunately your examples didn't work.

I've got everything working with non-custom nodes (it's awesome!) and am getting some issues with custom nodes; mostly I think they are my fault, with my kind of janky environment.

In regards to batch size, I don't know if you mean latent batch or actual batch.

How can I process them sequentially, one after another? How to iterate over files in a folder? : comfyui (reddit.com)

I essentially want to automatically upscale all images in a folder with a specific upscaler model, then do img2img with low denoise to remove the upscaling artefacts.

I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simply mute or disconnect the Save Image node, etc., and then re-enable it once I make my selections.

I've been doing this manually in Forge, but it takes an ungodly amount of time to have to churn through it on a prompt-by-prompt, artist-by-artist basis.

Hi all, the title explains it: I am repeating the same action over and over on a number of input images, and I would like, instead of having to manually load each image and then press "queue prompt", to be able to select a folder and have Comfy process all input images in that folder.

I am completely new to ComfyUI. I wanted to load a 3-minute video and process it 20 frames at a time using the VHS suite with the batch manager set to 20 frames (I have only 8 GB of VRAM).

Could TemporalNet be used to maintain such color coherence? (Edit: added current workflow in a comment.) Thanks for any help.

Using this workflow, I txt2img a batch of images, pick out the best one, and drag-and-drop it back into ComfyUI.
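That drag-and-drop round trip works because ComfyUI embeds its prompt and workflow JSON in the PNG text chunks, which is also where the seed lives. A small Pillow sketch for reading it back out (the file name is just an example, and some samplers store the value as noise_seed instead of seed):

```python
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")        # example output file name
prompt = json.loads(img.info["prompt"])       # API-format graph saved by ComfyUI

for node_id, node in prompt.items():
    inputs = node.get("inputs", {})
    for key in ("seed", "noise_seed"):
        if key in inputs:
            print(node_id, node.get("class_type"), key, inputs[key])
```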
I didn't know if that was intentional or not, but it has led me to some fairly annoying misclicks.

As the title says, say I want to batch process 8x of prompt A at 1024x1024, 12x of prompt B at 640x1024, and 6x of prompt C at 1280x720, then let it run in the background. Btw, what I mean by 8x is 8 different generations.

Any way to batch process a bunch of images and have the program run back to back autonomously? Currently I have been processing images one at a time, using a ton of steps in Euler A (40 steps), so I'm taking about 5 minutes to process since I'm using some extra LoRA motion nodes as well.

Open the .bat file with Notepad, make your changes, then save it.

Having a batch size larger than 1 indicates you would like "X" number of pictures that are all different. They work exactly the same as the corresponding terms in A1111/SD.Next.

Then the images from each step will appear in the KSampler node.

However, for batch images, the detected regions, quantity, and sizes vary for each image.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

Luckily I found the simplest solution: just link the Load Checkpoint node to Batch Prompt Schedule (FizzNodes), then directly to the KSampler, without any other nodes in between.

I also had the same issue with blending images using Batch Prompt Schedule.

This works, but I want (memory issues!) to save one image at a time. When I start with a batch of 10, ComfyUI does its thing and then saves 10 upscaled images.

A new FreeU v2 node to test the updated implementation of the Free Lunch technique. A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, Upscaler, or Face Swapper functions.

I'm trying to build a batch processing pipeline which transforms CSVs into some common JSON schema.

But your requirement for number 2 was: a toggle switch where you get to control branching based on given conditions.

They should allow you to load a folder and put all images through the same process.

Basically I want to choose a folder and process all the images inside it. Rather, the idea is to take an existing batch of images and stylistically re-align them, while preserving what the individual image compositions show.

Currently, I use "Load Image Batch From Dir (Inspire)" to work on SUPIR.

The extra options in the control panel, from what I can see, have a batch count (no batch size) option; the only thing the option does, I think, is queue up a number of batches of size 1, one after the other (basically the same thing as if I clicked "queue prompt" the same number of times).

It's quite simple. If you can afford to put all the images in ComfyUI's "input" folder, a simple "native" way to do it is: as mentioned, put all the images you want to work on in ComfyUI's "input" folder, add the standard "Load Image" node, right click it, and choose "Convert Widget to Input" -> "Convert Image to Input".

Hey all, I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy.

One alternate method is to use the API and a Python script.
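For a mixed queue like the one above (8x of prompt A at 1024x1024, 12x of B at 640x1024, 6x of C at 1280x720), such a script can simply patch an exported API-format graph per job and queue each one. A sketch in which the node ids "6" and "5" are placeholders for your own CLIPTextEncode and Empty Latent Image nodes:

```python
import copy
import json
import urllib.request

API = "http://127.0.0.1:8188/prompt"
with open("workflow_api.json", "r", encoding="utf-8") as f:   # exported via "Save (API Format)"
    template = json.load(f)

jobs = [  # (prompt text, width, height, how many images)
    ("prompt A", 1024, 1024, 8),
    ("prompt B", 640, 1024, 12),
    ("prompt C", 1280, 720, 6),
]

for text, width, height, count in jobs:
    wf = copy.deepcopy(template)                  # fresh copy of the graph per job
    wf["6"]["inputs"]["text"] = text              # placeholder id for CLIPTextEncode
    wf["5"]["inputs"].update(width=width, height=height, batch_size=count)  # Empty Latent Image
    req = urllib.request.Request(API, data=json.dumps({"prompt": wf}).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)                   # each job becomes one item in the queue
```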
Here's a super rough diagram of the process: you start with a sampler, then go through a Latent From Batch node, which breaks your images out (you can just copy from the batch image preview if you want, but individual image previews might be easier for you). E.g.: batch index 2, length 2 would send images number 3 and 4 to the preview image node in this example.

A simple command-line interface allows you to quickly queue up hundreds/thousands of prompts from a plain text file and send them to ComfyUI via the API (the Flux.1 dev workflow is included as an example; any arbitrary ComfyUI workflow can be adapted by creating a corresponding .map file that defines where the prompt and other values should go).

I have a text file full of prompts. I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images.

Putting the execution times here for a quick comparison: n_repeat_batch_size:1 -> prompt executed in 53.25 seconds; n_repeat_batch_size:2 -> prompt executed in 215.88 seconds.

I tried the load methods from WAS-node-suite-comfyui and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once.

Thanks for the answer! I almost got it working, but I have questions.

That seems to cover lots of poor UI dev.

In the Inspire pack, there is a LoadImageListFromDir //Inspire node that loads images as a list. This requires all of the images to be the same size, and processing batches can hurt VRAM, and not all nodes support batches.

Workflow for batch processing multiple prompts? Hi, I'm trying to make model cards, which break down how different models/merges/finetunes handle different artist styles across different subjects.

I just made a few tests, and the ComfyUI batch process doesn't assign the right seed to each image, but only assigns one seed for all images.

The VAE process takes time and only degrades the quality of the resultant image being fed into your image processing.

Hello, I am running some batch processing and I have set up a Save Image node for my ControlNet outputs.

Is it possible to iterate a ComfyUI workflow over a batch of video clips within a folder for a vid2vid workflow? I'm trying to remaster a CGI video with realism, but it's got about 90 clips I need to process, and I don't want to have to hold the whole video in memory to process all 7 minutes.

But I'm having great results with batch counts alone; batch size would just speed things up. The correct way to define the batch size is in the Empty Latent Image node, where you set the resolution.

Having been generating very large batches for character training (per this tutorial, which worked really well for me the first time), it occurs to me that the lack of interactivity of the process might make it an ideal use case for ComfyUI, and the lower overhead of ComfyUI might make it a bit quicker.