Image size in ComfyUI: tips and answers from Reddit


Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created, and above all, BE NICE: belittling their efforts will get you banned. Also, if this is new and exciting to you, feel free to post.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything). These will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL. In the process, we also discuss the SDXL architecture and how it is supposed to work.

The goal is to take an input image and a float between 0 and 1; the float determines how different the output image should be. So 0.2 would give a kinda-sorta similar image, 1.0 would be a totally new image, and 0.01 would be a very, very similar image.

You can encode, then decode back to a normal KSampler with a 1.5 model with LCM, 4 steps and 0.2 denoise to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

Automatic1111 would let you pick the final image size no matter what and give you options for crop, just resize, etc. I can obviously pick a size when doing txt2img, but when prompting off an existing image, my final image will always just be the same size as the inspiration image. How do I do the same with ComfyUI?

I have a workflow I use fairly often where I convert or upscale images using ControlNet. As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image. You can just plug the width and height from Get Image Size directly into the nodes where you need them.

Stable Diffusion 1.5 is trained on 512 x 512 images. The size of the latent image is proportional to the actual image in pixel space, so if you want to change the size of the image, you change the size of the latent image: you set the height and the width to change the image size in pixel space. Note that you can't enter a latent image size larger than 8192.
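A rough sketch of that pixel-to-latent relationship, assuming the usual SD-family VAE stride of 8 and 4 latent channels. This mirrors what ComfyUI's Empty Latent Image node does, but the function name and code here are illustrative, not ComfyUI's actual implementation:

```python
import torch

def empty_latent(width: int, height: int, batch_size: int = 1) -> torch.Tensor:
    """An empty SD-style latent: 1/8 of the pixel size, with 4 channels."""
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return torch.zeros([batch_size, 4, height // 8, width // 8])

latent = empty_latent(512, 512)  # SD1.5's native training size
print(latent.shape)              # torch.Size([1, 4, 64, 64])
```

The multiple-of-8 check is why odd image sizes cause trouble: the VAE downsamples by that factor, so anything else gets truncated somewhere in the pipeline.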
Probably not what you want, but the preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow. So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after it in your workflow.

I have a workflow that is basically two user branches. The first branch has Txt to Image and then Image to SDVID with the new SD video models that came out.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops. The denoise on the video generation KSampler is at 0.8 so that some of the structure of the original image is retained. This way it's an end-to-end txt-to-animation pipeline.

So I've used OpenPose to get the pose right and a prompt to create the image, which I'm happy with as a version 1. The problem is when I need to make alterations but keep the image the same: I've tried inpainting to change eye colour or add a bit of hair, etc., but the image quality goes to shit and the inpainting isn't really doing what I want. With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image; you won't get obvious seams or strange lines.

A transparent PNG in the original size with only the newly inpainted part will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software. Save the new image. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-)

Load your image to be inpainted into the mask node, then right click on it and go to edit mask; this will open the live painting thing you are looking for. ComfyShop has been introduced to the ComfyI2I family: phase 1 is to establish the basic painting features for ComfyUI, so you can enjoy a comfortable and intuitive painting app. To open ComfyShop, simply right click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

Tip: for speed, you can load images using the (clipspace) method by right clicking on images you generate. This is basically copy-paste and doesn't save the files to disk.

Howdy! I'm not too advanced with ComfyUI for SD generation yet, but I've made a lot of progress thanks to your help. I used the same checkpoint, sampling method, prompt, and steps, but I got completely different images from webui and ComfyUI; this is generated by webui, and this is generated by ComfyUI. They have a different style and color, and I don't know why. I have tried many times; please help me. The answer: they are completely different, and you cannot expect an even similar result unless you use the exact same seed, image dimensions, sampler, etc. If you are using token merging or the --opt-sdp-attention flag in A1111, or ancestral samplers anywhere in your workflow, your results are non-deterministic, so they will be nearly impossible to reproduce.

Oh, because in SD I noticed the aspect ratio of the latent image will influence the result of the output: if you wanted a tall, standing person but had the aspect ratio of a standard desktop (1920x1080, or 1.7777), the person often comes out kneeling. So I tested with aspect ratios < 1 (more vertical) and it definitely changed the output.

There are a couple of ways to resize your photos or images so that they will work in ComfyUI; one guide calls it the "1024px rule". The Image Resize node adjusts image dimensions to specific requirements, maintaining quality through resampling methods, and Scale Down To Size (ImageScaleDownToSize) resizes images while maintaining the aspect ratio.
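A minimal sketch of such a resize in plain Python with Pillow. The 1024 long-side default and the multiple-of-8 snapping are assumptions based on what SD-family models expect, and the helper name is made up:

```python
from PIL import Image

def resize_for_sd(img: Image.Image, long_side: int = 1024) -> Image.Image:
    """Scale so the longest side hits `long_side`, snapping both
    dimensions to multiples of 8 (what SD-family models expect)."""
    scale = long_side / max(img.size)
    w = max(8, round(img.width * scale / 8) * 8)
    h = max(8, round(img.height * scale / 8) * 8)
    return img.resize((w, h), Image.LANCZOS)

resize_for_sd(Image.open("input.png")).save("input_resized.png")
```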
Then I have a nice result and I do composition (Image 2). In Image 3 I compare pre-compose with post-compose results: the image on the left (directly after generation) is blurry and has lost some tiny details; the image on the right (after the mask-compose node) retains the sharpness, but you can clearly see the bad composition line, with a sharp transition.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions. Very curious to hear what approaches folks would recommend! Thanks.

I want to upscale my image with a model and then select the final size of it. The only way I can think of is to use Upscale Image (using Model) with 4xUltraSharp, get my image to 4096, and then downscale with nearest-exact back to 1500. The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node: nearest-exact is a crude image upscaling algorithm that, combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels introduced by your initial upscale.

I am a big fan of both A1111 and ComfyUI; you don't need to switch to one or the other. I really like the extensions library and ecosystem that already exists around A1111, in particular stuff like OneButtonPrompt, which is great for inspiration on styles if I need a few ideas. Lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look. Yes, in general ComfyUI is great for creating custom pipelines and workflows, but not as much if you want full creative control in the construction of a specific image, start to finish. I am trying to fix that by creating a different UI, extending and using ComfyUI to define "brushes" while working on an infinite canvas, replicating the experience…

A bit of an obtuse take. In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head, and getting any generative system to actually replicate it, takes a considerable amount of skill and effort.

On how these models were trained: Image Size - instead of discarding a significant portion of the dataset below a certain resolution threshold, they decided to use smaller images. Cropping Parameters - during the training process some cropping happens, as not all aspect ratios are supported. However, image size (the height and width of the image) is fed into the model.

New users of civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view". The one shown in the "post view" is a "preview JPEG" (even though it looks as if it is full size), which does not have the metadata. If you just want to see the size of an image, you can open it in a separate browser tab and look up top to find the resolution.

I save only the best images with their respective data. Also, I sometimes put images from the same generation batch into different folders, for example Best, Good, etc. When there are only 3 images worth keeping in a log file that shows 100-200 generations, it's hard to quickly find the information I need.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Along with the normal image preview, the other methods are Latent Upscaled 2x and Hires fix 2x (two-pass image). It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

I am searching for a node that does the following: generate batches of images (like 4 or 8) and then select only specific latents/images of the batch (one or more) to be used in the rest of the workflow for further processing like upscaling or FaceDetailer. I use a batch picker, but I can't use that with efficiency nodes. Input your batched latent and VAE: batch index counts from 0 and is used to select a target in your batched images, and length defines the amount of images after the target to send ahead. E.g. batch index 2 with length 2 would send images number 3 and 4 to the preview image in this example.
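The index arithmetic described there is just a slice over the batch dimension; the behavior matches ComfyUI's Latent From Batch node, though the sketch below is illustrative rather than its actual implementation (it assumes a ComfyUI-style latent dict holding a "samples" tensor):

```python
import torch

def latent_from_batch(latent: dict, batch_index: int, length: int = 1) -> dict:
    """Send `length` images onward, starting at `batch_index` (0-based)."""
    samples = latent["samples"]  # shape [batch, 4, height//8, width//8]
    return {"samples": samples[batch_index:batch_index + length]}

batch = {"samples": torch.zeros([8, 4, 64, 64])}
picked = latent_from_batch(batch, batch_index=2, length=2)
print(picked["samples"].shape)  # torch.Size([2, 4, 64, 64]): images 3 and 4
```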
Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. You can then load or drag the following image into ComfyUI to get the workflow, since the image contains the workflow itself: https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png

Regarding the 8192 cap on latent size: I would like to know if that is due to some reason other than images that large taking a long time. If I were to make some type of custom node, or modify the core node to allow a larger latent image size, would that break the whole process, or is there some larger reason that 8192 is the hard limit?

It's solvable; I've been working on a workflow for this for like 2 weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting. It's a challenging problem to solve, so unless you really want to use this process, my advice would be to generate the subject smaller and then crop in and upscale instead.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

In this technique, noise images increase the amount of detail: the result is a significant increase in details due to the disorderly nature of the noise. The noise image can be monochrome, as lineart doesn't reference color saturation. The Tile model is normally used for image enlargement, but here it is repurposed for transferring color from the noise.

During my img2img experiments with 3072x3072 images, I noticed a quality drop using Hypertile with standard settings (tile size 256, swap size = 2, max depth = 0). Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

In the provided sample image from ComfyUI_Dave_CustomNode, the Empty Latent Image node features inputs that connect width and height from the MultiAreaConditioning node in a very elegant fashion.

There are two ways to generate multiple images: using "batch_size" as part of the latent creation (say, in ComfyUI's `Empty Latent Image` node), or simply running the prompt multiple times, either by smashing the "Queue Prompt" button repeatedly or by changing the "Batch count" in the "extra options" under the button. Here you can also set the batch size, which is how many images you generate in each run; this can pretty much be scaled to whatever batch size by repetition. So I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, `3`, `4`. But if you have created a 4-image batch and later drop the 3rd one into Comfy to generate with that image, you don't get the third image, you get the first.
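A sketch of why that happens, assuming (as ComfyUI's observed behavior suggests) that noise for the whole batch is drawn in order from a single seeded generator, so each image's noise depends on its position in the batch; the function name is mine:

```python
import torch

def batch_noise(seed: int, batch_size: int, h: int = 64, w: int = 64) -> torch.Tensor:
    """All noise for a batch comes, in order, from one seeded generator."""
    gen = torch.Generator().manual_seed(seed)
    return torch.randn([batch_size, 4, h, w], generator=gen)

four = batch_noise(seed=42, batch_size=4)
one = batch_noise(seed=42, batch_size=1)
print(torch.equal(four[0], one[0]))  # True: re-running alone reproduces image 1
print(torch.equal(four[2], one[0]))  # False: image 3 needs its batch position
```

Under this assumption there is no per-image seed like `1`, `2`, `3`, `4`; a single-image run with the same seed always reproduces the first image of the batch, which matches the comment above.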
From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion: "The training requirements of our approach consists of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours."

I have a ComfyUI workflow that produces great results. It's based on the wonderful example from Sytan, but I un-collapsed it and removed the upscaling to make it very simple to understand. Works great.

😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt with an image, generating a color gradient, or batch-loading images from a folder.

If you want bigger text in ComfyUI's multiline inputs, I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes:

/* Put custom styles here */
.comfy-multiline-input { font-size: 10px; }

Copy that into user.css and change the font-size to something higher than 10px, and you should see a difference.

Here's a simple script (also a custom node in ComfyUI, thanks to u/CapsAdmin) to calculate and automatically set the recommended initial latent size for SDXL image generation and its upscale factor, based on the desired final resolution output.
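That script isn't reproduced here, but the core calculation can be re-sketched under the usual SDXL assumptions (a base training area of about 1024x1024 and dimensions snapped to multiples of 64). This is an illustration, not u/CapsAdmin's actual code:

```python
import math

def sdxl_initial_size(final_w: int, final_h: int, base_area: float = 1024 * 1024):
    """Pick an SDXL-friendly starting resolution with the target aspect
    ratio, plus the upscale factor needed to reach the final size."""
    aspect = final_w / final_h
    h = math.sqrt(base_area / aspect)
    w = h * aspect
    w, h = round(w / 64) * 64, round(h / 64) * 64  # SDXL likes multiples of 64
    return w, h, final_w / w

print(sdxl_initial_size(2048, 1152))  # -> (1344, 768, ~1.52) for a 16:9 target
```

The idea is to generate at a size the model was trained for, then upscale by the returned factor, rather than asking SDXL to compose directly at the final resolution.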