Welcome to the unofficial ComfyUI subreddit. Belittling other people's efforts will get you banned.

Other things I tried: I used Firefox with hardware acceleration disabled in settings on previous attempts. I also tried --opt-channelslast --force-enable-xformers, but in this last run I got 28 it/s without them for some reason.

There should be a file called webui-user.bat. These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case.

I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files. Add --xformers to the webui-user.bat command arguments.

python.exe -s -m pip install -r requirements.txt

Update ComfyUI and all your custom nodes, and make sure you are using the correct models. This has been driving me crazy trying to figure it out.

You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc. I had installed the ComfyUI extension in Automatic1111 and was running it within Automatic1111.

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

Hopefully, some of the most important extensions, such as ADetailer, will be ported to ComfyUI.

I went to Updates as suggested below and ran the ComfyUI and Python Dependencies batch files, but that didn't work for me.

Also, I don't know when it was changed, but ComfyUI is not a conda package environment anymore; it depends on a python_embeded package, and generating a venv from it results in no tkinter. I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python.
After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality). I'm running a GTX 1660 Ti with 6 GB of VRAM.

I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me. For some reason, when I launch Comfy it now sets my VRAM to normal_vram.

"The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

Open your ComfyUI Manager. I had the same issue.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

For example, this is mine: Wondering if there are ways to speed up Comfy generations? I use it on an RTX 4090, mostly with SD1.5 models, and my comfy.bat just contains:

Aug 2, 2024 · You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32 GB of RAM.

Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

Using ComfyUI was a better experience; the images took around 1:50 to 2:25 min at 1024x1024 / 1024x768, all with the refiner. And then connect the same primitive node to 5 other nodes to change them in one place instead of in each node.

55 it/s for SD1.5 while creating an 896x1152 image via the Euler-A sampler. I tested with different SDXL models and tested without the LoRA, but the result is always the same.

On the front page it says "Use --preview-method auto to enable previews."

You can prefix the start command with CUDA_VISIBLE_DEVICES=0 to force ComfyUI to use that specific card.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.
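The GPU-pinning trick mentioned above can be sketched as a single launch line (a sketch, assuming a Linux-style shell and a main.py in the current directory; adjust the path to your own install):

```shell
# Hide all GPUs except device 0 from CUDA so ComfyUI is forced onto that card.
# The path to main.py is illustrative; run this from your ComfyUI folder.
CUDA_VISIBLE_DEVICES=0 python main.py
```

CUDA_VISIBLE_DEVICES is read by the CUDA runtime itself, so the same prefix works for any CUDA application, not just ComfyUI.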
Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. Finally, I gave up on ComfyUI nodes and wanted my extensions back in A1111. And the new interface is also an improvement, as it's cleaner and tighter.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

ComfyUI is much better suited for studio use than other GUIs available now. A1111 is probably easier to start with: everything is siloed, easy to get results. And above all, BE NICE.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder. These include Stable Diffusion and other platforms like Flux, AuraFlow, PixArt, etc. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json

The biggest tip for Comfy: you can turn most node settings into an input by right-clicking and choosing "convert to input", then connecting a primitive node to that input.

So, for those of us behind the curve, I have a question about arguments.

python main.py --normalvram

While I primarily utilize PyTorch cross attention (SDP), I also tested xformers to no avail.

Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe" -s -m pip install -r requirements.txt

I want to set up a new PC build; is this config sufficient for ComfyUI and A1111? CPU: AMD Ryzen™ 7 5700X, 8 cores / 16 threads @ 3.8 GHz.
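The dependency-install step for a custom node like FizzNodes can be sketched as a short batch session (the drive letter and folder layout are the poster's, not universal; adjust them to your own portable install):

```shell
REM Change into the custom node's folder, then install its requirements with
REM the embedded Python that ships with the portable ComfyUI build.
REM (-s keeps the interpreter from picking up user site-packages.)
cd /d D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes
D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt
```

The key point is to use the python_embeded interpreter, not a system Python, so the packages land where ComfyUI will actually look for them.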
0.2 denoise to fix the blur and soft details; you can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise.

Learning the intricacies of Web UI launching arguments, addressing xformers errors, delving into CUDA, and more became my daily routine.

That helped the speed a bit. However, I kept getting a black image. It said to follow the instructions for manually installing for Windows.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file.

That did not work. I have ComfyUI installed on my computer with a lot of custom nodes, LoRAs, and ControlNets; I would like to have Automatic1111 also installed to be able to use it.

ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets.

I am trying out using SDXL in ComfyUI. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

In the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1

Command line arguments can be put in the bat files used to run ComfyUI like this, separated by a space after each command. Jul 28, 2023 · If you cd into your Comfy directory, just run: python main.py -h -- and you should see the arguments that can be passed on the command line, i.e.: ~/ComfyUI$ python3 main.py -h

The graphic style: I am using ComfyUI with its default settings. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.
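Putting arguments in run_nvidia_gpu.bat, as described above, can look like this (a sketch of the portable build's launcher; the flags shown are examples from this thread, not a recommendation for every GPU):

```shell
REM run_nvidia_gpu.bat - append arguments after main.py, separated by spaces.
REM Run "python main.py -h" from the ComfyUI folder to see the full flag list.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```

Every time you launch via this .bat file, the same arguments are applied, so it is the natural place to keep a working combination once you find one.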
Unfortunately, I don't have much space left on my computer, so I am wondering if I could install a version of Automatic1111 that uses the LoRAs and ControlNets from ComfyUI.

Installation: add your command arguments to webui-user.bat, add your model, then run webui-user.bat.

I noticed that in the Terminal window you could launch ComfyUI directly in a browser with a URL link. But with ComfyUI this doesn't seem to work! Thanks!

There are some important arguments about how artists could be exploited, but that's also not a new problem.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

This time about arpeggiators: how to design your own arp on the Daisy Seed using Arduino and C++ classes.

I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI.

Hi r/comfyui, we worked with Dr. Lt. Data to create a command line tool to improve the ergonomics of using ComfyUI.

Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

I learned about MeshGraphormer from this YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL. Comparisons and discussions across different platforms are encouraged.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support give them serious potential. I wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various options.
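On the Automatic1111 side, the "add --xformers to webui-user.bat" advice from earlier in the thread means editing the COMMANDLINE_ARGS line (a sketch of the stock webui-user.bat; which flags to use depends on your GPU and VRAM):

```shell
@echo off
REM webui-user.bat (Automatic1111) - arguments go in COMMANDLINE_ARGS.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --medvram
call webui.bat
```

Right-click the file, edit it with Notepad, save, and the flags will be applied on every launch.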
I don't find ComfyUI faster; I can make an SDXL image in Automatic 1111 in 4.2 seconds with TensorRT.

I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder.

Are there additional arguments to use?

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins.

Now it can also save the animations in other formats apart from GIF.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM). And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111-WebUI.

You can encode then decode back to a normal KSampler with SD1.5 with LCM, with 4 steps and 0.2 denoise.

I know SDXL and ComfyUI are the hotness right now, but I'm still kind of in the kiddie pool in terms of SD.

python C:\Users\***\ComfyUI\main.py

I'm just getting comfortable with Automatic1111, using different models, VAEs, upscalers, etc.

Some main features are: automatically install ComfyUI dependencies.

After having issues from the last update, I realized my args are just thrown together from random thread suggestions and troubleshooting, but I really have no full understanding of what all the possible args are and what they do.

Image Chooser is described on Reddit here, but the GitHub link is gone (?!). GitHub - gokayfem/ComfyUI-Texture-Simple: Visualize your textures inside ComfyUI.

Seems very hit and miss; most of what I'm getting looks like 2D camera pans. TLDR, workflow: link.
And ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py with the following code: nodes.py

Supports: basic txt2img, basic img2img, inpainting (with auto-generated transparency masks).

I think, for me at least for now, with my current laptop, using ComfyUI is the way to go. I ran some tests this morning.

It seems that the path always looks to the root of ComfyUI, not relative to the custom_node folder "comfyui-popup_preview".

I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (which is what I did before switching to ComfyUI).

ComfyUI is meant for people who: like node-based editors (and are rigorous enough not to get lost in their own architecture).

Source image. For CR Seamless Checker.

Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram. I have 2 instances, 1 for each graphics card.

Anyone that has made art "for exposure", had their work ripped off, or received a pittance while the gallery/auction house/art collector enjoys the lion's share of the profits, knows this.

I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the requirements to launch ComfyUI.

got prompt…

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones.

r/synthdiy • We're going live with a workshop in an hour.

Jul 28, 2023 · The first was installed using the ComfyUI_windows_portable install, and it runs through Automatic1111. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

Please keep posted images SFW.
This is due to the VAE; maybe there is an obvious solution, but I don't know it.

On Linux with the latest ComfyUI I am getting 3.86 s/it on a 4070 with the 25-frame model, and 2.75 s/it with the 14-frame model.

I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

Wanted to share my approach to generate multiple hand-fix options and then choose the best. I have read the section of the GitHub page on installing ComfyUI.

Here are some examples I generated using ComfyUI + SDXL 1.0 with refiner.

Replace ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py with the following code: load_images_nodes.py (By the way, you can and should, if you understand Python, do a git diff inside ComfyUI-VideoHelperSuite to review what's changed.)

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. ComfyUI is also trivial to extend with custom nodes.

Right click and edit the file, adding --xformers to the set COMMANDLINE_ARGS.

Then, a miraculous moment unfolded when I installed Stable Diffusion Forge. I don't know why this changed; nothing changed on my PC.

or: python3 main.py

Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --listen --use-split-cross-attention

And with ComfyUI my command line arguments are: "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding, which ensures no out-of-memory at that step.

If you go to the folder where you installed the web GUI, there should be a file called webui-user.bat.

A lot of people are just discovering this technology and want to show off what they created.

Are there any guides that explain all the possible COMMANDLINE_ARGS that could be set in webui-user.bat and what they do?

This is only possible if the node's author set the nickname parameter in their info.
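The AMD setup quoted above boils down to a launch line like this (a sketch; the --directml flag needs the torch-directml package installed and is mainly relevant on Windows AMD GPUs):

```shell
# DirectML launch for AMD cards, with split cross-attention and low-VRAM mode,
# using the flags quoted in the post above.
python main.py --directml --use-split-cross-attention --lowvram
```

Combined with tiled VAE decoding in the workflow itself, this is what kept the poster's card from running out of memory at the decode step.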
This is a plugin that allows users to run their favorite features from ComfyUI while being able to work on a canvas.

--show-completion: Show completion for the current shell, to copy it or customize the installation.

Every time you run the .bat file, it will load the arguments.

ComfyUI was written with experimentation in mind, so it's easier to do different things in it.

RAM: 32GB DDR4 @ 3200MHz.

Kosinkadink, developer of ComfyUI-AnimateDiff-Evolved, has updated the custom node with new functionality in the AnimateDiff Loader Advanced node that can reach a higher number of frames.

You have to run two instances of it and use the --port argument to set a different port.

It is actually written on the FizzNodes GitHub here.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Open the .bat file with Notepad, make your changes, then save it.

Although ComfyUI and A1111 ultimately do the same thing, they are not targeting the same audience. Any idea why the quality is much better in Comfy? I like InvokeAI; it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

Activate "Nickname" on the "Badge" dropdown list.

I just moved my ComfyUI machine to my IoT VLAN 10. With this combo it now rarely gives out-of-memory errors (unless you try crazy things). Before, I couldn't even generate with SDXL on ComfyUI at all.

Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1

Only if you want it early. GitHub - Suzie1/ComfyUI_Comfyroll_CustomNodes: custom nodes for SDXL and SD1.5, including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes.
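Running two instances, one per graphics card, combines the --port argument with GPU selection via CUDA_VISIBLE_DEVICES (a sketch for a Linux-style shell; on Windows you would instead make two .bat launchers, each with its own "set CUDA_VISIBLE_DEVICES=..." line):

```shell
# First instance pinned to GPU 0 on the default port, second pinned to
# GPU 1 on a different port so the two web UIs don't collide.
CUDA_VISIBLE_DEVICES=0 python main.py --port 8188 &
CUDA_VISIBLE_DEVICES=1 python main.py --port 8189 &
```

Each instance then only sees its own card, and you reach them in the browser on their respective ports.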
Launch and run workflows from the command line. Install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI).

I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it. (The same image takes 5.6 seconds in ComfyUI.)

I think a function must always have "self" as its first argument. So that is how I was running ComfyUI.

It has --listen and --port, but since the move, Auto1111 works and Kohya works, but Comfy has been unreachable.

Hi, amazing ComfyUI community.

Before, it automatically set it to high_vram. Most of them already are if you are using the DEV branch, by the way.

In ComfyUI Manager, activate Badge: Nickname. If the author set their package nickname, you will see it on the top right of each node.

Did Update All in ComfyUI Manager, but when it restarted I just got a blank screen and nothing loaded, not even the Manager.

But these arguments did not work for me; --xformers gave me a minor bump in performance (8 s/it vs 11 s/it), but it was still taking about 10 minutes per image.

Aug 2, 2024 · All posts must be open-source/local AI image generation related. Posts should be related to open-source and/or local AI image generation only.

SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.
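For reaching ComfyUI from another machine (the unreachable-after-the-VLAN-move problem above), --listen and --port are the relevant flags (a sketch; you may also need to allow the chosen port through the machine's firewall and any inter-VLAN rules):

```shell
# Bind to all interfaces instead of only 127.0.0.1 so other hosts on the
# network can reach the UI, and pick an explicit port.
python main.py --listen --port 8188
```

Without --listen, ComfyUI only answers on localhost, which looks exactly like "unreachable" from every other machine on the network.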
So I spent ages online looking for how/where to enter this command.