
AI Video Summary: The video demonstrates a workflow that combines an IP adapter with a segmentation model to apply a Hawaiian shirt style to a selected part of an image. It emphasizes fine-tuned adjustments and the ability to batch process, gives a detailed step-by-step guide to setting up and running the required custom nodes and models, and notes the process's VRAM requirements and its limitations in style transfer consistency.
ComfyUI Workflow Download Link: Changing Styles with IPAdapter and Grounding Dino
When we combine an IP adapter with a segmentation model, we can make fine-tuned adjustments to specific areas of an image. These adjustments can look significantly better than those achieved through traditional inpainting coupled with a ControlNet method.
The workflow featured in the video is broken into three components:
We're using four different nodes in this area: Load Image, IPAdapter Unified Loader, Load CLIP Vision, and IPAdapter Advanced.
The overall goal here is to capture the style of the reference image and pass that information along with the model, so the output accurately reflects what the reference image is about and applies that style accordingly.
While the example in the video uses a Hawaiian shirt, the input image could be any image you want to extract a style from: other shirt or fabric styles, paintings, monograms, etc.
Power-up: You can learn more about IPAdapters in this in-depth guide provided by HuggingFace.
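The summary above mentions batch processing; one way to do that outside the UI is ComfyUI's built-in HTTP API. Here's a minimal sketch, assuming ComfyUI is running locally on its default port 8188 and that the workflow was exported with "Save (API Format)" (the JSON filename, node id, and image names are hypothetical placeholders):

```python
import json
import urllib.request

# Minimal sketch: batch-queue an exported (API-format) workflow on a
# local ComfyUI server. The filename and node id are placeholders.
with open("ipadapter_grounding_dino.json") as f:
    workflow = json.load(f)

# Images referenced by Load Image must already be in ComfyUI's input folder.
for filename in ["photo_01.png", "photo_02.png", "photo_03.png"]:
    # Point the Load Image node (hypothetical id "1") at the next input.
    workflow["1"]["inputs"]["image"] = filename

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # queue confirmation
```

Each POST to /prompt returns a prompt_id that you can use to poll the /history endpoint for finished outputs.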
Borrowing from the popular SD WebUI Segment Anything extension, the Segment Anything custom nodes for ComfyUI provided by storyicon allow you to provide a textual input and, if a matching object is found, segment it from the image. Both extensions are based on Grounding Dino.
The video demonstrates how to connect a Load Image node along with the SAMModelLoader, GroundingDinoModelLoader, and GroundingDinoSAMSegment nodes.
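For orientation, here is roughly what that stage looks like in ComfyUI's API (JSON) format, written as a Python dict. The node ids are arbitrary, and the class-type strings, input names, and model names are assumptions based on the storyicon pack rather than verified source, so compare against your own export:

```python
# Rough sketch of the segmentation stage in ComfyUI's API format.
# Class types, input names, and model names are assumptions; verify
# against a workflow saved via "Save (API Format)".
segment_stage = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "portrait.png"}},  # hypothetical file
    "2": {"class_type": "SAMModelLoader (segment anything)",
          "inputs": {"model_name": "sam_vit_h (2.56GB)"}},
    "3": {"class_type": "GroundingDinoModelLoader (segment anything)",
          "inputs": {"model_name": "GroundingDINO_SwinT_OGC (694MB)"}},
    "4": {"class_type": "GroundingDinoSAMSegment (segment anything)",
          "inputs": {"sam_model": ["2", 0],
                     "grounding_dino_model": ["3", 0],
                     "image": ["1", 0],
                     "prompt": "shirt",     # the textual input
                     "threshold": 0.30}},   # discussed below
}
```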
One important variable that wasn't covered in the video is the threshold value in the GroundingDinoSAMSegment node. Essentially, a lower value may select a larger area, whereas a higher value is more selective. The catch is that too high a value may result in nothing being selected at all.
Generally, the default value of 0.30 works well for most cases. Use the Convert Mask to Image node if you want to review the segment.
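To make the threshold's behavior concrete, here is an illustrative sketch (not the node's actual source) of how a detection threshold filters Grounding Dino's candidate matches before SAM generates masks; the labels and confidence scores are made up:

```python
# Illustrative only: how a detection threshold filters candidate boxes.
# The (label, confidence) pairs are made-up Grounding Dino detections.
detections = [("shirt", 0.62), ("shirt", 0.34), ("collar", 0.21)]

def above_threshold(dets, threshold):
    # Only detections scoring at or above the threshold are kept and
    # handed to SAM for mask generation; the rest are discarded.
    return [d for d in dets if d[1] >= threshold]

print(above_threshold(detections, 0.30))  # looser: both "shirt" boxes kept
print(above_threshold(detections, 0.70))  # too strict: nothing selected
```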
Have any questions on this workflow? Drop them below.
Thanks for the video and tutorial. I love the concept of this.
Could you help with my issue, please? The output is completely ignoring the clothes.