Installation
M00D-node Memory Issues

M00D-node makes a guess at which models your GPU can handle loading at once, and tries to load and unload models as needed.

If the guess turns out to be wrong, you can set a custom Model Spec using a model_spec.json file. An example Model Spec named example_model_spec.json is included in the m00d-node folder. Rename this file to model_spec.json and edit it with the desired settings.
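For example, one way to create the file from the bundled example is to copy it, which keeps example_model_spec.json around as a reference (this assumes you are in the m00d-node folder; adjust the path to your install if needed):

    # Windows PowerShell
    Copy-Item .\example_model_spec.json .\model_spec.json

    # macOS / Linux Terminal
    cp example_model_spec.json model_spec.json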

model_spec.json

            
    {
        "sdxl": {
            "sdxl-refine": "load"
        },
        "sd": {
            "txt2img/img2img": "load",
            "inpaint": "load",
            "upscale": "load"
        }
    }

The above Model Spec will load the full SDXL model and the Refiner model when in SDXL mode; when in SD1.5 mode it will try to load the txt2img, inpainting and upscaling models all at once. This is a suitable spec for cards like the 3090 with 24GB of VRAM; for smaller cards you'll need to change these settings.

The settings are as follows:

    load - keep the model loaded in VRAM at all times.
    as_needed - load and unload the model as it is needed.
    disabled - never load the model.

Examples

8GB - 12GB VRAM model_spec.json

            
    {
        "sdxl": {
            "sdxl-refine": "disabled"
        },
        "sd": {
            "txt2img/img2img": "load",
            "inpaint": "as_needed",
            "upscale": "as_needed"
        }
    }

This is a Model Spec suitable for 8GB-12GB cards: SDXL is too big, so it won't be loadable at all; txt2img will remain loaded, but upscale and inpaint will be loaded in and out as needed.

Very low VRAM model_spec.json

            
    {
        "sdxl": {
            "sdxl-refine": "disabled"
        },
        "sd": {
            "txt2img/img2img": "as_needed",
            "inpaint": "disabled",
            "upscale": "as_needed"
        }
    }

Swapping txt2img to as_needed should make the spec usable on builds with low VRAM, such as 6GB; inpainting is likely too big for those cards, so we disable that too.

SDXL without Refiner model_spec.json

            
    {
        "sdxl": {
            "sdxl": "load"
        },
        "sd": {
            "txt2img/img2img": "load",
            "inpaint": "load",
            "upscale": "load"
        }
    }

SDXL ships with an extra refiner model that can result in better image quality. If you want to run SDXL without the refiner, which has lower VRAM requirements, you can swap out "sdxl-refine" for just "sdxl".

M00D-node remote nodes and offline mode

m00d-node can act as a render node for other machines on the network; multiple nodes can be connected to the same M00D Ritual instance.

To connect to a different instance, start m00d-node with an IP or .local address,
e.g.: m00d-node.exe 192.168.1.5
in PowerShell on Windows or Terminal on Mac.
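For instance, a couple of invocation sketches (the .local hostname below is illustrative; substitute the actual address of the machine running M00D Ritual):

    # connect to a M00D Ritual instance by IP address
    m00d-node.exe 192.168.1.5

    # connect by .local address (hostname is illustrative)
    m00d-node.exe studio-pc.local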

m00d-node downloads its models from HuggingFace, which requires a network connection. Once the node has downloaded and cached all of its models, you can run entirely offline by adding offline after the address,
e.g.: m00d-node.exe localhost offline
The command line arguments functionality will be improved upon in future versions.

Model Downloading and Cache
Models are downloaded from HuggingFace as needed, so the first time you use a model you will need to wait for it to download. Models are stored in the ./cache/ folder, so if you are having issues with a download, try deleting the struggling model from that folder and downloading it again.
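For example, a sketch of clearing a single model from the cache so it re-downloads (the model folder name is a placeholder; check ./cache/ for the actual folder name):

    # Windows PowerShell
    Remove-Item -Recurse -Force .\cache\<model-folder>

    # macOS / Linux Terminal
    rm -rf ./cache/<model-folder>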
Need more help?
Ask in the Discord.




