How to Train a Custom Model (LoRA)

A full guide to training your own custom model in Layer: preparing images, choosing a base model, understanding training settings, and troubleshooting common issues.

Training a custom model in Layer, also known as LoRA training, teaches the AI to generate assets that match a specific visual style, whether that is a game art direction, a character design system, or a set of UI components. Once trained, your model is available across your entire workspace for anyone on your team to use.

This article covers everything from preparing your training images to understanding what to do when something goes wrong.

Before you start

The quality of your training images is the single biggest factor in how well your model performs. Before uploading anything, make sure your images meet the following criteria:

Image resolution: Resize images to 1024x1024 or similar before uploading. Images over 4K can cause training to stall or fail to auto-caption correctly.

Consistency: All images should share the same art style, framing, and format. Mixed styles or inconsistent quality will produce unpredictable results.

Quantity: There is no hard minimum or maximum. More images produce a more robust model but extend training time. A set of 15 to 50 clean, consistent images is a good starting point for most use cases.

File naming: Use only alphanumeric characters in file names. Special characters or symbols in file names can cause training to fail silently.

Cleaning and cropping: Remove backgrounds, watermarks, and irrelevant elements where possible. The more focused your training images are on the subject, the better the model will learn it.

Training your model, step by step

Step 1: Go to the Models page and start a new model

Navigate to the Models page from your Layer workspace. Click Build a New Model at the top of the page to begin the guided training flow. If you have already trained a model in another tool and want to bring it into Layer, select Bring in a model instead. Layer now supports uploading most externally trained LoRA models directly.

Step 2: Upload your training images

Upload your prepared images. Layer will automatically generate captions for each image once uploaded. These captions describe the content of each image and help the model understand what defines your style.

Review the captions before training. The auto-captions are usually accurate, but they can miss important details or mislabel specific elements, especially in complex or stylised images. You can edit any caption directly in the interface. Accurate captions make a meaningful difference to the quality of the trained model.

If any images did not receive captions after uploading, this often means those files are too large. Resize and re-upload them before proceeding.
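You can catch oversized files before uploading. As a sketch, this stdlib-only check reads a PNG's dimensions straight from its IHDR chunk (width and height are big-endian 32-bit integers at bytes 16 to 24) and flags anything over 4K on either side; the 4096-pixel threshold is an assumption based on the "over 4K" guidance above, and for other formats you would use a library such as Pillow instead:

```python
import struct
from pathlib import Path

MAX_SIDE = 4096  # assumed threshold for "over 4K"

def png_dimensions(data: bytes) -> tuple:
    """Extract (width, height) from raw PNG bytes.

    A PNG starts with an 8-byte signature, then the IHDR chunk:
    4-byte length, 4-byte type, then width and height as
    big-endian uint32 values at offsets 16-24.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])

def flag_oversized(folder: str) -> list:
    """Return paths of PNGs in `folder` that exceed MAX_SIDE."""
    flagged = []
    for f in Path(folder).glob("*.png"):
        w, h = png_dimensions(f.read_bytes())
        if w > MAX_SIDE or h > MAX_SIDE:
            flagged.append(f)
    return flagged
```

Anything returned by `flag_oversized` should be resized (for example to 1024x1024) before upload.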
Step 3: Fill in your model details

Give your model a name and select the settings that match your use case.

Choosing a base model

Flux: High visual quality, strong prompt adherence, and broad style range. The most reliable base model for most use cases and the recommended fallback if Qwen is experiencing issues.

Qwen: High visual quality and prompt adherence. Note that Qwen-based LoRAs can behave differently from Flux-based ones, particularly around similarity settings. If results look incorrect at high similarity percentages, try Flux instead.

Choosing a model type

Select the type that best describes the content you are training on: Single Character, Multiple Characters, In-Game Items, Backgrounds, Vehicles, Environmental Objects, UI, Icons and Symbols, or Other. This helps Layer optimise the training process for your use case.

Step 4: Review evaluation prompts

Layer automatically generates three evaluation prompts that will be used to produce sample outputs once training is complete. These are for preview purposes only and do not affect the training itself. You can edit them if you want the preview to reflect a more specific use case.

Step 5: Start training and wait for the notification

Once everything looks correct, click Train to begin. Training typically takes between 15 and 30 minutes depending on the base model and the number of images. You will receive a notification when training is complete.

If training has been running for more than an hour without completing, see the troubleshooting section below.

Step 6: Set prompt prefix and suffix

After training completes, you can configure prompt prefix and suffix settings to guide how the model is used during generation. See the Prompt Prefix and Suffix guide for full details.

Troubleshooting common issues

The Train button is greyed out or does nothing

This is almost always caused by a required field being left empty. Check the following before contacting support:

  • Confirm your model has a name.

  • Confirm all uploaded images have captions. Images without captions will block the Train button.

  • Confirm a base model and model type have been selected.

If all fields are filled in and the button is still unresponsive, try refreshing the page and re-entering the form. If the issue persists, contact support with your workspace and model link.

Training is stuck or taking too long

Training should complete within 15 to 30 minutes in most cases. If it has been running for more than an hour:

  • Check whether your images were over 4K in resolution. Oversized images are a common cause of training stalling and can also prevent auto-captioning from working.

  • If the training is marked as "in progress" but has clearly finished, this can be a display issue rather than an actual failure. Try refreshing the Models page. This has been seen more often with Qwen-based models. If it persists, switch to Flux as a temporary measure and contact support with your model link.

  • If cancelling and restarting does not help, contact support with a link to your model and they will investigate directly.

Model results look wrong or low quality

If your trained model is producing outputs that do not match the style you trained on, the most common causes are:

  • Inaccurate captions: If captions did not correctly describe the images, the model may not have learned the right associations. Consider retraining with manually reviewed and corrected captions.

  • Inconsistent training images: Mixed styles or varying image quality in the training set produce inconsistent outputs. Use a tighter, more uniform set of images.

  • Similarity setting too high or too low: When generating with your model, the similarity percentage controls how closely the output adheres to the trained style. Start around 40 to 60 percent and adjust from there. Very high similarity on Qwen-based models in particular can produce unexpected results.

  • Base model mismatch: A LoRA trained on Flux should be used with Flux during generation, and a Qwen-based LoRA with Qwen. Using a LoRA with a different base model than it was trained on will produce poor results.

A previously working model has dropped in quality

If a model that previously produced good results has noticeably degraded without any changes to your training data or prompts, contact support with a link to the model and examples of the before and after outputs. Include the specific style or model name and approximately when you first noticed the change.

Frequently Asked Questions

How many images do I need to train a model?

There is no hard minimum or maximum. A set of 15 to 50 clean, consistent images is a practical starting point. More images will increase training time but generally improve the model's range and consistency.

Can I train on more than 100 images?

Yes. There is no hard upper limit, but training time increases with image count due to the additional steps and GPU usage required. Very large training sets can also occasionally cause issues; if you encounter problems with a very large set, try reducing it and retraining.

What is the difference between Flux and Qwen as a base model?

Both produce high-quality results, but they behave differently, particularly around similarity settings. Flux tends to be more predictable and is the recommended fallback if Qwen produces unexpected results. Your LoRA must be used with the same base model it was trained on.

Can I upload a model I trained outside of Layer?

Yes. Most externally trained LoRA models can now be uploaded directly to Layer. Go to the Models page and select Bring in a model to get started. If your upload is failing, check that the file name contains only alphanumeric characters, as special characters can cause import failures.

Can I share my trained model with other workspaces?

Models are scoped to the workspace they were trained in. Workspace admins can manage model access within that workspace. If you need to use a model across multiple workspaces, contact support to discuss your options.

Can I use my trained model in Workflows?

Yes. Once training is complete, your model is available as a selectable option across Forge, Prompt Edit, and Workflows within your workspace.

Do trained models support video generation?

No. Custom trained LoRA models are used for image generation only. Video generation models do not currently support LoRA styles.


Having trouble with a training that will not complete, or results that do not match what you expected? Contact support with a link to your model and we will take a look.
