How do I create a Custom Model in Layer?

Learn how to train a custom AI model on FLUX, QWEN, SDXL, or BRIA in Layer so it learns your art style and use case - letting you keep game assets consistent and high quality.


Training a custom model (aka LoRA training) in Layer lets you generate assets that match a unique visual style - whether it’s a specific game art direction, a hand-drawn look, or something entirely custom.

This guide shows how to train your own custom model in Layer, step by step.


Step 1: Build a New Model

Once your training assets are ready (ideally cropped, cleaned up, and consistently formatted - or use Layer's built-in formatting tools inside the model trainer), head over to the Models page.

At the top center, click “Build a New Model.” This kicks off the guided style creation flow, creating a new style for your workspace.

If you’ve trained a model somewhere else and want to bring it into Layer, select “Bring in a model.”


Step 2: Upload Your Training Assets

For guidance on the best assets to upload when creating a custom AI model, check out our video tutorial. (The UI has changed a bit in the last year, but the logic in the video is the same.)

Once assets are uploaded, Layer will auto-caption each image. These captions describe what’s in the image and help the AI learn what defines your style. You can train with or without captions, depending on the training technique you want to use.

💡 Pro tip: Review the captions. They’re usually solid, but sometimes they miss key details or mislabel things — especially with complex images. You can edit the captions directly in the interface.
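Layer handles caption review in its own interface, but the idea generalizes: in typical LoRA pipelines, each image is paired with a text caption, and very short captions are the ones most likely to be missing key details. A minimal sketch of flagging suspiciously short captions, assuming a hypothetical local export where each image has a sibling `.txt` caption file (this layout and function are illustrative, not part of Layer):

```python
from pathlib import Path

def find_suspect_captions(folder: str, min_words: int = 4) -> list[str]:
    """Return names of images whose captions look too short to be descriptive.

    Assumes a hypothetical export layout where each image has a sibling
    .txt file holding its auto-generated caption.
    """
    suspects = []
    for caption_file in Path(folder).glob("*.txt"):
        words = caption_file.read_text(encoding="utf-8").split()
        if len(words) < min_words:
            suspects.append(caption_file.stem)
    return sorted(suspects)
```

Captions flagged this way are the ones worth editing by hand before training, exactly as the pro tip above suggests doing inside Layer's interface.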


Step 3: Fill in your model details

You can fill in the details using AI or enter them manually.

Base Models Explained

  • FLUX – A high-performance model developed by Black Forest Labs, known for its exceptional visual quality, prompt adherence, and style diversity.

  • QWEN – A high-performance model developed by Alibaba, also known for exceptional visual quality, prompt adherence, and style diversity.

  • Stable Diffusion XL (SDXL) – An open-source foundational AI model. Great for a wide range of use cases and highly customizable.

  • BRIA – A copyright-compliant, privacy-safe foundational AI model. Ideal for users with stricter legal or ethical guidelines, especially for commercial workflows.

Model Types Explained

  • Single Character – For creating poses, angles, and varying shots of characters.

  • In-Game Items – For items and equipment like trophies, gems, weapons, etc.

  • Backgrounds – For game backgrounds, including paintings, isometrics, & more.

  • Multiple Characters – For variations of a character type, like avatars in the same style.

  • Vehicles – For generating different vehicle types (cars, ships, bikes, etc.).

  • Environmental Objects – For props/structures like houses, trees, and scenery.

  • UI – For buttons, frames, menus, and other interface elements.

  • Icons and Symbols – For icons and visual symbols in a consistent style.

  • Other – If none of the above fit what you’re working on.

Evaluation Prompts Explained

Layer automatically creates three evaluation prompts, used to generate the first example outputs once the model is trained.

These don’t affect the actual training, but help preview what the results might look like once the model is ready.

Think of these as early test prompts, such as:

  • “A cute forest house with a chimney”

  • “A knight holding a glowing sword”

  • “Top-down view of a treasure chest”

Layer will automatically keep them simple and relevant to your intended use based on what you input in the training form.


Step 4: Train and Wait

Once everything is set, kick off the training.

Depending on the base model you selected, training can take as little as 15 minutes. You’ll get notified when it’s done.


Step 5: Enter Your Prompt Prefix and Suffixes
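Layer applies the prefix and suffixes for you, but conceptually this step is simple string composition: the prefix (often a trigger word or style tag) and suffix are wrapped around each prompt you type. A minimal sketch of that idea - the function name, separator, and example strings are illustrative assumptions, not Layer's actual API:

```python
def apply_affixes(prompt: str, prefix: str = "", suffix: str = "") -> str:
    """Wrap a user prompt with a model's prefix/suffix, skipping empty parts."""
    parts = [p.strip() for p in (prefix, prompt, suffix) if p.strip()]
    return ", ".join(parts)

# Example: a hypothetical style trigger word as the prefix,
# quality tags as the suffix.
apply_affixes("a treasure chest", prefix="pixelforge style", suffix="clean lines")
# → "pixelforge style, a treasure chest, clean lines"
```

Because the affixes are added automatically, you can keep your day-to-day prompts short and let the model's saved prefix and suffixes carry the style cues.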


You’re Ready to Forge

After training completes, your style is live. You can now use it in Forge to generate style-consistent assets, matched to the look and feel you trained the model on.
