Waifu Diffusion 1.4 Epoch 2 + 東北ずん子 (Tohoku Zunko) Project Official Illustrations: LoRA Creation Test

Type: LoRA
Base model: Waifu Diffusion 1.4 Epoch 2

Generation example 1
Prompt: <lora:tohoku_zunko-20230404.1-epoch-000010:1:OUTD>, (tohoku zunko girl:1), 1girl, solo, hairband, long hair, green hairband, japanese clothes, yellow eyes, smile, green hair, open mouth, upper body, white background, kimono, simple background, ahoge, very long hair, short kimono, looking at viewer, masterpiece, best quality, ultra-detailed
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)), deleted, old, oldest, ((censored)), ((bad aesthetic)), (mosaic censoring, bar censor, blur censor)
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4052543271, Size: 512x512, Model hash: 1f108d4ceb, Model: wd-1-4-anime_e2

Generation example 2
...
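The <lora:...:1:OUTD> token in the prompt above is the AUTOMATIC1111 Stable Diffusion WebUI syntax for activating a LoRA at a given weight during generation. As a rough sketch of the same step outside the WebUI (not the setup used in this post), the snippet below loads a base checkpoint and a LoRA file with the Hugging Face diffusers library; the base-model ID "hakurei/waifu-diffusion", the ./lora directory, and the .safetensors filename are placeholder assumptions rather than the exact files referenced here.

```python
# Rough sketch only, with placeholder names: applying a separately trained LoRA
# on top of a base Stable Diffusion checkpoint using the diffusers library.
# The post itself used the AUTOMATIC1111 WebUI and its <lora:name:weight> syntax.
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

# Placeholder base model ID; the post uses Waifu Diffusion 1.4 Epoch 2 (wd-1-4-anime_e2).
pipe = StableDiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion", torch_dtype=torch.float16
).to("cuda")

# Roughly corresponds to the "DPM++ 2M Karras" sampler in the parameters above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Placeholder directory and filename for the trained LoRA weights.
pipe.load_lora_weights(
    "./lora", weight_name="tohoku_zunko-20230404.1-epoch-000010.safetensors"
)

image = pipe(
    prompt=(
        "tohoku zunko girl, 1girl, solo, green hair, green hairband, "
        "japanese clothes, kimono, looking at viewer, masterpiece, best quality"
    ),
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, jpeg artifacts",
    num_inference_steps=20,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```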

April 15, 2023 · aoirint

Notes on Stable Diffusion + LoRA

LoRA (Low-Rank Adaptation) is a method proposed by Edward Hu et al. in 2021 for efficiently fine-tuning large language models.

https://github.com/microsoft/LoRA
https://arxiv.org/abs/2106.09685

From the paper's abstract: "An important paradigm of natural language processing consists of large-scale pretraining on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example – deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pretrained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA." ...
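As a minimal sketch of the mechanism the abstract describes (freezing the pretrained weight and training only a low-rank update), the snippet below wraps a PyTorch nn.Linear layer with trainable rank-decomposition matrices A and B. The class name LoRALinear, the rank r, and the alpha scaling value are illustrative assumptions for this sketch; the Microsoft repository's loralib package provides the actual implementation.

```python
# Minimal illustrative sketch (not the loralib implementation): the pretrained
# weight W of a Linear layer is frozen, and only the rank decomposition matrices
# A (r x d_in) and B (d_out x r) are trained, so the layer computes
# W x + (alpha / r) * B A x.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):  # hypothetical class name used only in this sketch
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A starts with small random values and B with zeros, so the wrapped
        # layer is initially identical to the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + self.scale * ((x @ self.lora_A.T) @ self.lora_B.T)


# Usage: wrap an existing layer; only lora_A and lora_B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=4)
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```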

April 15, 2023 · aoirint