
MimicTalk v1.1 Chinese Edition Offline All-in-One Package

崎山小鹿 posted on 2024-11-11 15:57:04
With just 15 minutes of training, you can build a high-quality, personalized digital-human model. The MimicTalk algorithm, jointly released by Zhejiang University and ByteDance, is now open source.
The result matches the real person in both appearance and speaking style. MimicTalk adapts a general-purpose 3D digital-human foundation model to a single target person using an efficient fine-tuning scheme that combines static and dynamic components, achieving efficient, personalized, high-quality digital-human video synthesis for the first time.
mimictalk.png

MimicTalk, developed jointly by Zhejiang University and ByteDance, is built on NeRF (Neural Radiance Fields) and can train a personalized, expressive 3D talking-face model in as little as 15 minutes.
Its core lies in an efficient fine-tuning strategy and in-context learning. Traditional personalized digital-human pipelines train a small model per person, which is slow and demands large amounts of high-quality data, while existing large general-purpose 3D digital-human models generate avatars quickly but often fall short in appearance similarity and speaking-style imitation. MimicTalk combines the strengths of both approaches.

Download the source files
GitHub:https://github.com/yerfor/MimicTalk

Download the 3DMM BFM (face model)
Baidu Netdisk: https://pan.baidu.com/share/init ... uxxblQ&pwd=m9q5
The file layout inside the project:
deep_3drecon/BFM/
├── 01_MorphableModel.mat
├── BFM_exp_idx.mat
├── BFM_front_idx.mat
├── BFM_model_front.mat
├── Exp_Pca.bin
├── facemodel_info.mat
├── index_mp468_from_mesh35709.npy
└── std_exp.txt
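After pulling the files from the netdisk, a quick check helps catch a missing or misplaced file before training starts. A minimal sketch (the `missing_bfm_files` helper is my own, not part of the project) that verifies the layout above:

```python
from pathlib import Path

# File names copied from the tree above; the directory is deep_3drecon/BFM.
REQUIRED_BFM_FILES = [
    "01_MorphableModel.mat",
    "BFM_exp_idx.mat",
    "BFM_front_idx.mat",
    "BFM_model_front.mat",
    "Exp_Pca.bin",
    "facemodel_info.mat",
    "index_mp468_from_mesh35709.npy",
    "std_exp.txt",
]

def missing_bfm_files(bfm_dir="deep_3drecon/BFM"):
    """Return the names of required BFM files that are not present yet."""
    root = Path(bfm_dir)
    return [name for name in REQUIRED_BFM_FILES if not (root / name).is_file()]
```

Run `missing_bfm_files()` from the repo root; an empty list means the download is complete.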

Download the pretrained models
https://pan.baidu.com/share/init ... qsThUg&pwd=mimi
The file layout inside the project:
checkpoints/
├── mimictalk_orig
│   └── os_secc2plane_torso
│       ├── config.yaml
│       └── model_ckpt_steps_100000.ckpt
├── 240112_icl_audio2secc_vox2_cmlr
│   ├── config.yaml
│   └── model_ckpt_steps_1856000.ckpt
└── pretrained_ckpts
    └── mit_b0.pth

checkpoints_mimictalk/
└── German_20s
    ├── config.yaml
    └── model_ckpt_steps_10000.ckpt

Launch the Gradio WebUI
python inference/app_mimictalk.py

Jinshuangshi Technology developed a Chinese-language WebUI page; launch it with:
python inference/app_mimictalkcn.py

mimictalkff.png
Open it in a browser:
微信截图_20241204214811.png

Upload your assets on the page, click the Training button to train a person-specific MimicTalk model, then click the Generate button to run inference on any audio and style:

MimicTalk training command
python inference/train_mimictalk_on_a_video.py --video_id data/raw/videos/German_20s.mp4 --max_updates 2000 --work_dir checkpoints_mimictalk/German_20s
训练www.png

MimicTalk inference command
python inference/mimictalk_infer.py --drv_aud data/raw/examples/金双石男.mp3 --drv_pose data/raw/examples/German_20s.mp4 --drv_style data/raw/examples/German_20s.mp4 --bg_img data/raw/examples/bg.png --out_name output.mp4 --out_mode final
推理.png

You can also build your own custom digital human. I made one from my own likeness; after 20,000 training steps over 20 hours, here is the result:


To get the one-click all-in-one package, add me on WeChat: qishanxiaolu

References:
https://news.sohu.com/a/823186926_121798711

Chinese documentation
https://github.com/yerfor/MimicTalk/blob/main/README-zh.md
Project homepage:
https://mimictalk.github.io/?utm_source=ai-bot.cn
Had Heaven not produced Mozi, all ages would be as endless night! Run commerce by Mohism, support Mohism through commerce. Jinshuangshi Technology is always hiring R&D talent! WeChat: qishanxiaolu   Tel: 15876572365   Company: Shenzhen Jinshuangshi Technology Co., Ltd.

崎山小鹿 (OP) replied on 2024-11-26 19:46:30
Installing conda
conda is an open-source package and environment manager for installing multiple versions of software packages and their dependencies and switching between them easily. It was created for Python, runs on Linux, macOS and Windows, and can also package and distribute other software.
conda ships in two distributions: Anaconda, a full bundle that includes many commonly used libraries, and Miniconda, a minimal one (just conda, pip, zlib, Python and their required packages); everything else can be installed as needed with `conda install <package>`.

Miniconda: https://conda.io/miniconda.html
Anaconda: https://www.anaconda.com/download
Run the installer; no separate Python runtime is needed. During installation an Advanced options screen appears: the first option adds Anaconda's path to the PATH environment variable, the second makes the conda-installed Python the system default.
Check that conda installed correctly; a version number means success:
conda --version

Conda package management
List installed packages
conda list

Update a package
conda update <package>

Remove a package
conda remove <package>

List installable Python versions
conda search python

Create an environment with a specific Python version
conda create -n test python=3.8

Activate an environment
conda activate test

Leave the current environment (return to base)
conda activate

List all environments
conda env list

Remove an environment
conda env remove -n test

conda can create many mutually isolated Python environments on one machine:

# Syntax
conda create --name <env_name> python=<version> [package_name1] [package_name2] [...]
# Example: create an environment named PaddleOCR with Python 3.7
conda create --name PaddleOCR python=3.7

Switching Conda environments
As noted above, conda can host multiple isolated Python environments; switch between them with conda activate env_name.

# Syntax
conda activate env_name
# Example: switch to the PaddleOCR environment
conda activate PaddleOCR

To leave this environment and return to base:

# Leave the current environment
conda deactivate

Listing the Conda environments on your machine
When many conda environments are installed, conda env list shows all of them.

# List all conda environments on this machine
conda env list

Removing a Conda environment
If an environment is no longer needed, remove it with conda remove:

# Syntax
conda remove --name <env_name> --all
# Example
conda remove --name PaddleOCR --all

崎山小鹿 (OP) replied on 2024-11-26 23:30:46
Installing the Python packages
Run:
pip install -r docs/prepare_env/requirements.txt -v

Error: cannot install -r docs/prepare_env/requirements.txt (line 76) and httpx==0.23.3 because these package versions have conflicting dependencies.

The conflict is caused by:
    the user requested httpx==0.23.3
    gradio 4.43.0 depends on httpx>=0.24.1

To fix this you could try to:
1. loosen the version ranges you have specified
2. remove the package versions to let pip try to resolve the conflict

Change the httpx pin to 0.24.1 and continue the installation.
Successfully installed Cython-3.0.11 PyMCubes-0.1.6 absl-py-2.1.0 aiofiles-23.2.1 annotated-types-0.7.0 anyio-4.6.2.post1 attrs-24.2.0 audioread-3.0.1 av-13.1.0 beartype-0.19.0 certifi-2024.8.30 cffi-1.17.1 charset-normalizer-3.4.0 click-8.1.7 colorama-0.4.6 configargparse-1.7 contourpy-1.3.0 cycler-0.12.1 dearpygui-2.0.0 decorator-4.4.2 decord-0.6.0 einops-0.8.0 exceptiongroup-1.2.2 face_alignment-1.4.1 faiss-cpu-1.8.0.post1 fastapi-0.112.2 ffmpeg-python-0.2.0 ffmpy-0.4.0 filelock-3.16.1 flatbuffers-24.3.25 fonttools-4.55.0 fsspec-2024.10.0 future-1.0.0 gradio-4.43.0 gradio-client-1.3.0 grpcio-1.68.0 h11-0.14.0 httpcore-0.17.3 httpx-0.24.1 huggingface-hub-0.26.2 idna-3.10 imageio-2.36.0 imageio_ffmpeg-0.5.1 importlib-metadata-8.5.0 importlib-resources-6.4.5 jax-0.4.30 jaxlib-0.4.30 jinja2-3.1.4 joblib-1.4.2 kiwisolver-1.4.7 kornia-0.5.0 lazy-loader-0.4 librosa-0.9.2 llvmlite-0.39.1 lpips-0.1.4 markdown-3.7 markdown-it-py-3.0.0 markupsafe-2.1.5 matplotlib-3.9.2 mdurl-0.1.2 mediapipe-0.10.18 ml-dtypes-0.5.0 moviepy-1.0.3 mpmath-1.3.0 munch-4.0.0 networkx-3.2.1 ninja-1.11.1.2 numba-0.56.4 numpy-1.23.5 opencv-contrib-python-4.10.0.84 opencv_python-4.10.0.84 opt-einsum-3.4.0 orjson-3.10.12 packaging-24.2 pandas-2.2.3 pillow-10.4.0 platformdirs-4.3.6 pooch-1.8.2 praat-parselmouth-0.4.5 pretrainedmodels-0.7.4 proglog-0.1.10 protobuf-4.25.5 pyaudio-0.2.14 pycparser-2.22 pydantic-2.10.2 pydantic-core-2.27.1 pydub-0.25.1 pygments-2.18.0 pyloudnorm-0.1.1 pyparsing-3.2.0 pypinyin-0.42.0 python-dateutil-2.9.0.post0 python-multipart-0.0.17 python_speech_features-0.6 pytz-2024.2 pyworld-0.2.1rc0 pyyaml-6.0.2 regex-2024.11.6 requests-2.32.3 resampy-0.4.3 rich-13.9.4 ruff-0.8.0 safetensors-0.4.5 scikit-image-0.24.0 scikit-learn-1.5.2 scipy-1.11.1 semantic-version-2.10.0 sentencepiece-0.2.0 shellingham-1.5.4 six-1.16.0 sniffio-1.3.1 sounddevice-0.5.1 soundfile-0.12.1 starlette-0.38.6 sympy-1.13.1 tensorboard-2.18.0 tensorboard-data-server-0.7.2 tensorboardX-2.6.2.2 textgrid-1.6.1 
threadpoolctl-3.5.0 tifffile-2024.8.30 timm-1.0.11 tokenizers-0.20.4 tomlkit-0.12.0 torch-2.5.1 torchdiffeq-0.2.5 torchode-1.0.0 torchshow-0.5.1 torchtyping-0.1.5 torchvision-0.20.1 tqdm-4.67.1 transformers-4.46.3 trimesh-4.5.2 typeguard-2.13.3 typer-0.13.1 typing-extensions-4.12.2 tzdata-2024.2 urllib3-2.2.3 uvicorn-0.32.1 webrtcvad-2.0.10 websocket-client-1.8.0 websockets-12.0 werkzeug-3.1.3 zipp-3.21.0
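The pin change can also be scripted instead of edited by hand. A small sketch (the function name is mine) that rewrites the conflicting line in the requirements text:

```python
import re

def relax_httpx_pin(requirements_text):
    """Rewrite the pin httpx==0.23.3 to httpx==0.24.1, the minimum
    version that gradio 4.43.0 accepts."""
    return re.sub(r"^httpx==0\.23\.3$", "httpx==0.24.1",
                  requirements_text, flags=re.MULTILINE)
```

Apply it to the contents of docs/prepare_env/requirements.txt and write the result back before rerunning pip install.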


=================
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121
Output:
Successfully installed torch-2.4.0+cu121 torchaudio-2.4.0+cu121 torchvision-0.19.0+cu121

Remove the CUDA v12.3 entries from the system PATH variable:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\libnvvp
=================
pip install openmim==0.3.9

=====================
Installing CUDA
Check the CUDA version:
nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Feb__8_05:53:42_Coordinated_Universal_Time_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

====================
Installing pytorch3d
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"

Installed successfully:
Successfully built pytorch3d iopath
Installing collected packages: portalocker, iopath, pytorch3d
Successfully installed iopath-0.1.10 portalocker-3.0.0 pytorch3d-0.7.8

If pip cannot install it, build from source instead.
Install from a local clone on Windows: depending on your PyTorch version, some PyTorch headers may need patching before compilation; such issues are discussed frequently in the repository's issue tracker. After any necessary patching, open the "x64 Native Tools Command Prompt for VS 2019" to compile and install. Installing from a local clone works in most cases; you can also download the zip and extract it instead of cloning.
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d && pip install -e .

or
cd pytorch3d
python3 setup.py install

Verify the installation with:
python -m unittest discover -v -s tests -t .

Ran 302 tests in 16.836s

FAILED (errors=360)

Running the test
python -m unittest discover -v -s tests -t .
reports the error:
lib.site-packages.tests (unittest.loader._FailedTest) ... ERROR


The simplest way to install pytorch3d:
https://blog.csdn.net/weixin_43357695/article/details/126063091

崎山小鹿 (OP) replied on 2024-11-27 11:59:14
Opening the WebUI

python inference/app_mimictalk.py

(MimicTalk39) G:\ProgramData\anaconda3\envs\MimicTalk39>python inference/app_mimictalk.py
G:\ProgramData\anaconda3\envs\MimicTalk39
| WARN: checkpoints\240112_icl_audio2secc_vox2_cmlr\audio2secc_vae.yaml not exist.
| load 'model' from 'checkpoints/240112_icl_audio2secc_vox2_cmlr\model_ckpt_steps_1856000.ckpt', strict=True
| WARN: checkpoints_mimictalk\German_20s\secc_img2plane.yaml not exist.
G:\ProgramData\anaconda3\envs\MimicTalk39\lib\site-packages\mmengine\optim\optimizer\zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
lora_args_secc2plane:  {'lora_mode': 'secc2plane_sr', 'lora_r': 2}
lora_args_sr:  {'lora_mode': 'secc2plane_sr', 'lora_r': 2}
| load 'model' from 'checkpoints_mimictalk/German_20s/model_ckpt_steps_10000.ckpt', strict=True
| load 'learnable_triplane' from 'checkpoints_mimictalk/German_20s/model_ckpt_steps_10000.ckpt', strict=True
----------------------------------------
Model loading is finished.
----------------------------------------
----------------------------------------
Gradio page is constructed.
----------------------------------------
Running on local URL:  http://127.0.0.1:7860
INFO:httpx:HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"

To create a public link, set `share=True` in `launch()`.



Open a browser:
http://127.0.0.1:7860/

崎山小鹿 (OP) replied on 2024-11-27 13:43:57
Training on a video
python inference/train_mimictalk_on_a_video.py --video_id data/raw/videos/German_20s.mp4 --max_updates 2000 --work_dir checkpoints_mimictalk/German_20s

Parameter explanation:
python inference/train_mimictalk_on_a_video.py \
--video_id <PATH_TO_SOURCE_VIDEO> \
--max_updates <UPDATES_NUMBER> \
--work_dir <PATH_TO_SAVING_CKPT>

training lora...:  80%|██████████████████████████████████████████████▋           | 1610/2001 [3:12:13<49:48,  7.64s/it]Iter 1611: total_loss=0.13451951816678048  v2v_occlusion_reg_l1_loss=0.5819548964500427,  v2v_occlusion_2_reg_l1_loss=0.3143523037433624,  v2v_occlusion_2_weights_entropy_loss=0.2824477553367615,  density_weight_l2_loss=0.06278861314058304,  density_weight_entropy_loss=0.28357046842575073,  mse_loss=0.031486544758081436,  head_mse_loss=0.020894166082143784,  lpips_loss=0.04962434247136116,  head_lpips_loss=0.013046118430793285,  lip_mse_loss=0.035146500915288925,  lip_lpips_loss=0.01198840606957674,  blink_reg_loss=0.15494771301746368,  triplane_reg_loss=2.2314882278442383,  secc_reg_loss=0.0005266871303319931,
...
training lora...:  99%|█████████████████████████████████████████████████████████▋| 1990/2001 [4:05:42<01:38,  8.94s/it]Iter 1991: total_loss=0.1247451014816761  v2v_occlusion_reg_l1_loss=0.571661651134491,  v2v_occlusion_2_reg_l1_loss=0.32925185561180115,  v2v_occlusion_2_weights_entropy_loss=0.2831133008003235,  density_weight_l2_loss=0.06824828684329987,  density_weight_entropy_loss=0.28271210193634033,  mse_loss=0.035135187208652496,  head_mse_loss=0.026090573519468307,  lpips_loss=0.05605084449052811,  head_lpips_loss=0.015292336232960224,  lip_mse_loss=0.026103070005774498,  lip_lpips_loss=0.012671086937189102,  blink_reg_loss=0.1233232319355011,  triplane_reg_loss=2.664921760559082,  secc_reg_loss=0.0011901289690285921,
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [01:47<00:00,  2.32it/s]
Iter 2001: total_loss=0.1094050370156765  v2v_occlusion_reg_l1_loss=0.5772683024406433,  v2v_occlusion_2_reg_l1_loss=0.3171294033527374,  v2v_occlusion_2_weights_entropy_loss=0.2925894856452942,  density_weight_l2_loss=0.06010424718260765,  density_weight_entropy_loss=0.2848914861679077,  mse_loss=0.03368102386593819,  head_mse_loss=0.02160782180726528,  lpips_loss=0.048644281923770905,  head_lpips_loss=0.013996387831866741,  lip_mse_loss=0.030205246061086655,  lip_lpips_loss=0.00717564020305872,  blink_reg_loss=0.11074436455965042,  triplane_reg_loss=2.6760036945343018,  secc_reg_loss=0.00035398523323237896,
training lora...: 100%|██████████████████████████████████████████████████████████| 2001/2001 [4:09:09<00:00,  7.47s/it]
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [03:38<00:00,  1.14it/s]


Below are some issues that came up during training, with their fixes.
===============================================
Error:
    from utils.commons.hparams import hparams, set_hparams  # manages hyperparameters
ModuleNotFoundError: No module named 'utils'

Fixed by changing the working directory into the project root (under the conda directory) before launching the script.
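The import fails because Python cannot see the repository's top-level `utils` package when the script is launched from elsewhere. Besides cd-ing into the project root, a small sketch (the helper name `ensure_repo_on_path` is my own) that prepends the root to `sys.path` achieves the same:

```python
import os
import sys

def ensure_repo_on_path(repo_root):
    """Prepend the MimicTalk checkout root to sys.path so that
    `from utils.commons.hparams import hparams` can resolve."""
    repo_root = os.path.abspath(repo_root)
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)
    return repo_root
```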
===============================================

cd <MimicTalkRoot>
source <CondaRoot>/bin/activate
conda create -n mimictalk python=3.9
conda activate mimictalk

If you get "'source' is not recognized as an internal or external command", you are probably on Windows: source is a shell builtin on Unix-like systems (Linux or macOS).
On Windows, skip the source step and activate the environment directly with conda activate.

===============================================
Error:
File "G:\ProgramData\anaconda3\envs\MimicTalk-main\inference\train_mimictalk_on_a_video.py", line 163, in load_training_data
    bg_img = torch.tensor(cv2.imread(img_name)[..., ::-1] / 127.5 - 1).permute(2,0,1).float() # [3, H, W]
TypeError: 'NoneType' object is not subscriptable



[ WARN:0@245.781] global loadsave.cpp:241 cv::findDecoder imread_('data/processed/videos/German_20s/head_imgs/00000000.png'): can't open/read file: check file path/integrity



  File "G:\ProgramData\anaconda3\envs\MimicTalk39\inference\app_mimictalk.py", line 67, in train_once_args
    trainer.training_loop(inp)
  File "G:\ProgramData\anaconda3\envs\MimicTalk39\inference\train_mimictalk_on_a_video.py", line 196, in training_loop
    img = torch.tensor(cv2.imread(img_name)[..., ::-1] / 127.5 - 1).permute(2,0,1).float().cuda().float() # [3, H, W]
TypeError: 'NoneType' object is not subscriptable
Training ERROR: 'NoneType' object is not subscriptable

==============================================

File "G:\ProgramData\anaconda3\envs\MimicTalk39\data_gen\utils\process_video\extract_segment_imgs.py", line 266, in generate_segment_imgs_job
    out_img, mask = seg_model._seg_out_img_with_segmap(img, segmap, mode=mode)
AttributeError: 'NoneType' object has no attribute '_seg_out_img_with_segmap'
seg_model is None here, i.e. it was never initialized or assigned.

It comes from this task:
python data_gen/utils/process_video/extract_segment_imgs.py --ds_name=nerf --vid_dir={target_video_path}

python data_gen/utils/process_video/extract_segment_imgs.py --ds_name=nerf --vid_dir=data/raw/videos/German_20s.mp4


The line out_img_name = segmap_name = img_name.replace("/gt_imgs/", "/segmaps/").replace(".jpg", ".png") works on Linux but breaks on Windows, where paths use backslashes.

The key fix is in this function:
# Given an image path, a segmentation map (segmap), and the image itself, generate and save several segmentation outputs
def generate_segment_imgs_job(img_name, segmap, img):
    out_img_name = segmap_name = img_name.replace("/gt_imgs", "/segmaps").replace(".jpg", ".png") # saving as jpg would distort pixel values
    try: os.makedirs(os.path.dirname(out_img_name), exist_ok=True) # create the output directory if it does not exist
    except: pass
    encoded_segmap = encode_segmap_mask_to_image(segmap) # convert the segmap into an RGB image suitable for saving
    save_rgb_image_to_path(encoded_segmap, out_img_name)
    seg_model = MediapipeSegmenter()
    for mode in ['head', 'torso', 'person', 'bg']:
        out_img, mask = seg_model._seg_out_img_with_segmap(img, segmap, mode=mode)
        img_alpha = 255 * np.ones((img.shape[0], img.shape[1], 1), dtype=np.uint8) # add an alpha channel
        mask = mask[0][..., None]
        img_alpha[~mask] = 0
        out_img_name = img_name.replace("/gt_imgs", f"/{mode}_imgs").replace(".jpg", ".png")
        save_rgb_alpha_image_to_path(out_img, img_alpha, out_img_name)
    # torso inpainting
    inpaint_torso_img, inpaint_torso_img_mask, inpaint_torso_with_bg_img, inpaint_torso_with_bg_img_mask = inpaint_torso_job(img, segmap)
    img_alpha = 255 * np.ones((img.shape[0], img.shape[1], 1), dtype=np.uint8) # alpha
    img_alpha[~inpaint_torso_img_mask[..., None]] = 0
    out_img_name = img_name.replace("/gt_imgs", f"/inpaint_torso_imgs").replace(".jpg", ".png")
    save_rgb_alpha_image_to_path(inpaint_torso_img, img_alpha, out_img_name)
    return segmap_name

The fix is to add seg_model = MediapipeSegmenter() inside the function; the string replace also causes trouble on Windows and needs changing as well.
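One way to make the replace separator-safe (helper name and signature are my own sketch, not the project's code) is to normalise backslashes to forward slashes before swapping the directory component:

```python
from pathlib import PurePosixPath

def map_processed_path(img_name, src_dir="gt_imgs", dst_dir="segmaps",
                       new_suffix=".png"):
    """Replace one directory component and the file extension in a frame
    path, treating '/' and '\\' alike so Windows paths behave like Linux."""
    parts = str(img_name).replace("\\", "/").split("/")
    parts = [dst_dir if p == src_dir else p for p in parts]
    return str(PurePosixPath(*parts).with_suffix(new_suffix))
```

With dst_dir set to e.g. f"{mode}_imgs", the same helper covers the other replace calls in generate_segment_imgs_job as well.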


=====================================
Error:
RuntimeError: Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "G:\ProgramData\anaconda3\envs\MimicTalk39\modules\eg3ds\torch_utils\custom_ops.py".

Fix:
Edit custom_ops.py so that _find_compiler_bindir() matches your actual Visual Studio installation path (the d:/ drive letters below are for this machine; adjust them to yours):
def _find_compiler_bindir():
    patterns = [
        'd:/Program Files/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
        'd:/Program Files/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
        'd:/Program Files/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
        'd:/Program Files/Microsoft Visual Studio */vc/bin',
    ]
    for pattern in patterns:
        matches = sorted(glob.glob(pattern))
        if len(matches):
            return matches[-1]
    return None

崎山小鹿 (OP) replied on 2024-11-27 15:06:06
Generating a video
python inference/mimictalk_infer.py \
--drv_aud data/raw/examples/Obama_5s.wav \
--drv_pose data/raw/examples/German_20s.mp4 \
--drv_style data/raw/examples/German_20s.mp4 \
--bg_img data/raw/examples/bg.png \
--out_name output.mp4 \
--out_mode final

python inference/mimictalk_infer.py --drv_aud data/raw/examples/Obama_5s.wav --drv_pose data/raw/examples/German_20s.mp4 --drv_style data/raw/examples/German_20s.mp4 --bg_img data/raw/examples/bg.png --out_name output.mp4 --out_mode final


python inference/mimictalk_infer.py \
--drv_aud <PATH_TO_AUDIO> \
--drv_style <PATH_TO_STYLE_VIDEO, OPTIONAL> \
--drv_pose <PATH_TO_POSE_VIDEO, OPTIONAL> \
--bg_img <PATH_TO_BACKGROUND_IMAGE, OPTIONAL> \
--out_name <PATH_TO_OUTPUT_VIDEO, OPTIONAL>

===========================
Error 1:
File "G:\ProgramData\anaconda3\envs\MimicTalk39\lib\site-packages\soundfile.py", line 1216, in _open
    raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening 'data/raw/examples/Obama_5s3a3311e1-aca9-11ef-a8ec-fcaa14e5b2b1_16k.wav': System error.

============================
Error 2:
MimicTalk is rendering frames: 100%|█████████████████████████████████████████████████| 248/248 [01:08<00:00,  3.64it/s]
Imageio is saving video: 100%|██████████████████████████████████████████████████████| 248/248 [00:01<00:00, 226.16it/s]
The system cannot find the path specified.
'rm' is not recognized as an internal or external command,
operable program or batch file.

Saved at output.mp4

Fix:
Modify the source of mimictalk_infer.py:
        if inp['drv_audio_name'][-4:] in ['.wav', '.mp3']: # only handle .wav and .mp3 audio
            # os.system(f"ffmpeg -i {debug_name} -i {inp['drv_audio_name']} -y -v quiet -shortest {out_fname}")
            # cmd = f"/usr/bin/ffmpeg -i {debug_name} -i {self.wav16k_name} -y -r 25 -ar 16000 -c:v copy -c:a libmp3lame -pix_fmt yuv420p -b:v 2000k  -strict experimental -shortest {out_fname}"
            cmd = f"ffmpeg -i {debug_name} -i {self.wav16k_name} -y -r 25 -ar 16000 -c:v copy -c:a libmp3lame -pix_fmt yuv420p -b:v 2000k  -strict experimental -shortest {out_fname}"
            os.system(cmd)
            # os.system(f"rm {debug_name}") # delete the intermediate video after muxing ('rm' does not exist on Windows)
            # delete the file in a cross-platform way
            if os.path.exists(debug_name):  # make sure the file exists
                os.remove(debug_name)
        else: # the audio file is neither .wav nor .mp3
            ret = os.system(f"ffmpeg -i {debug_name} -i {inp['drv_audio_name']} -map 0:v -map 1:a -y -v quiet -shortest {out_fname}")
            if ret != 0: # no audio could be extracted from drv_audio_name, so output a video without an audio track
                # os.system(f"mv {debug_name} {out_fname}") # 'mv' would rename the raw video to {out_fname}; not available on Windows
                # rename the file in a cross-platform way
                if os.path.exists(debug_name):  # check the file exists
                    os.rename(debug_name, out_fname)
        print(f"Saved at {out_fname}")

崎山小鹿 (OP) replied on 2024-11-29 10:42:05
Training the yumo video
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/yumo_30s.mp4 --max_updates 2000 --work_dir checkpoints_mimictalk/yumo_30s

training lora...:  99%|█████████████████████████████████████████████████████████▋| 1990/2001 [3:27:32<00:56,  5.16s/it]Iter 1991: total_loss=0.20095175951719285  v2v_occlusion_reg_l1_loss=0.6070544719696045,  v2v_occlusion_2_reg_l1_loss=0.353074848651886,  v2v_occlusion_2_weights_entropy_loss=0.22927413880825043,  density_weight_l2_loss=0.06759260594844818,  density_weight_entropy_loss=0.23434485495090485,  mse_loss=0.04891114681959152,  head_mse_loss=0.01373131200671196,  lpips_loss=0.07807260751724243,  head_lpips_loss=0.00782869104295969,  lip_mse_loss=0.05370587855577469,  lip_lpips_loss=0.059812031686306,  blink_reg_loss=0.06298534572124481,  triplane_reg_loss=2.8589870929718018,  secc_reg_loss=0.0007375427521765232,
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [05:38<00:00,  1.35s/it]
Iter 2001: total_loss=0.20086731910705566  v2v_occlusion_reg_l1_loss=0.5791127681732178,  v2v_occlusion_2_reg_l1_loss=0.3529333770275116,  v2v_occlusion_2_weights_entropy_loss=0.22007043659687042,  density_weight_l2_loss=0.0645519271492958,  density_weight_entropy_loss=0.2498457282781601,  mse_loss=0.07357382774353027,  head_mse_loss=0.011108612641692162,  lpips_loss=0.1015414148569107,  head_lpips_loss=0.0076354071497917175,  lip_mse_loss=0.03572188690304756,  lip_lpips_loss=0.04669719189405441,  blink_reg_loss=0.11848977208137512,  triplane_reg_loss=2.869716167449951,  secc_reg_loss=0.0006351692136377096,
training lora...: 100%|██████████████████████████████████████████████████████████| 2001/2001 [3:34:10<00:00,  6.42s/it]
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [04:36<00:00,  1.10s/it]

Generating the yumo video
To generate with the newly trained checkpoint, you must also set --torso_ckpt checkpoints_mimictalk/yumo_30s; command:
python inference/mimictalk_infer.py --drv_aud data/raw/examples/yumo1.wav --drv_pose data/raw/examples/yumo_30s.mp4 --drv_style data/raw/examples/yumo_30s.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/yumo_30s  --out_name yumo_jin.mp4 --out_mode final

Test result:
the tie area and the hair come out noticeably blurry

Continued training for another 10,000 steps, which took 12 hours:
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/yumo_76s_clear.mp4  --torso_ckpt checkpoints_mimictalk/yumo_30s  --max_updates 10000 --work_dir checkpoints_mimictalk/yumo_76s_clear

training lora...: 100%|███████████████████████████████████████████████████████▉| 9990/10001 [12:09:49<00:46,  4.24s/it]Iter 9991: total_loss=0.1609121561050415  v2v_occlusion_reg_l1_loss=0.5904503464698792,  v2v_occlusion_2_reg_l1_loss=0.3447068929672241,  v2v_occlusion_2_weights_entropy_loss=0.16552835702896118,  density_weight_l2_loss=0.05787011608481407,  density_weight_entropy_loss=0.2808408737182617,  mse_loss=0.049534209072589874,  head_mse_loss=0.02492799609899521,  lpips_loss=0.06391364336013794,  head_lpips_loss=0.01891251839697361,  lip_mse_loss=0.07777386158704758,  lip_lpips_loss=0.11438668519258499,  blink_reg_loss=0.1107407659292221,  triplane_reg_loss=6.553991317749023,  secc_reg_loss=0.000695661292411387,
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [03:38<00:00,  1.14it/s]
Iter 10001: total_loss=0.14759396761655807  v2v_occlusion_reg_l1_loss=0.5839571952819824,  v2v_occlusion_2_reg_l1_loss=0.3442821204662323,  v2v_occlusion_2_weights_entropy_loss=0.18935051560401917,  density_weight_l2_loss=0.06396747380495071,  density_weight_entropy_loss=0.2656245231628418,  mse_loss=0.035295695066452026,  head_mse_loss=0.007942741736769676,  lpips_loss=0.038584787398576736,  head_lpips_loss=0.003056813031435013,  lip_mse_loss=0.039540667086839676,  lip_lpips_loss=0.03253927454352379,  blink_reg_loss=0.09908352792263031,  triplane_reg_loss=6.555943965911865,  secc_reg_loss=0.00047020273632369936,
training lora...: 100%|███████████████████████████████████████████████████████| 10001/10001 [12:14:36<00:00,  4.41s/it]
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [03:01<00:00,  1.38it/s]

Testing the result:
python inference/mimictalk_infer.py --drv_aud data/raw/examples/yumo_jinshuangshi_15s.wav --drv_pose data/raw/examples/yumo_76s_clear.mp4 --drv_style data/raw/examples/yumo_76s_clear.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/yumo_76s_clear  --out_name yumo_76s_jinshuangshi.mp4 --out_mode final

Test result: yumo_76s_clear is much sharper, but the lower lip does not close properly.
Switching to the 30s pose file helped a lot, though a few mouth shapes still look unnatural.
Removing the background music from the pose file improved things further.
Changing the background again:
python inference/mimictalk_infer.py --drv_aud data/raw/examples/yumo_jinshuangshi_15s.wav --drv_pose data/raw/examples/yumo_30s_clear.mp4 --drv_style data/raw/examples/yumo_30s_clear.mp4 --bg_img data/raw/examples/yumo_bg.jpg  --torso_ckpt checkpoints_mimictalk/yumo_76s12k_clear  --out_name infer_out/tmp/yumo_76s12k_jinshuangshi.mp4 --out_mode final

崎山小鹿 (OP) replied on 2024-11-29 17:52:43
Training the liushiqi video: 6K steps over 12 hours
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_130s.mp4 --max_updates 2000 --work_dir checkpoints_mimictalk/liushiqi_130s

training lora...:   0%|▎                                                           | 10/2001 [02:11<3:00:13,  5.43s/it]Iter 11: total_loss=0.3888101190328598  v2v_occlusion_reg_l1_loss=0.607401967048645,  v2v_occlusion_2_reg_l1_loss=0.3398691415786743,  v2v_occlusion_2_weights_entropy_loss=0.12148157507181168,  density_weight_l2_loss=0.025346789509058,  density_weight_entropy_loss=0.22934022545814514,  mse_loss=0.06664450466632843,  head_mse_loss=0.02819095179438591,  lpips_loss=0.10811140388250351,  head_lpips_loss=0.029408320784568787,  lip_mse_loss=0.13061849772930145,  lip_lpips_loss=0.08910778164863586,  blink_reg_loss=0.024365829303860664,  triplane_reg_loss=0.030589034780859947,  secc_reg_loss=0.0005546677857637405,
...
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [02:29<00:00,  1.67it/s]
Iter 2001: total_loss=0.14735968708992003  v2v_occlusion_reg_l1_loss=0.5926839709281921,  v2v_occlusion_2_reg_l1_loss=0.33623775839805603,  v2v_occlusion_2_weights_entropy_loss=0.11574846506118774,  density_weight_l2_loss=0.04073842987418175,  density_weight_entropy_loss=0.23001746833324432,  mse_loss=0.034735675901174545,  head_mse_loss=0.009863680228590965,  lpips_loss=0.04859258607029915,  head_lpips_loss=0.006719035562127829,  lip_mse_loss=0.07170353829860687,  lip_lpips_loss=0.033719874918460846,  blink_reg_loss=0.2683372497558594,  triplane_reg_loss=3.0655908584594727,  secc_reg_loss=0.00040462621836923063,
training lora...: 100%|██████████████████████████████████████████████████████████| 2001/2001 [2:33:07<00:00,  4.59s/it]
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [02:52<00:00,  1.45it/s]

Command to continue training liushiqi:
python inference/train_mimictalk_on_a_video.py  --torso_ckpt checkpoints_mimictalk/liushiqi_130s --video_id data/raw/examples/liushiqi_130s.mp4 --max_updates 6000 --work_dir checkpoints_mimictalk/liushiqi_130s

4K
training lora...:  66%|█████████████████████████████████████▏                  | 3990/6001 [5:56:31<3:20:08,  5.97s/it]Iter 3991: total_loss=0.12766672372817994  v2v_occlusion_reg_l1_loss=0.5775173306465149,  v2v_occlusion_2_reg_l1_loss=0.3366568684577942,  v2v_occlusion_2_weights_entropy_loss=0.11709850281476974,  density_weight_l2_loss=0.061210133135318756,  density_weight_entropy_loss=0.2922610640525818,  mse_loss=0.02765999548137188,  head_mse_loss=0.009524929337203503,  lpips_loss=0.033704791218042374,  head_lpips_loss=0.004674407187849283,  lip_mse_loss=0.05166402459144592,  lip_lpips_loss=0.01862962730228901,  blink_reg_loss=0.12633971869945526,  triplane_reg_loss=4.3284454345703125,  secc_reg_loss=0.0008574479725211859,
testing lora...: 100%|██████████████████████████████████████████████████████████████████████████████████| 250/250 [01:11<00:00,  3.51it/s]
Iter 4001: total_loss=0.13310704231262208  v2v_occlusion_reg_l1_loss=0.5834369659423828,  v2v_occlusion_2_reg_l1_loss=0.3320615291595459,  v2v_occlusion_2_weights_entropy_loss=0.11004718393087387,  density_weight_l2_loss=0.06258229911327362,  density_weight_entropy_loss=0.2984924912452698,  mse_loss=0.03138240799307823,  head_mse_loss=0.00828567799180746,  lpips_loss=0.04263582453131676,  head_lpips_loss=0.005273307207971811,  lip_mse_loss=0.05257716402411461,  lip_lpips_loss=0.023266607895493507,  blink_reg_loss=0.13920198380947113,  triplane_reg_loss=4.335314750671387,  secc_reg_loss=0.0011855922639369965,

6K
testing lora...: 100%|██████████████████████████████████████████████████████████████████████████████████| 250/250 [02:56<00:00,  1.42it/s]
Iter 6001: total_loss=0.1162544161081314  v2v_occlusion_reg_l1_loss=0.5751290321350098,  v2v_occlusion_2_reg_l1_loss=0.3346855640411377,  v2v_occlusion_2_weights_entropy_loss=0.11792436242103577,  density_weight_l2_loss=0.06910260021686554,  density_weight_entropy_loss=0.30972737073898315,  mse_loss=0.02550116553902626,  head_mse_loss=0.007274949457496405,  lpips_loss=0.030336204916238785,  head_lpips_loss=0.002956756856292486,  lip_mse_loss=0.049502819776535034,  lip_lpips_loss=0.017222920432686806,  blink_reg_loss=0.1471508890390396,  triplane_reg_loss=5.438873291015625,  secc_reg_loss=0.00046311301412060857,
training lora...: 100%|█████████████████████████████████████████████████████████| 6001/6001 [10:41:21<00:00,  6.41s/it]
testing lora...: 100%|██████████████████████████████████████████████████████████████████████████████████| 250/250 [01:54<00:00,  2.19it/s]

Retrained for 10,000 steps on a 25s video, which took 13 hours
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_25s_clear.mp4 --max_updates 10000 --work_dir checkpoints_mimictalk/liushiqi_25s

training lora...: 100%|███████████████████████████████████████████████████████▉| 9980/10001 [13:01:04<02:53,  8.28s/it]Iter 9981: total_loss=0.09684689939022065  v2v_occlusion_reg_l1_loss=0.5950009822845459,  v2v_occlusion_2_reg_l1_loss=0.3200928568840027,  v2v_occlusion_2_weights_entropy_loss=0.10128924995660782,  density_weight_l2_loss=0.02812015265226364,  density_weight_entropy_loss=0.18296167254447937,  mse_loss=0.018398474901914597,  head_mse_loss=0.006617757957428694,  lpips_loss=0.014327870681881905,  head_lpips_loss=0.0026736254803836346,  lip_mse_loss=0.038001202046871185,  lip_lpips_loss=0.008279431611299515,  blink_reg_loss=0.1005098819732666,  triplane_reg_loss=7.517608642578125,  secc_reg_loss=0.0006871851510368288,
training lora...: 100%|███████████████████████████████████████████████████████▉| 9990/10001 [13:02:25<01:27,  7.96s/it]Iter 9991: total_loss=0.10520399659872055  v2v_occlusion_reg_l1_loss=0.5933628082275391,  v2v_occlusion_2_reg_l1_loss=0.31992822885513306,  v2v_occlusion_2_weights_entropy_loss=0.10183276236057281,  density_weight_l2_loss=0.03247998654842377,  density_weight_entropy_loss=0.18195389211177826,  mse_loss=0.022826239466667175,  head_mse_loss=0.005745640955865383,  lpips_loss=0.023273512721061707,  head_lpips_loss=0.0026981360279023647,  lip_mse_loss=0.05218761786818504,  lip_lpips_loss=0.016867591068148613,  blink_reg_loss=0.09263632446527481,  triplane_reg_loss=7.519522666931152,  secc_reg_loss=0.00048013354535214603,
testing lora...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 250/250 [02:48<00:00,  1.49it/s]
Iter 10001: total_loss=0.10925301685929298  v2v_occlusion_reg_l1_loss=0.5926898717880249,  v2v_occlusion_2_reg_l1_loss=0.32016319036483765,  v2v_occlusion_2_weights_entropy_loss=0.10265837609767914,  density_weight_l2_loss=0.028329573571681976,  density_weight_entropy_loss=0.18770214915275574,  mse_loss=0.02030119113624096,  head_mse_loss=0.007713902276009321,  lpips_loss=0.01553319115191698,  head_lpips_loss=0.003065012628212571,  lip_mse_loss=0.0584687814116478,  lip_lpips_loss=0.02107074484229088,  blink_reg_loss=0.0705994963645935,  triplane_reg_loss=7.52163553237915,  secc_reg_loss=0.0006910899537615478,
training lora...: 100%|███████████████████████████████████████████████████████| 10001/10001 [13:06:49<00:00,  4.72s/it]
testing lora...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 250/250 [02:43<00:00,  1.53it/s]

===================================
Retraining after removing background noise
Video liushiqi130s: 6K steps, about 12 hours
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_130s_clear.mp4 --max_updates 6000 --work_dir checkpoints_mimictalk/liushiqi_130s_clear

Train for another 4,000 steps:
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_130s_clear.mp4 --max_updates 4000  --torso_ckpt checkpoints_mimictalk/liushiqi_130s_clear  --work_dir   checkpoints_mimictalk/liushiqi_130s_10k_clear

Train for another 2,000 steps:
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_130s_clear.mp4 --max_updates 2000  --torso_ckpt checkpoints_mimictalk/liushiqi_130s_10k_clear  --work_dir   checkpoints_mimictalk/liushiqi_130s_12k_clear
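The three staged runs above follow one pattern: each stage resumes from the previous stage's `--work_dir` via `--torso_ckpt`. A dry-run sketch that only prints the commands (the stage list mirrors the commands above; `build_cmd` is a hypothetical helper, not part of MimicTalk):

```python
# Dry-run sketch of the staged fine-tuning above. Each stage resumes from the
# previous stage's work_dir via --torso_ckpt; replace print() with
# subprocess.run(cmd, check=True) to actually launch training.
VIDEO = "data/raw/examples/liushiqi_130s_clear.mp4"

STAGES = [
    # (max_updates, torso_ckpt, work_dir)
    (6000, None, "checkpoints_mimictalk/liushiqi_130s_clear"),
    (4000, "checkpoints_mimictalk/liushiqi_130s_clear",
           "checkpoints_mimictalk/liushiqi_130s_10k_clear"),
    (2000, "checkpoints_mimictalk/liushiqi_130s_10k_clear",
           "checkpoints_mimictalk/liushiqi_130s_12k_clear"),
]

def build_cmd(video, max_updates, work_dir, torso_ckpt=None):
    """Assemble the CLI invocation for one fine-tuning stage."""
    cmd = ["python", "inference/train_mimictalk_on_a_video.py",
           "--video_id", video,
           "--max_updates", str(max_updates),
           "--work_dir", work_dir]
    if torso_ckpt:
        cmd += ["--torso_ckpt", torso_ckpt]
    return cmd

for max_updates, torso_ckpt, work_dir in STAGES:
    print(" ".join(build_cmd(VIDEO, max_updates, work_dir, torso_ckpt)))
```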

Iter 2001: total_loss=0.12192660719156265  v2v_occlusion_reg_l1_loss=0.5856319665908813,  v2v_occlusion_2_reg_l1_loss=0.33812224864959717,  v2v_occlusion_2_weights_entropy_loss=0.1212739571928978,  density_weight_l2_loss=0.03360917046666145,  density_weight_entropy_loss=0.2701006233692169,  mse_loss=0.028573138639330864,  head_mse_loss=0.008359271101653576,  lpips_loss=0.03497806936502457,  head_lpips_loss=0.004728983622044325,  lip_mse_loss=0.04588288813829422,  lip_lpips_loss=0.014381850138306618,  blink_reg_loss=0.19454456865787506,  triplane_reg_loss=2.170748710632324,  secc_reg_loss=0.00041069742292165756,
training lora...: 100%|██████████████████████████████████████████████████████████| 2001/2001 [4:27:32<00:00, 8.02s/it]
testing lora...: 100%|███████████████████████████████████████████████████████████████| 250/250 [01:58<00:00,  2.11it/s]

=======================================
Generating videos for liushiqi
To generate video with the fine-tuned model, --torso_ckpt must point at its checkpoint, i.e. --torso_ckpt checkpoints_mimictalk/liushiqi_130s12k_clear

Generate the 29 s video (a):
python inference/mimictalk_infer.py --drv_aud data/raw/examples/liushiqi_jinshuangshi_29s_a.wav --drv_pose data/raw/examples/liushiqi_15s_clear.mp4 --drv_style data/raw/examples/liushiqi_15s_clear.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/liushiqi_130s12k_clear  --out_name infer_out/tmp/liushiqi130s12k_jinshuangshi_29s_a.mp4 --out_mode final

Generate the 29 s video (b):
python inference/mimictalk_infer.py --drv_aud data/raw/examples/liushiqi_jinshuangshi_29s_b.wav --drv_pose data/raw/examples/liushiqi_15s_clear.mp4 --drv_style data/raw/examples/liushiqi_15s_clear.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/liushiqi_130s12k_clear  --out_name infer_out/tmp/liushiqi130s12k_jinshuangshi_29s_b.mp4 --out_mode final

Test result: there is occasional image warping and flickering, and both lip-sync accuracy and image sharpness still need improvement.

=======================================
Train for another 10,000 steps:
python inference/train_mimictalk_on_a_video.py --video_id data/raw/examples/liushiqi_130s_clear.mp4 --max_updates 10000  --torso_ckpt checkpoints_mimictalk/liushiqi_130s12k_clear  --work_dir   checkpoints_mimictalk/liushiqi_130s22k_clear

Generate test videos:
python inference/mimictalk_infer.py --drv_aud data/raw/examples/liushiqi_jinshuangshi_29s_a.wav --drv_pose data/raw/examples/liushiqi_15s_clear.mp4 --drv_style data/raw/examples/liushiqi_15s_clear.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/liushiqi_130s20k_clear  --out_name infer_out/tmp/liushiqi130s20k_jinshuangshi_29s_a.mp4 --out_mode final

python inference/mimictalk_infer.py --drv_aud data/raw/examples/liushiqi_jinshuangshi_29s_b.wav --drv_pose data/raw/examples/liushiqi_15s_clear.mp4 --drv_style data/raw/examples/liushiqi_15s_clear.mp4 --bg_img data/raw/examples/bg.png --torso_ckpt checkpoints_mimictalk/liushiqi_130s20k_clear  --out_name infer_out/tmp/liushiqi130s20k_jinshuangshi_29s_b.mp4 --out_mode final
Had Heaven not given us Mo Di, all the ages would be endless night! Run business by Mohist principles, and let business support Mohism. Jinshuangshi Technology is always hiring R&D talent! WeChat: qishanxiaolu   Tel: 15876572365   Company: Shenzhen Jinshuangshi Technology Co., Ltd.
OP | 崎山小鹿 posted on 2024-11-30 10:51:05
G:\ProgramData\anaconda3\envs\MimicTalk39\lib\site-packages\torch\utils\cpp_extension.py:1965: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(

Explanation:
TORCH_CUDA_ARCH_LIST is an environment variable that tells PyTorch which CUDA architectures (i.e., your GPU's compute capability) to compile extensions for.
RTX 3070: compute capability 8.6

Set it in Python:
import os
os.environ['TORCH_CUDA_ARCH_LIST'] = '8.6'

Or add it in the Windows "Environment Variables" dialog:
Variable name: TORCH_CUDA_ARCH_LIST
Variable value: 8.6
then click OK.

Verify it in CMD:
echo %TORCH_CUDA_ARCH_LIST%

Or do it entirely from the command line:
set TORCH_CUDA_ARCH_LIST=8.6
echo %TORCH_CUDA_ARCH_LIST%
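Rather than hard-coding 8.6, the value can be derived from the detected GPU. A small sketch (`detect_arch_list` is a hypothetical helper; it assumes PyTorch may or may not be installed and falls back to 8.6, the RTX 3070 value, when CUDA is unavailable):

```python
import os

def detect_arch_list(default="8.6"):
    """Return the compute capability of GPU 0 as 'major.minor', or a fallback."""
    try:
        import torch
        if torch.cuda.is_available():
            major, minor = torch.cuda.get_device_capability(0)
            return f"{major}.{minor}"
    except ImportError:
        pass  # PyTorch not installed: use the hard-coded default
    return default

# Must be set before PyTorch compiles any CUDA extension.
os.environ["TORCH_CUDA_ARCH_LIST"] = detect_arch_list()
print(os.environ["TORCH_CUDA_ARCH_LIST"])
```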

OP | 崎山小鹿 posted on 2024-12-3 13:39:59
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
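As the error message suggests, making kernel launches synchronous gives an accurate stack trace for the real failing op; an allocator hint can also reduce fragmentation-related OOMs. A minimal sketch (both variables must be set before the first CUDA call; `max_split_size_mb` is an allocator option in recent PyTorch releases):

```python
import os

# Synchronous launches: the Python stack trace then points at the kernel
# that actually failed, instead of a later unrelated API call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Allocator hint: cap the size of cached blocks the allocator may split,
# which can mitigate out-of-memory errors caused by fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["CUDA_LAUNCH_BLOCKING"], os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

If the error persists, reducing batch size or rendering resolution in the training config is the usual next step.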
