```shell
# Copy a model
ollama cp <source_model> <target_model>

# Remove a model
ollama rm <model_name>

# Show a model's Modelfile
ollama show --modelfile <model_name>
```
## 4. Ollama model storage directory

See [the section above](https://ollama.readthedocs.io/faq/#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
## 5. Using the Windows Ollama installation from WSL
```shell
# Edit the environment variables
vim /etc/profile

# Append at the end of the file
export PATH="$PATH:/mnt/c/Program Files/Ollama"
alias ollama='ollama.exe'

# Reload the profile so the changes take effect
source /etc/profile
```
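A quick way to confirm that WSL can actually see the Windows installation is to check the mounted path. A minimal Python sketch (the path below is the default Windows install location assumed by the `PATH` line above; on machines without that mount it simply reports "not found"):

```python
from pathlib import Path

# Default Windows install location, as seen from inside WSL via the /mnt/c mount.
OLLAMA_EXE = Path("/mnt/c/Program Files/Ollama/ollama.exe")

def windows_ollama_available():
    """True when the Windows ollama.exe is reachable from WSL."""
    return OLLAMA_EXE.exists()

print("found" if windows_ollama_available() else "not found")
```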
# nvidia-smi

> nvidia-smi is NVIDIA's System Management Interface (SMI). It collects information at various levels, such as GPU memory usage, and can also enable and disable GPU configuration options (such as ECC memory).

## nvidia-smi
```shell
nvidia-smi

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.09              Driver Version: 571.96         CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX 4000 Ada Gene...    On  |   00000000:01:00.0 Off |                  Off |
| N/A   50C    P8              7W /  85W  |   4970MiB /  12282MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A     16221      C   /python3.12                                     N/A  |
+-----------------------------------------------------------------------------------------+
```
Meaning of the main fields:

- GPU: index of the GPU in this machine
- Name: GPU model
- Persistence-M: persistence mode; when On, the NVIDIA driver stays loaded even with no active clients, reducing startup latency
- Fan: fan speed
- Temp: temperature in degrees Celsius
- Perf: performance state, from P0 (maximum performance) down to P12 (minimum performance)
- Pwr:Usage/Cap: current power draw / power cap
- Bus-Id: PCI bus ID of the GPU
- Disp.A: Display Active, whether the GPU's display output is initialized
- Memory-Usage: GPU memory usage
- Volatile GPU-Util: instantaneous GPU utilization
- Uncorr. ECC: uncorrectable ECC error count
- Compute M.: compute mode
- Processes: per-process GPU memory usage on each GPU
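The same fields can also be pulled in machine-readable form via `nvidia-smi --query-gpu=... --format=csv,noheader,nounits`. A minimal parsing sketch; the sample line is hardcoded from the output above, so it runs without a GPU:

```python
# Parse one line of:
#   nvidia-smi --query-gpu=name,memory.used,memory.total,utilization.gpu \
#              --format=csv,noheader,nounits
def parse_query_gpu_line(line):
    """Split a CSV line into a dict of name, memory (MiB) and utilization (%)."""
    name, mem_used, mem_total, util = [f.strip() for f in line.split(",")]
    return {
        "name": name,
        "memory_used_mib": int(mem_used),
        "memory_total_mib": int(mem_total),
        "gpu_util_pct": int(util),
    }

# Sample values taken from the nvidia-smi output above.
sample = "NVIDIA RTX 4000 Ada Generation Laptop GPU, 4970, 12282, 0"
info = parse_query_gpu_line(sample)
print(info["memory_used_mib"], "/", info["memory_total_mib"], "MiB")
```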
### Continuous monitoring

```shell
# watch runs a command repeatedly and refreshes the output;
# e.g., refresh the GPU status every second:
watch -n 1 nvidia-smi

# nvidia-smi also has a built-in loop mode:
nvidia-smi -l 1
```
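The same polling can be done from Python with `subprocess`. A sketch; the `shutil.which` guard makes it a no-op on machines without the NVIDIA driver, and `iterations` is a made-up parameter for demonstration:

```python
import shutil
import subprocess
import time

def poll_gpu(interval=1.0, iterations=3):
    """Capture `nvidia-smi` output `iterations` times, `interval` seconds apart.

    Returns a list of output strings; empty if nvidia-smi is not installed.
    """
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver on this machine
    snapshots = []
    for i in range(iterations):
        out = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        snapshots.append(out.stdout)
        if i < iterations - 1:
            time.sleep(interval)
    return snapshots

print(len(poll_gpu(interval=0.5, iterations=1)), "snapshot(s) captured")
```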
## nvidia-smi -L

```shell
# List all available NVIDIA devices
nvidia-smi -L
GPU 0: NVIDIA RTX 4000 Ada Generation Laptop GPU (UUID: GPU-9856f99a-c32c-fe63-b2ad-7bdee2b12291)
```
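Each line of `nvidia-smi -L` has a fixed `GPU <index>: <name> (UUID: <uuid>)` shape, which makes it easy to parse. A sketch using the sample line above:

```python
import re

# Matches lines like: GPU 0: NVIDIA RTX 4000 ... (UUID: GPU-9856f99a-...)
GPU_LINE = re.compile(r"GPU (\d+): (.+) \(UUID: (GPU-[0-9a-f-]+)\)")

def parse_gpu_list(text):
    """Return a list of (index, name, uuid) tuples from `nvidia-smi -L` output."""
    return [(int(m.group(1)), m.group(2), m.group(3))
            for m in GPU_LINE.finditer(text)]

# Sample line taken from the output above.
sample = ("GPU 0: NVIDIA RTX 4000 Ada Generation Laptop GPU "
          "(UUID: GPU-9856f99a-c32c-fe63-b2ad-7bdee2b12291)")
print(parse_gpu_list(sample))
```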
# ModelScope

## Model download

### Installation

```shell
pip install modelscope
```
### Command-line download

```shell
# Download a full model repository
modelscope download --model deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

# Download a single file to a given local folder
# (e.g., README.md into a "dir" directory under the current path)
modelscope download --model deepseek-ai/DeepSeek-R1-Distill-Qwen-7B README.md --local_dir ./dir
```
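ModelScope also exposes a Python API. A minimal sketch, assuming `snapshot_download` from the `modelscope` package (the `local_dir` keyword mirroring the CLI's `--local_dir` is an assumption about your installed version; the import is deferred so this file loads even where modelscope is not installed, and the actual download is not executed here):

```python
def download_model(model_id, local_dir=None):
    """Download a model repository via the ModelScope Python API (sketch).

    local_dir mirrors the CLI's --local_dir option (assumed API; check
    your installed modelscope version).
    """
    # Deferred import: requires `pip install modelscope`.
    from modelscope import snapshot_download
    return snapshot_download(model_id, local_dir=local_dir)

# Example (not run here; requires network access and disk space):
# path = download_model("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", local_dir="./dir")
```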
# Anaconda

>
# Jupyter Notebook

## Installation

```shell
pip install jupyter
```
## Running

```shell
jupyter notebook

# When run as root this warns: "Running as root is not recommended.
# Use --allow-root to bypass." Add --allow-root:
jupyter notebook --allow-root
```
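The root warning comes from Jupyter detecting an effective UID of 0; the check is essentially the following (a sketch, POSIX-only):

```python
import os

def running_as_root():
    """True when the current process has root privileges (POSIX)."""
    return hasattr(os, "geteuid") and os.geteuid() == 0

# Jupyter refuses to start in this case unless --allow-root is passed.
print("root" if running_as_root() else "non-root")
```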
# UnSloth