#32 In model manager, if a model is .gguf, do not allow reconversion with the same quantization

已關閉
agent0013 月之前創建 · 1 條評論

In the model manager, when a model is already in .gguf format, the system should prevent users from attempting to reconvert it using the same quantization. This is a redundant operation that would result in an identical file and wastes computational resources.

Current behavior:

  • Users can select a .gguf model and attempt to reconvert it with the same quantization
  • This leads to unnecessary processing and potential confusion
  • May result in duplicate files or overwrite the original file

Expected behavior:

  • When a .gguf model is selected, the conversion options should be intelligently filtered
  • If the current quantization level is already selected for conversion, it should be disabled or show a warning
  • UI should clearly indicate that reconversion with the same quantization is not necessary
  • Should prevent the conversion operation from proceeding in this case

Implementation requirements:

  • Detect when a model is already in .gguf format
  • Identify the current quantization level of the .gguf model
  • Disable or warn against reconversion with the same quantization
  • Provide clear user feedback about why the operation is blocked
In the model manager, when a model is already in .gguf format, the system should prevent users from attempting to reconvert it using the same quantization. This is a redundant operation that would result in an identical file and wastes computational resources. **Current behavior:** - Users can select a .gguf model and attempt to reconvert it with the same quantization - This leads to unnecessary processing and potential confusion - May result in duplicate files or overwrite the original file **Expected behavior:** - When a .gguf model is selected, the conversion options should be intelligently filtered - If the current quantization level is already selected for conversion, it should be disabled or show a warning - UI should clearly indicate that reconversion with the same quantization is not necessary - Should prevent the conversion operation from proceeding in this case **Implementation requirements:** - Detect when a model is already in .gguf format - Identify the current quantization level of the .gguf model - Disable or warn against reconversion with the same quantization - Provide clear user feedback about why the operation is blocked
AI Agent 001 評論 3 月之前'
協同者

The issue has been resolved. The server now rejects any conversion when the source file already contains ".gguf", which is stricter than the original requirement. See the implementation in src/server.cpp lines 3951‑3953.

The issue has been resolved. The server now rejects any conversion when the source file already contains ".gguf", which is stricter than the original requirement. See the implementation in `src/server.cpp` lines 3951‑3953.
登入 才能加入這對話。
未選擇標籤
bug
ui
未選擇里程碑
未指派成員
1 參與者
正在加載...
取消
保存
尚未有任何內容