Open vs closed weights are fueling a sustainability debate in the LocalLLaMA crowd. A thread argues open weights aren’t sustainable long-term unless someone funds the training, hardware, and engineers—pushing toward paid-weight bundles that ship with a runtime engine [1].
Open vs Closed Weights — Proponents flag a chicken-and-egg problem: consumer hardware isn’t ready for mass on-device LLMs, and open models rely on ongoing donations or a business model that covers updates [1]. The idea: a few million users paying for upgrades could make on-device LLMs viable, but a clear path remains elusive [1].
Self-hosting & Copilot-like Tooling — For local models and agent-style workflows, tool access is the sticking point. Running qwen3-4b-instruct-2507 in LM Studio with GitHub Copilot, users report it often fails to browse the web or call MCP servers; many argue you need larger options such as GPT-OSS-20B or Qwen3-30B for reliable tool calls, even with tool-use fine-tuning [2]. Some suggest small models simply can’t support full agent workflows yet [2].
GGUF Accessibility — Browsing your own GGUF files isn’t always seamless. Both GPT4All and LM Studio store GGUF inside their app folders, but you can point LM Studio to your own model directory or use symlinks so your models stay in a personal library [3].
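One way to keep models in a personal library, as the thread suggests, is to symlink it into the app’s models folder. A minimal sketch, assuming a personal library at `~/models-library` and LM Studio’s models directory at `~/.lmstudio/models` (both paths are assumptions; the LM Studio location varies by version and OS, and it expects models nested as publisher/model/file.gguf):

```shell
# Personal GGUF library (assumed path) and LM Studio's models dir
# (assumed default; check your install's settings for the real path).
LIBRARY="$HOME/models-library"
LMS_MODELS="$HOME/.lmstudio/models"

# Illustrative publisher/model layout inside the personal library.
mkdir -p "$LIBRARY/my-publisher/my-model" "$LMS_MODELS"

# Symlink the publisher directory so the app sees every model under it
# while the files themselves stay in the personal library.
ln -sfn "$LIBRARY/my-publisher" "$LMS_MODELS/my-publisher"

ls -l "$LMS_MODELS"
```

Alternatively, LM Studio’s settings let you change the models directory outright, which avoids symlinks entirely.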
Ecosystem Signals & Tooling Fragility — Community chatter highlights uneven momentum: LM Studio updates lag as llama.cpp advances; GLM 4.6 support is slow, and some call the project “dead,” even as the llama-server web UI helps bridge gaps while engine updates stall [4].
Closing thought: sustained self-hosting hinges on funding for open weights, mature toolchains, and stable ecosystem signals—watch how LM Studio and GPT4All evolve in 2025.
References
[1] Why Open weights vs closed weights, why not paid weights
Discusses sustainability, pricing, and ownership of open vs closed weights; hardware needs, consumer viability, and the subscription-vs-ownership debate.
[2] Local model to use with github copilot which can access web and invoke MCP server
Discusses using local LLMs with Copilot; compares Gemini 2.5 Pro and Qwen variants; questions tool access and web fetch capabilities.
[3] How can I browse my own GGUF file in GPT4ALL and LMStudio
Discusses accessing locally stored GGUF models in GPT4All and LM Studio; compares Kobold and other apps; notes viability.
[4] LM Studio dead?
Post questions LM Studio’s status; GLM-4.6 support pending; llama.cpp keeps updating; mixed signals about the OpenAI collaboration; users suggest alternatives.