  • Is there a way to download content from the community workshop using Steam's download_depot command?

  • Are you on Windows or Linux? If you managed to find the DLC files, you can most likely (not 100% sure it works with delisted content) use CreamAPI to make Steam think you own them. On Windows I've used CreamInstaller, a handy GUI that does it for you. I seem to recall doing it on my father's computer running Ubuntu, but I don't remember exactly how.

  • Scrubbles's comment outlined what would likely be the best workflow. Having done something similar myself, here are my recommendations:

    In my opinion, the best way to do STT with Whisper is Whisper Writer; I use it to write most of my messages and texts.

    For the LLM part, I recommend Koboldcpp. It's built on top of llama.cpp and has a simple GUI that saves you from hunting down the name of each poorly documented llama.cpp launch flag (the CLI is still available if you prefer). Plus, it offers more sampling options.

    If you want a chat frontend for the text generated by the LLM, SillyTavern is a great choice. Despite its poor naming and branding, it's the most feature-rich and extensible frontend. They even have an official extension to integrate TTS.

    For the TTS backend, I recommend Alltalk_tts. It provides multiple model options (xttsv2, coqui, T5, ...) and has an okay UI if you need it. It also offers a unified API to use with the different models. If you pick SillyTavern, it can be accessed by their TTS extension. For the models, T5 will give you the best quality but is more resource-hungry. Xtts and coqui will give you decent results and are easier to run.

    There are also STS models emerging, like GLM4-V, but I still haven't tried them, so I can't judge the quality.
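
    Since Koboldcpp mentioned above wraps the llama.cpp flags, a CLI launch looks something like this. This is a hypothetical sketch: the model path and layer count are placeholders, and flag names can vary between versions, so check `python koboldcpp.py --help` on your install.

```shell
# Hypothetical invocation - verify flag names with `python koboldcpp.py --help`.
# --model:       placeholder path to a GGUF model file
# --contextsize: context window size in tokens
# --usecublas:   enable CUDA acceleration on NVIDIA GPUs
# --gpulayers:   how many layers to offload to the GPU
python koboldcpp.py --model ./mymodel.gguf --contextsize 4096 \
    --usecublas --gpulayers 99 --port 5001
```

    Once it's up, SillyTavern can point at the local Koboldcpp endpoint as its text-generation backend.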

  • Zen integrates every upstream change a few hours after release; it's built as a set of patches on top of Firefox precisely to make that easy.

  • They released a search engine where the model reads the first link before trying to answer your request.

  • llama.cpp works on Windows too (or any OS, for that matter), though Linux will give you better performance.

  • Mistral models don't have much of a filter, don't worry lmao

  • There is no chance they are the ones training it. It costs hundreds of millions to train a decent model. Seems like they will be using Mistral, who have scraped pretty much 100% of the web to use as training data.

  • Permanently Deleted

  • Buying a second-hand 3090 / 7900 XTX will be cheaper for better performance if you are not building the rest of the machine.

  • Permanently Deleted

  • You are limited by memory bandwidth, not compute, with LLMs, so an accelerator won't change the inference tok/s.
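
    A back-of-the-envelope sketch of that bandwidth ceiling. The 936 GB/s figure is the RTX 3090's spec-sheet memory bandwidth; the 4 GB model size is a made-up example:

```python
# Back-of-the-envelope: each generated token requires streaming all model
# weights through memory once, so memory bandwidth (not compute) sets the
# theoretical decode-speed ceiling.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode tok/s: bytes moved per second / bytes read per token."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers: ~936 GB/s (RTX 3090) streaming a 4 GB quantized model.
print(round(max_tokens_per_second(936, 4)))  # prints 234
```

    Real throughput lands well below this ceiling, but the ratio explains why a compute accelerator with the same memory bandwidth doesn't speed up single-stream generation.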

  • I use a similar feature on Discord quite extensively (custom emotes/stickers), and I don't feel they are just a novelty. They let us have inside jokes and custom reactions to specific events, and I really miss them when trying out open source alternatives.

  • To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering, not bad model training. They were forcing the incorporation of Google search results even though the base model would most likely have gotten it right.

  • The training doesn't use CSAM; there's a 0% chance big tech would use that in their datasets. The models are somewhat able to combine concepts like "red" and "car", even if they have never seen a red car before.

  • The models used are not trained on CP. The model weights are distributed freely, and anybody can train a LoRA on their own computer. It's already too late to ban open-weight models.

  • They know the tech is not good enough; they just don't care and want to maximise profit.

  • WhatsApp is Europe's iMessage.

  • You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.

  • If you have good enough hardware, this is a rabbit hole you could explore: https://github.com/oobabooga/text-generation-webui/

  • Around 48 GB of VRAM if you want to run it in 4-bit.
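
    As a rough sanity check on that figure, assuming a ~70B-parameter model (a guess on my part; the comment doesn't name the model) and a ~20% overhead for KV cache and buffers:

```python
# Rough VRAM estimate for a quantized model: bits/8 bytes per parameter
# for the weights, plus a fudge factor for KV cache, activations, and
# runtime buffers. The 20% overhead_frac default is an assumption.

def vram_estimate_gb(n_params_billion: float, bits: int,
                     overhead_frac: float = 0.2) -> float:
    weights_gb = n_params_billion * bits / 8  # 1e9 params * (bits/8) bytes ~= GB
    return weights_gb * (1 + overhead_frac)

print(round(vram_estimate_gb(70, 4)))  # prints 42 -> a 48 GB budget fits
```

    The weights alone at 4-bit come to 35 GB, so ~48 GB leaves room for a usable context window.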