
What do you guys use LLMs for that's proven to work well?

When I search this topic online, I always find either wrong information or advertising hype. So what is something that LLMs can actually do very well, as in being genuinely useful and not just outputting nonsensical word salad that sounds coherent?

Results

So basically from what I've read, most people use it for natural language processing problems.

Example: turn this infodump into a bullet point list, or turn this bullet point list into a coherent text, help me with rephrasing this text, word association, etc.

Other people use it for simple questions that it can answer with a database of verified sources.

Also, a few people use it as a struggle duck, basically to help alleviate writer's block.

Thanks guys.

61 comments
  • Philosophy.

    Ask it to act as Socrates, pick a topic and it will help you with introspection.

    This is good for examining your biases.

    e.g. I want to examine the role of government employees.
    e.g. when is it ok to give up on an idea?

  • A fringe case where I've found ChatGPT very useful is learning about information that is plentiful but buried in dead threads on various old-school web forums, and thus very hard to Google, like other people's experiences with homebrewing. I then ask it for sources, and most often it accurately reflects the claims of those other homebrewers, who themselves can be more or less correct.

  • Getting an initial impression of some new field I want to learn about. I ask the model for a short summary and links to more in-depth information. This would be more difficult to do on my own when I don't even know where to start.

  • When I'm in a hurry I use them for:

    • longer, more complex Excel formulas;
    • PowerShell scripts to manipulate large CSV files;
    • teaching me Apps Script (it was about 90 percent accurate).

  • I think using LLMs with RAG (retrieval-augmented generation, i.e., giving the model tools and sources to work from) is more useful and reliable than relying only on whatever the model can reproduce from its training data.

    For example: using a search engine to find results for a query, downloading the first 10 results as text, and then having the LLM answer follow-up queries about those sources; or, as another example, uploading a document and having the LLM answer queries about its contents.

    This is also advantageous because much smaller and quicker models can be used while still producing accurate results (often with citations to the source).

    This can even be self-hosted with Open WebUI/ollama; a rough sketch of the flow is below.
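
    To make the flow concrete, here is a minimal sketch in Python, assuming a local Ollama server and the ollama Python client; the search() helper and the model name are hypothetical placeholders for whatever search backend and model you actually use:

      # Minimal RAG-style sketch: fetch sources, then ask the model about them.
      # Assumes the `ollama` Python client with a local Ollama server running;
      # search() is a hypothetical stand-in for your actual search API/scraper.
      import ollama

      def search(query, n=10):
          # Hypothetical helper: return a list of {"url": ..., "text": ...} dicts
          # for the top-n results, already fetched and converted to plain text.
          raise NotImplementedError("plug in your search engine / scraper here")

      def answer_with_sources(question, model="llama3.1"):
          sources = search(question)
          context = "\n\n".join(
              f"[{i + 1}] {s['url']}\n{s['text'][:2000]}" for i, s in enumerate(sources)
          )
          prompt = (
              "Answer the question using only the numbered sources below, "
              "and cite them like [1], [2].\n\n"
              f"Sources:\n{context}\n\nQuestion: {question}"
          )
          reply = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
          return reply["message"]["content"]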

  • They help me make better searches. I use ChatGPT to get a good idea of what to search for based on my inquiry: it tells me what I'm actually looking for, and then I just use a search engine based on that.

    It has also taught me some Python and Apps Script, and I'm currently learning and testing its capabilities at teaching JavaScript. And yes, I test out everything it gives me. It works best when it outputs small blocks of code that I splice together. Hoping for the best and then, three years later, finally creating an app, lol, because that part is on my end. Still working on an organization app. It's about 80 percent accurate at following complete directions in this case.

  • Very basic and non-creative source code operations. E.g., "convert this representation of data to that representation of data based on the template."

  • I find they're pretty good at some coding tasks. For example, it's very easy to make a reasonable UI given a sample JSON payload you might get from an endpoint. They're good at things like crafting fairly complex SQL queries or writing shell scripts. As long as the task is reasonably focused, they tend to get it right a lot of the time. I also find them useful for discovering language features when working with languages I'm less familiar with.

    LLMs are also great at translation and at transcribing images. They're useful for summaries and for finding information within documents, including codebases. I've found they make it a lot easier to search through papers where you want to find relationships between concepts or definitions for things. They're also good at subtitle generation as well as text-to-speech tasks.

    Another task they're great at is proofreading and suggesting phrasing. They can also make a good sounding board: if there's a topic you understand and you just want to bounce ideas around, it's great to be able to talk it through with an LLM, and often the output stimulates a new idea in my head. I also use an LLM as a tutor when I practice Chinese; they're great for free-form conversational practice when learning a new language. These are just a few areas where I use LLMs on a nearly daily basis now.

    • I use LLMs to generate unit tests, among other things that are pretty much already described here. It helps me discover edge cases I hadn't considered before, regardless of whether the generated unit tests themselves pass.
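
      As a hypothetical illustration of the kind of tests an LLM might hand back, here is a small pytest sketch for an imaginary parse_price() helper; the module, function, and cases are made up for this example, not taken from a real codebase:

        # Hypothetical LLM-generated tests for an imaginary parse_price() helper.
        # The point isn't whether these pass, but that they surface edge cases
        # (currency symbols, whitespace, empty input, negatives) you might not
        # have listed yourself.
        import pytest
        from pricing import parse_price  # hypothetical module under test

        def test_plain_number():
            assert parse_price("19.99") == 19.99

        def test_currency_symbol_and_thousands_separator():
            assert parse_price("$1,299.00") == 1299.00

        def test_whitespace_is_ignored():
            assert parse_price("  42 ") == 42.0

        def test_empty_string_raises():
            with pytest.raises(ValueError):
                parse_price("")

        def test_negative_price_raises():
            with pytest.raises(ValueError):
                parse_price("-5.00")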

  • They're really good at statistics, but you need to know enough statistics to know what to ask. Just today I needed to write a PyStan script for doing some MCMC, and it helped me write it, structure the data, and understand the results of the experiment. It then confirmed my suspicion that the chosen model was not a good fit for my data, and tomorrow I'm trying another probability distribution.
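
    For a rough idea of what such a script looks like, here is a minimal sketch using the PyStan 3 API (stan.build / sample) with a deliberately simple normal model; the model, priors, and fake data are placeholders rather than the actual analysis described above:

      # Minimal PyStan 3 sketch: fit a simple normal model with MCMC.
      # The model, priors, and data below are placeholders for illustration.
      import numpy as np
      import stan

      program_code = """
      data {
        int<lower=0> N;
        vector[N] y;
      }
      parameters {
        real mu;
        real<lower=0> sigma;
      }
      model {
        mu ~ normal(0, 10);
        sigma ~ cauchy(0, 5);
        y ~ normal(mu, sigma);
      }
      """

      y = np.random.normal(loc=2.0, scale=1.5, size=200)  # fake observations
      data = {"N": len(y), "y": y.tolist()}

      posterior = stan.build(program_code, data=data, random_seed=1)
      fit = posterior.sample(num_chains=4, num_samples=1000)

      # Inspect the posterior draws; comparing them against the data is how you
      # notice that the chosen model or distribution isn't a good fit.
      df = fit.to_frame()
      print(df[["mu", "sigma"]].describe())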
