Filtering LLM responses
- maxwellapex
- Sep 28
- 1 min read

Usually, when we ask LLMs something, they respond in a tone we like. For example, ask an easy question and you get a short answer; ask for a touching story and you get a good one. However, this “mind reading” feature can be dangerous. Transformer-based LLMs were designed to predict the next word with the highest probability, which means the output only has to sound correct on the scale of a few words at a time. Therefore, it’s important to be cautious about their responses, especially on topics we don’t understand ourselves. I suspect this is already common sense for some users nowadays, and it makes the experience better.
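To see why “highest probability over a few words” can produce fluent but unreliable text, here is a minimal toy sketch of greedy next-token selection. The probability table is entirely invented for illustration and is not from any real model; a real LLM scores a huge vocabulary with a neural network, but the selection step is conceptually similar.

```python
# Toy greedy next-token prediction. The probabilities below are made up
# for illustration only; a real transformer computes them with a network.
probs = {
    "the sky is": {"blue": 0.62, "clear": 0.21, "falling": 0.02},
    "the capital of France is": {"Paris": 0.90, "Lyon": 0.04, "London": 0.02},
}

def next_token(context: str) -> str:
    """Return the single most probable next token for a known context."""
    candidates = probs[context]
    return max(candidates, key=candidates.get)

print(next_token("the sky is"))  # picks "blue", the highest-probability token
```

The model never checks whether the continuation is true; it only picks what is most likely to follow, so locally plausible text can still be globally wrong.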


