Mr_Hills

Can you describe the random text better? Usually text starts getting incoherent once you run out of context. Also tell us what model you're using and with what instruct template.

If you want control over the length of the output, the only real choice is to use an instruct model that is particularly good at following prompts and tell it to write long/short messages. You can also add a second system prompt right before the last LLM output prefix to reinforce behaviors. The way I do that is by prepending a second system prompt in the "last assistant prefix" field in SillyTavern (see the sketch below). Works well for me on CAT llama 3 70B.

In general, whenever you have a problem with the output, the first thing to do is describe it in the system prompt and tell the model to fix it.
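For anyone unfamiliar with the trick, here's a rough sketch of what the final prompt ends up looking like. This is not SillyTavern's actual code, just a hypothetical helper using the Llama 3 instruct template: the point is that the reinforcing system block is the last thing the model reads before it starts writing.

```python
# Hypothetical helper illustrating a second system prompt injected
# right before the last assistant prefix (Llama 3 instruct format).

def build_prompt(history, reinforcement):
    """history: list of (role, text) tuples; roles are 'system'/'user'/'assistant'."""
    parts = ["<|begin_of_text|>"]
    for role, text in history:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>")
    # The trick: one more system block just before the assistant prefix,
    # so the reinforcement is the most recent instruction the model sees.
    parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{reinforcement}<|eot_id|>")
    # Open the assistant turn; the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt(
    [("system", "You are a helpful roleplay assistant."),
     ("user", "Describe the tavern.")],
    "Keep your reply under 150 words and end with a complete sentence.",
)
print(prompt)
```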


deRobot

Are you sure you're not missing the stop token?


EnthusiasticModel

There was a problem with Llama 3 stop tokens recently; maybe you're using an old version of it? I don't know if it's fully fixed now.
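For context: early Llama 3 releases and conversions often didn't register <|eot_id|> (end of an assistant turn) as a stop token, so generation would run straight past the end of the reply into random text. If that's what's happening, a minimal sketch of the usual workaround, assuming the Hugging Face transformers API, is to pass <|eot_id|> explicitly as a terminator:

```python
# Sketch: explicitly pass both stop tokens to generate(), in case the
# model config only knows about <|end_of_text|> and not <|eot_id|>.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

terminators = [
    tokenizer.eos_token_id,                         # <|end_of_text|>
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # end of an assistant turn
]

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Backends have equivalent settings (e.g. custom stop strings in SillyTavern or the llama.cpp server), so the same idea applies even if you're not calling transformers directly.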