Context, then answer… instead of having everything ride on the first character. E.g., if we make it pick “Y” or “N” first in response to a yes-or-no question, it usually picks “Y” even if it later talks itself out of it.
- BroBot9000@lemmy.world English · 2 months
Or even better, don’t use the racist pile of linear algebra that regurgitates misinformation and propaganda.
- TheLeadenSea@sh.itjust.works English · 2 months
That’s the basis of reasoning models. Make LLMs ‘think’ through the problem for several hundred tokens before giving a final answer.
- 2 months
Yeah, I’ve been having it code short useful scripts (like converting the PDF of my work schedule to an importable ICS, or making a custom desktop timer for a work task that repeats every fifteen minutes), and I find it works better if you make it sum up its goals at the beginning. Then, if I need to start fresh in a new chat (faster processing, less perseveration on erroneous earlier versions), I have it sum up the goals at the end to paste into the new one.
- Tomtits@lemmy.dbzer0.com English · 2 months
Remember when satnavs first came out and you could download different voices for them?
If only Waze weren’t owned by Google… I saw they had that function. Wonder if you can do that with OSMAnd…