1). Alwrity does web research to provide factual context before generating content.
2). This seems to work, but repeating the same research with different context does not yield good results.
There are valid reasons for this: short, transient memory and context window limitations.
3). All LLMs are susceptible to losing context as the volume of data grows, so they need to be supplemented with more memory. The problem is that the context window, and what an LLM can remember within a prompt session, is limited.
4). It would be worthwhile to check whether embeddings with a local vector DB improve utilization of the context we provide to Alwrity.
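To make point 4 concrete, here is a minimal pure-Python sketch of the idea: persist research snippets once, then retrieve only the most relevant ones for each prompt instead of resending everything. The bag-of-words "embedding", the `LocalVectorStore` class, and the example snippets are all hypothetical stand-ins; a real setup would use an embedding model (e.g. sentence-transformers) and a local vector DB such as Chroma or FAISS.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real setup would call an
    embedding model instead of counting tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LocalVectorStore:
    """In-memory stand-in for a local vector DB (Chroma, FAISS, ...)."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        # Store the snippet alongside its vector so research survives
        # beyond a single prompt session.
        self.docs.append((embed(text), text))

    def query(self, question, k=2):
        # Return the k snippets most similar to the question, so only
        # relevant context is fed into the limited context window.
        ranked = sorted(self.docs,
                        key=lambda d: cosine(embed(question), d[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = LocalVectorStore()
store.add("Alwrity performs web research before generating content.")
store.add("Context windows limit how much an LLM can remember per session.")
store.add("Vector databases retrieve semantically similar text chunks.")
print(store.query("How can we work around context window limits?", k=1))
```

The point of the sketch is the access pattern, not the math: research results are written once to a persistent local store, and each generation step pulls back only the top-k relevant chunks, which sidesteps the transient-memory problem described above.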