When I was using the memory manager to automatically import a memory based on my chat, I noticed that it only summarizes my chats up to a certain point rather than the whole history. The current method of collecting messages is to scan the page for the HTML elements that contain them. By default, Character AI doesn't load the entire chat; it only loads more messages once you scroll back far enough. As a result, the feature only picks up the newest messages instead of the entire conversation.
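For illustration, the current approach is basically something like the sketch below (the ".msg" selector is just a made-up placeholder, not the real class name), which can only ever see whatever the page has already rendered:

// Rough illustration of the page-scanning approach; ".msg" is a hypothetical selector.
// Only messages already rendered in the DOM are found, so anything not yet
// lazy-loaded by Character AI is missed.
function scrapeLoadedMessages() {
  return [...document.querySelectorAll(".msg")].map((el) => el.innerText);
}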
Solution?
Instead of scanning the page, we could use HTTP requests to fetch every message, including the ones that are not currently loaded on the page.
How?
I've been looking at the Network tab in Chrome DevTools and managed to find a few API endpoints used by the page. I've also written a JS function that fetches the entire message history.
async function getChatHistory() {
  // Get the char ID from the URL params
  let charID = new URLSearchParams(window.location.search).get("char");
  if (!charID) throw new Error("CharID not found");

  // Get the user token to use as the Authorization header
  let token = JSON.parse(localStorage.getItem("char_token")).value;
  let opt = {
    headers: {
      Authorization: `Token ${token}`,
    },
  };

  // Send a request to get chat information such as chat_id
  let chatInfo = await (await fetch(`https://neo.character.ai/chats/recent/${charID}`, opt)).json();
  if (!chatInfo) throw new Error("No chat was found");
  let chatID = chatInfo.chats[0].chat_id;

  // Send a request to get the newest turns (a chunk of messages) and the next token
  let recentHistory = await (await fetch(`https://neo.character.ai/turns/${chatID}`, opt)).json();

  // Every chunk of turns will be stored here
  let chatsHistory = [recentHistory];

  // Get the next token
  let nextToken = chatsHistory[chatsHistory.length - 1].meta.next_token;

  while (nextToken) {
    // Keep requesting the next turns until the next token is null
    let history = await (await fetch(`https://neo.character.ai/turns/${chatID}?next_token=${nextToken}`, opt)).json();
    chatsHistory.push(history);
    nextToken = history.meta.next_token;
  }

  return chatsHistory;
}

// data is the return value of the function above
function convertHistory(data) {
  // Reverse the array
  data.reverse();

  // Flatten the data into a single array of turns (dropping next_token)
  let turns = data.reduce((pre, now) => [...pre, ...now.turns], []);
  let chats = [];

  // Convert each turn to a string of the author name and the message
  turns.forEach((e) => {
    chats.push(`<< ${e.author.name} >>\n${e.candidates[0].raw_content}`);
  });

  return chats;
}
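A minimal usage sketch (not part of the snippet above): run it from the DevTools console on an open chat page, since both functions rely on the page URL and the token in localStorage.

(async () => {
  // Fetch every page of turns, then flatten them into readable lines
  const history = await getChatHistory();
  const chats = convertHistory(history);
  console.log(chats.join("\n\n"));
})();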
Hope this helps.
Also, the full history might be too long to summarize in one go, so you may need to summarize it in chunks.
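A rough sketch of what chunking could look like, assuming a simple character budget per chunk (the 6000-character limit is an arbitrary placeholder, not a documented limit):

// Split the formatted chat lines into chunks that stay under a rough
// character budget, so each chunk can be summarized separately.
function chunkChats(chats, maxChars = 6000) {
  const chunks = [];
  let current = "";
  for (const line of chats) {
    if (current && current.length + line.length > maxChars) {
      chunks.push(current);
      current = "";
    }
    current += (current ? "\n\n" : "") + line;
  }
  if (current) chunks.push(current);
  return chunks;
}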
Thank you for your suggestion! I'm going to take a look at your code; afterwards, I'll include something like this in the extension and connect it to the Cohere API. Once again, thanks!
As for summarizing in chunks, that might work, but I'm not sure whether Cohere can carry context across calls to the summarization model. I'll investigate for a bit; maybe it actually does.
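If it doesn't, a rolling summary might work, where each chunk is summarized together with the summary produced so far; summarizeText below is only a placeholder for whatever Cohere call ends up being used, not an actual API:

// Hypothetical rolling summarization: each chunk is condensed together with the
// summary accumulated so far, so context carries over between chunks.
// summarizeText(text) is a placeholder that should return a summary string.
async function summarizeInChunks(chunks, summarizeText) {
  let summary = "";
  for (const chunk of chunks) {
    summary = await summarizeText(
      summary ? `Summary so far:\n${summary}\n\nNew messages:\n${chunk}` : chunk
    );
  }
  return summary;
}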
@mangadi3859 It appears that combining the two functions didn't work out of the box due to promise-array issues, but now it's resolved and it seems like your algorithm works really well! This will also reduce the size of the extension because it'll remove the device type dependency (as the client had a different structure on mobile and on legacy chats). At the moment I'm working on integrating it with the C.AI extension to replace the export / summarizer function grabbers.
This has been successfully implemented for both the Chat Export functions and Automatic Generation. It will be included in the next stable release, but it's already available in the source code. Thank you for your amazing proposal!