Sentimental AI
nduartech
---
slug: sentimental
description: My experience at a company hackathon
published: 2024-11-16
---
A couple of months ago, the Treasury Services technology group held an internal hackathon in conjunction with the Eliza Platform team. Eliza is BNY's in-house AI solutions platform, and the challenge for the event was to incorporate Eliza into a project that aligned with corporate OKRs and values.
My initial idea built on the progress our team had made with file imports: a system that would automatically suggest corrections for invalid data in those files, using the errors the system generated along with user entitlements to diagnose and fix issues. That idea was unfortunately not selected for the next stage (so I never got to test the feasibility of the approach), but I was fortunately invited to join another group within our team whose idea was accepted.
For background, our team had previously embedded a chatbot in our UI, built around an agent the Eliza team developed specifically to answer common questions about the Treasury Online Banking Platform. Building on this, our group planned to add a user feedback form to the same component, which would submit the collected data to an asynchronous process for analysis.
We all contributed: one team member handled the UI development, another wrote the initial version of the back end (which gave us a great starting point), and I built on that to develop the final version of our agentic flow. My wife even chipped in by naming the team/project ☺️. Below is a diagram I made in Excalidraw outlining the approach:
By using this manager-worker multi-agent approach, and by breaking a complex task like sentiment analysis of user feedback into four sequential tasks, we were able to achieve consistent results. The system could separate complex feedback into segments based on their relevance to UI features, assign a separate sentiment score to each segment, and use those scores to arrive at an overall sentiment score for the feedback as a whole.
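To make the shape of the flow concrete, here is a minimal sketch of how such a manager-worker pipeline might be wired up. All names are my own invention, a toy keyword lexicon stands in for the LLM-backed workers (the real system called agents on the Eliza platform), and the four sequential tasks are collapsed into three stages for brevity:

```python
import re
from dataclasses import dataclass

# Hypothetical feature list and sentiment lexicon; a real worker would
# call an LLM rather than match keywords.
FEATURES = ("payments", "fx", "search")
NEGATIVE = ("nothing works", "can't", "broken")
POSITIVE = ("working", "great", "love")

@dataclass
class Segment:
    feature: str
    text: str
    score: float = 0.0  # -1.0 (negative) to 1.0 (positive)

def segment_feedback(feedback: str) -> list[Segment]:
    """Worker 1: split feedback into feature-relevant segments."""
    segments = []
    for clause in re.split(r"[.,!?]", feedback.lower()):
        for feature in FEATURES:
            if feature in clause:
                segments.append(Segment(feature, clause.strip()))
    return segments

def score_segments(segments: list[Segment]) -> list[Segment]:
    """Worker 2: assign a sentiment score to each segment."""
    for seg in segments:
        if any(word in seg.text for word in NEGATIVE):
            seg.score = -1.0
        elif any(word in seg.text for word in POSITIVE):
            seg.score = 1.0
    return segments

def overall_score(segments: list[Segment]) -> float:
    """Worker 3: combine per-segment scores into one overall score."""
    return sum(s.score for s in segments) / len(segments) if segments else 0.0

def analyze(feedback: str) -> tuple[list[Segment], float]:
    """Manager: run the workers in sequence and collect the results."""
    segments = score_segments(segment_feedback(feedback))
    return segments, overall_score(segments)
```

The key design point is that the manager only sequences and aggregates; each worker does one narrow task, which is what made the results consistent.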
This included inheritance of sentiment, as when features are listed (for example, "Payments, FX, search, nothing works!" lists three features, and the overall sentiment of the list has to be distributed to each of them), and contradiction of sentiment (for example, "Even though search is working, I can't submit any payments!" lists two features with opposing sentiments, each of which should be attributed to the correct feature).
On the UI side, as part of the pitch, we created custom screens for our product and client support teams to read feature-level feedback. These would help them gauge how individual features, feature groups, and the TS Online Banking product as a whole were performing with clients, which in turn would help with prioritization as the product's feature offering continues to grow.
This entailed two pages. The first was a dashboard with charts offering holistic and drill-down views of average sentiment scores across feature groups, as well as within individual features. The second presented the actual feedback for each feature group (filterable by drop-down menu), with highlighting to show where the agents segmented the text, plus tool-tips that appear when a segment is clicked, showing the score the agent assigned to that segment and its reasoning for the score.
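The dashboard's roll-up from per-segment scores to feature-group averages is straightforward; a sketch of that aggregation, with an invented feature-to-group mapping:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical feature-to-group mapping for the dashboard roll-up.
FEATURE_GROUPS = {
    "payments": "Transactions",
    "fx": "Transactions",
    "search": "Navigation",
}

def dashboard_rollup(scored: list[tuple[str, float]]) -> dict[str, float]:
    """Average per-segment sentiment scores up to the feature-group level."""
    by_group: dict[str, list[float]] = defaultdict(list)
    for feature, score in scored:
        by_group[FEATURE_GROUPS[feature]].append(score)
    return {group: mean(values) for group, values in by_group.items()}
```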
Even though our project was not selected for the finalist stage, this experience taught me quite a bit, and I am grateful for the opportunity and more than proud of the result we produced in a short period of time. Having taken courses in AI and NLP in college, I had really missed working in this field after several years on the application development side, and the chance to jump back in has certainly whetted my appetite.