Firebase functions clean-up #270

Merged · 6 commits · Jul 12, 2024
51 changes: 51 additions & 0 deletions Documentation/Firebase/functions.md
@@ -0,0 +1,51 @@
# Firebase Functions Documentation 📚

In this section, we'll explore how to use Firebase Functions, a crucial part of our system. Let's dive in! 🚀

## Setting Up Environment Variables 🛠️

To ensure your Firebase Functions work correctly both locally and when deployed, follow these steps to set up your environment variables:

1. **Locate the `.env.template` File** - Navigate to the `functions` folder and find the `.env.template` file. This file contains the necessary environment variable keys for your project.
2. **Create a `.env` File** - Copy the `.env.template` file and rename the copy to `.env`. This file will be used to store your actual environment variable values.
3. **Replace Values** - Open the `.env` file and replace the placeholder values with your actual environment variable values. This step is crucial for the proper functioning of your Firebase Functions both locally and during deployment.

By following these steps, you ensure that your Firebase Functions have the necessary environment variables to operate correctly in any environment. 🌐
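The steps above can be sketched as shell commands, assuming you start from the repository root (the example values are placeholders, not real credentials):

```shell
# From the repository root: create your local .env from the template.
cd functions
cp .env.template .env

# Then open .env and replace the placeholders with your real values, e.g.:
#   OPEN_AI_API_KEY="sk-..."      # placeholder, use your own key
#   ASTRA_DB_URL="https://..."    # placeholder, use your own endpoint
```

The `.env` file holds secrets, so make sure it stays out of version control; only `.env.template` should be committed.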

## Getting Started with Firebase Functions Locally 🏁

To run Firebase Functions locally, follow these steps:

1. **Install the Firebase CLI** - Essential for both local development and deployment. [Learn more here](https://firebase.google.com/docs/cli) 🛠️
2. **Login to Firebase CLI** - Necessary only if you plan to deploy your functions. 🔑
3. **Select the Project** - Required if you're deploying the functions. 📂
4. **Confirmation** - To confirm the Firebase CLI is correctly installed, open your terminal and run: `firebase --version` ✅
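As a minimal sketch, steps 1–4 look like this in the terminal (`your-project-id` is a placeholder; installing via `npm` is one of several options the Firebase docs describe):

```shell
# 1. Install the Firebase CLI (here via npm; standalone binaries also exist)
npm install -g firebase-tools

# 2. Log in (only needed if you plan to deploy)
firebase login

# 3. Select your project (only needed for deployment; placeholder id)
firebase use your-project-id

# 4. Confirm the CLI is correctly installed
firebase --version
```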

## Running the Firebase Functions Emulator 🚀

To get the Firebase Functions Emulator up and running, follow these steps:

1. **Install Emulators** - If you haven't already, install the necessary emulators. [Learn more here](https://firebase.google.com/docs/emulator-suite/install_and_configure) 📦
2. **Initialize Emulators** - Run `firebase init emulators` to set up the emulators in your project. This step helps configure the emulators according to your project's needs. 🛠️
3. **Start the Emulator** - From the root of your repository, run `firebase emulators:start --only functions` to start the emulator. 🖥️

That's it! You're all set. The status and logs of your Firebase Functions will appear in this console once the emulator is running. Keep an eye on it for real-time updates and debugging information. 🎉
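Once the emulator is up, an HTTP function such as `get_response_url` from this PR can be exercised with `curl`. The URL below follows the emulator's standard `localhost:5001/<project-id>/<region>/<function>` pattern; the project id and region are placeholders for your own configuration:

```shell
# Call the emulated get_response_url function with a JSON body
curl -X POST \
  "http://localhost:5001/your-project-id/us-central1/get_response_url" \
  -H "Content-Type: application/json" \
  -d '{"query": "What are good sources of vitamin D?", "llms": ["gpt-4"]}'
```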

## Working with Firebase Emulator and React Native 📱

Working with the Firebase emulator and React Native can be challenging because the two run on different systems: the Firebase Functions emulator runs on your development machine, while your app runs on a physical phone or inside a device emulator. Each of these has its own `localhost`, so the app cannot reach the functions at `localhost` directly.

## Overcoming Localhost Issues with Firebase and React Native 🌉

To address the challenge of different localhost environments between the Firebase emulator (running on your local machine) and your React Native app (running on a phone or emulator), we've implemented a solution using HTTP callable functions.

### Testing with HTTP Callable Functions 🧪

1. **Create HTTP Callable Functions**: These functions can be invoked via HTTP requests, making them accessible regardless of the `localhost` issue.
2. **Testing with Postman or Thunder Client**: Before integrating with your React Native app, test the HTTP callable functions using tools like Postman or Thunder Client, both available as VS Code extensions. This ensures the functions behave as expected.
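As a sketch, the request body that the `get_response_url` function in this PR expects can be built and inspected in Python before pasting it into Postman or Thunder Client (the query string below is just an example):

```python
import json

# Build the JSON body expected by get_response_url (see functions/main.py):
# "query" is the user's question, "llms" selects one or more supported models.
payload = {
    'query': 'What are the health benefits of leafy greens?',
    'llms': ['gpt-4', 'gpt-3.5-turbo-instruct'],
}
body = json.dumps(payload)
print(body)
```

Paste the printed JSON into the request body of your HTTP client, with the `Content-Type: application/json` header set.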

### Deployment 🚀

- **Why Deployment is Necessary**: For your React Native app to access the Firebase Functions, you must deploy them. Deployed functions get a public URL that is reachable from any device, which sidesteps the `localhost` mismatch entirely.

By following these steps, you can effectively bridge the gap between your Firebase Functions and React Native app, ensuring a smooth development and testing process.
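Deployment itself is a single CLI command, run from the repository root after you have logged in and selected your project:

```shell
# Deploy only the functions (leaves other Firebase resources untouched)
firebase deploy --only functions
```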
5 changes: 5 additions & 0 deletions functions/.env.template
@@ -0,0 +1,5 @@
OPEN_AI_API_KEY=""
ASTRA_DB_URL=""
ASTRA_DB_TOKEN=""
ASTRA_DB_COLLECTION_NAME=""
ASTRA_DB_NAMESPACE=""
38 changes: 38 additions & 0 deletions functions/chain.py
@@ -0,0 +1,38 @@
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from meta import document_content_description, metadata_field_info


def create_health_ai_chain(llm, vector_store):
retriever = SelfQueryRetriever.from_llm(
llm=llm,
vectorstore=vector_store,
document_content_description=document_content_description,
metadata_field_info=metadata_field_info,
document_contents='',
)
health_ai_template = """
You are a health AI agent equipped with access to diverse sources of health data,
including research articles, nutritional information, medical archives, and more.
Your task is to provide informed answers to user queries based on the available data.
If you cannot find relevant information, simply state that you do not have enough data
to answer accurately. write your response in markdown form and also add reference url
so user can know from which source you are answering the questions.

CONTEXT:
{context}

QUESTION: {question}

YOUR ANSWER:
"""
health_ai_prompt = ChatPromptTemplate.from_template(health_ai_template)
chain = (
{'context': retriever, 'question': RunnablePassthrough()}
| health_ai_prompt
| llm
| StrOutputParser()
)
return chain
5 changes: 5 additions & 0 deletions functions/config.py
@@ -0,0 +1,5 @@
from firebase_admin import initialize_app


def initialize_firebase():
initialize_app()
25 changes: 25 additions & 0 deletions functions/handlers.py
@@ -0,0 +1,25 @@
from os import environ

from chain import create_health_ai_chain
from config import initialize_firebase
from langchain_openai import ChatOpenAI
from store import get_vector_store

initialize_firebase()


def get_health_ai_response(question, llm):
vector_store = get_vector_store()
chain = create_health_ai_chain(llm, vector_store)
response = chain.invoke(question)
return response


def get_response_from_llm(query, llm):
models = {'gpt-4': {}, 'gpt-3.5-turbo-instruct': {'name': 'gpt-3.5-turbo-instruct'}}
if llm in models:
llm_model = ChatOpenAI(api_key=environ.get('OPEN_AI_API_KEY'), temperature=0, **models[llm])
response = get_health_ai_response(query, llm_model)
return response
else:
return 'Model Not Found'
201 changes: 10 additions & 191 deletions functions/main.py
@@ -1,207 +1,26 @@
from firebase_admin import initialize_app
from firebase_functions import https_fn, options
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_astradb import AstraDBVectorStore
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

initialize_app()


# Initialize embeddings and vector store
def initialize_vector_store(token):
embeddings = OpenAIEmbeddings(api_key='')
return AstraDBVectorStore(
embedding=embeddings,
collection_name='test_collection_2',
api_endpoint='',
token=token,
namespace='test',
)


# Metadata field info
metadata_field_info = [
AttributeInfo(
name='author',
description='The author of the YouTube video or nutrition article',
type='string',
),
AttributeInfo(
name='videoId', description='The unique identifier for the YouTube video', type='string'
),
AttributeInfo(name='title', description='The title of the content', type='string'),
AttributeInfo(
name='keywords',
description='A list of keywords associated with the YouTube video',
type='List[string]',
),
AttributeInfo(
name='viewCount', description='The number of views for the YouTube video', type='string'
),
AttributeInfo(
name='shortDescription',
description='A short description of the YouTube video',
type='string',
),
AttributeInfo(name='transcript', description='The transcript of the content', type='string'),
AttributeInfo(
name='authors', description='The authors of the PubMed or Archive document', type='list'
),
AttributeInfo(
name='publicationDate',
description='The publication date of the PubMed or Archive document',
type='string',
),
AttributeInfo(
name='abstract', description='The abstract of the PubMed or Archive document', type='string'
),
AttributeInfo(name='date', description='The date of the nutrition article', type='string'),
AttributeInfo(
name='keyPoints', description='The key points of the nutrition article', type='string'
),
AttributeInfo(name='subTitle', description='The subtitle of the recipe', type='string'),
AttributeInfo(name='rating', description='The rating of the recipe', type='float'),
AttributeInfo(
name='recipeDetails',
description='The details of the recipe include the time also.',
type='Dict[string, string]',
),
AttributeInfo(
name='ingredients', description='A list of ingredients for the recipe', type='List[string]'
),
AttributeInfo(name='steps', description='The steps to prepare the recipe', type='List[string]'),
AttributeInfo(
name='nutritionFacts',
description='Nutritional facts of the recipe',
type='Dict[string, string]',
),
AttributeInfo(
name='nutritionInfo',
description='Detailed nutritional information of the recipe',
type='Dict[string, Dict[string, string]]',
),
]

document_content_description = """
It includes a variety of metadata to describe different aspects of the content:

General Information:
- Title: The title of the content.
- Transcript: A full transcript of any video, audio, or written content associated with document.

YouTube Video Information:
- Author: The author or creator of the YouTube video.
- VideoId: The unique identifier for the YouTube video.
- Keywords: A list of relevant keywords associated with the YouTube video.
- ViewCount: The number of views for the YouTube video.
- Short Description: A brief overview of the YouTube video.

PubMed Article Information:
- Authors: List of authors for the PubMed article.
- PublicationDate: The date when the PubMed article was published.
- Abstract: A summary of the PubMed article.

Podcast Information:
- Title: The title of the podcast episode.
- Transcript: The transcript of the podcast episode.
from json import dumps

Nutrition Article Information:
- Title: The title of the nutrition article.
- Date: The date when the nutrition article was published.
- Author: The author of the nutrition article.
- Key Points: Important highlights or key points about recipe from the nutrition article.

Recipe Information:
- Title: The title of the recipe.
- SubTitle: The subtitle of the recipe.
- Rating: The rating of the recipe, if available.
- Recipe Details: Detailed information about the recipe, including preparation time,
cooking time, and serving size.
- Ingredients: A list of ingredients required for making recipe.
- Steps: Step-by-step instructions to prepare the dish.
- Nutrition Facts: Basic nutritional information about the recipe.
- Nutrition Info: Detailed nutritional information, including amounts and daily values.

Archived Document Information:
- Title: The title of the archived document.
- Authors: List of authors for the archived document.
- Abstract: A summary of the archived document.
- PublicationDate: The date when the archived document was published.
"""


# Create a function to get response from the chain
def get_health_ai_response(question, llm):
token = ''
vector_store = initialize_vector_store(token)
retriever = SelfQueryRetriever.from_llm(
llm=llm,
vectorstore=vector_store,
document_content_description=document_content_description,
metadata_field_info=metadata_field_info,
document_contents='',
)
health_ai_template = """
You are a health AI agent equipped with access to diverse sources of health data,
including research articles, nutritional information, medical archives, and more.
Your task is to provide informed answers to user queries based on the available data.
If you cannot find relevant information, simply state that you do not have enough data
to answer accurately. write your response in markdown form and also add reference url
so user can know from which source you are answering the questions.

CONTEXT:
{context}

QUESTION: {question}

YOUR ANSWER:
"""
health_ai_prompt = ChatPromptTemplate.from_template(health_ai_template)
chain = (
{'context': retriever, 'question': RunnablePassthrough()}
| health_ai_prompt
| llm
| StrOutputParser()
)
response = chain.invoke(question)
return response


def get_response_from_llm(query, llm):
api_key = ''
llm_model = None
if llm == 'gpt-4':
llm_model = ChatOpenAI(api_key=api_key, temperature=0)
if llm == 'gpt-3.5-turbo-instruct':
llm_model = ChatOpenAI(name='gpt-3.5-turbo-instruct', api_key=api_key, temperature=0)
if llm_model is not None:
response = get_health_ai_response(query, llm_model)
return response
else:
return 'Model Not Found'
from firebase_functions import https_fn, options
from handlers import get_response_from_llm


@https_fn.on_request(cors=options.CorsOptions(cors_origins=['*'], cors_methods=['get', 'post']))
@https_fn.on_request(cors=options.CorsOptions(cors_origins=['*']))
def get_response_url(req: https_fn.Request) -> https_fn.Response:
query = req.get_json().get('query', '')
llms = req.get_json().get('llms', ['gpt-4'])
responses = []
responses = {}
for llm in llms:
response = get_response_from_llm(query, llm)
responses.append(response)
return https_fn.Response(responses)
responses[llm] = response
return https_fn.Response(dumps(responses), mimetype='application/json')


@https_fn.on_call()
def get_response(req: https_fn.CallableRequest):
query = req.data.get('query', '')
llms = req.data.get('llms', ['gpt-4'])
responses = []
responses = {}
for llm in llms:
response = get_response_from_llm(query, llm)
responses.append(response)
return response
responses[llm] = response
return responses