
Commit

No public description
PiperOrigin-RevId: 723596187
MediaPipe Team authored and copybara-github committed Feb 5, 2025
1 parent d9cdaea commit 17d10fd
Showing 1 changed file with 19 additions and 0 deletions.
mediapipe/tasks/cc/genai/inference/c/llm_inference_engine.h
@@ -23,6 +23,10 @@
#include <stdint.h>
#endif

#ifdef __EMSCRIPTEN__
#include <functional>
#endif // __EMSCRIPTEN__

#ifndef ODML_EXPORT
#define ODML_EXPORT __attribute__((visibility("default")))
#endif // ODML_EXPORT
@@ -68,6 +72,21 @@ typedef struct {
// Path to the model artifact.
const char* model_path;

#ifdef __EMSCRIPTEN__
// Function to read the model file.
// The function returns a pointer to heap memory that contains the model file
// contents starting at `offset` and spanning `size` bytes.
// Since the model file is hosted in the JavaScript layer and this function
// copies the data into heap memory, `mode` indicates how the source model
// file data should be handled:
// 0: Data will be kept in memory after the read.
// 1: Data will not be accessed again and can be discarded.
// 2: All data has been used and can be discarded.
using ReadDataFn =
std::function<void*(uint64_t offset, uint64_t size, int mode)>;
ReadDataFn* read_model_fn;
#endif // __EMSCRIPTEN__

// Path to the vision encoder to use for vision modality. Optional.
const char* vision_encoder_path;

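For context, here is a minimal host-side sketch of a callable matching the new ReadDataFn signature. The in-memory byte vector, the malloc-based buffer, and the mode handling shown are illustrative assumptions only, not part of the MediaPipe API; in a real Emscripten build the model bytes would live on the JavaScript side.

#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <vector>

using ReadDataFn = std::function<void*(uint64_t offset, uint64_t size, int mode)>;

int main() {
  // Stand-in for the model file contents (assumption for this sketch).
  std::vector<uint8_t> model_bytes(1024, 0xAB);

  ReadDataFn read_fn = [&model_bytes](uint64_t offset, uint64_t size,
                                      int mode) -> void* {
    // Copy the requested range into freshly allocated heap memory, as the
    // header comment describes.
    void* buffer = std::malloc(size);
    std::memcpy(buffer, model_bytes.data() + offset, size);
    if (mode == 1 || mode == 2) {
      // Modes 1 and 2 signal that the source data will not be read again;
      // a real host could release the JavaScript-side copy at this point.
    }
    return buffer;
  };

  // The struct stores a ReadDataFn* (see `read_model_fn` in the diff), so the
  // std::function must outlive whatever engine uses it.
  ReadDataFn* read_model_fn = &read_fn;
  void* chunk = (*read_model_fn)(/*offset=*/0, /*size=*/256, /*mode=*/0);
  std::free(chunk);
  return 0;
}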
