Releases: googleapis/python-aiplatform
v1.36.1
1.36.1 (2023-11-07)
Features
- Add `per_crowding_attribute_neighbor_count`, `approx_num_neighbors`, `fraction_leaf_nodes_to_search_override`, and `return_full_datapoint` to `MatchingEngineIndexEndpoint` `find_neighbors` (33c551e)
- Add profiler support to tensorboard uploader sdk (be1df7f)
- Add support for `per_crowding_attribute_num_neighbors`, `approx_num_neighbors` to `MatchingEngineIndexEndpoint` `match()` (e5c20c3)
- Add support for `per_crowding_attribute_num_neighbors`, `approx_num_neighbors` to `MatchingEngineIndexEndpoint` `match()` (53d31b5)
- Add support for `per_crowding_attribute_num_neighbors`, `approx_num_neighbors` to `MatchingEngineIndexEndpoint` `match()` (4e357d5)
(4e357d5) - Enable grounding to ChatModel send_message and send_message_async methods (d4667f2)
- Enable grounding to TextGenerationModel predict and predict_async methods (b0b4e6b)
- LLM - Added support for the `enable_checkpoint_selection` tuning evaluation parameter (eaf4420)
- LLM - Added tuning support for the `*-bison-32k` models (9eba18f)
- LLM - Released `CodeChatModel` tuning to GA (621af52)
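The four `find_neighbors` knobs added in this release can be combined in a single call. A minimal sketch, assuming google-cloud-aiplatform >= 1.36.1 and an already deployed index; all resource IDs below are placeholders:

```python
# Sketch: calling MatchingEngineIndexEndpoint.find_neighbors with the
# recall/latency parameters added in 1.36.1. Resource names are placeholders.

def build_find_neighbors_kwargs(query_vector):
    """Collect find_neighbors arguments, including the new optional ones."""
    return dict(
        deployed_index_id="my_deployed_index",        # placeholder ID
        queries=[query_vector],
        num_neighbors=10,
        per_crowding_attribute_neighbor_count=2,      # cap results per crowding tag
        approx_num_neighbors=50,                      # candidates before exact reranking
        fraction_leaf_nodes_to_search_override=0.2,   # search 20% of leaf nodes
        return_full_datapoint=True,                   # include full vectors in results
    )

def main():
    # Requires GCP credentials; invoke from your own entry point.
    from google.cloud.aiplatform import MatchingEngineIndexEndpoint

    endpoint = MatchingEngineIndexEndpoint(
        # Placeholder resource name.
        index_endpoint_name="projects/my-project/locations/us-central1/indexEndpoints/123"
    )
    matches = endpoint.find_neighbors(**build_find_neighbors_kwargs([0.1] * 128))
    for neighbor in matches[0]:
        print(neighbor.id, neighbor.distance)
```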
Bug Fixes
- Correct class name in system test (b822b57)
Documentation
- Clean up RoV create_ray_cluster docstring (1473e19)
Miscellaneous Chores
- Release 1.36.1 (1cde170)
v1.36.0
1.36.0 (2023-10-31)
Features
- Add preview count_tokens method to CodeGenerationModel (96e7f7d)
- Allow users to pass extra serialization arguments for objects (ffbd872)
- Also support serializing unhashable objects with extra args (77a741e)
- LLM - Added `count_tokens` support to `ChatModel` (preview) (01989b1)
- LLM - Added new regions for tuning and tuned model inference (3d43497)
- LLM - Added support for async streaming (760a025)
- LLM - Added support for multiple response candidates in code chat models (598d57d)
- LLM - Added support for multiple response candidates in code generation models (0c371a4)
- LLM - Enable tuning eval TensorBoard without evaluation data (eaf5d81)
- LLM - Released `CodeGenerationModel` tuning to GA (87dfe40)
- LLM - Support `accelerator_type` in tuning (98ab2f9)
- Support experiment autologging when using persistent cluster as executor (c19b6c3)
- Upgrade BigQuery Datasource to use write() interface (7944348)
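The preview `count_tokens` method added for `CodeGenerationModel` in this release lets you size a prompt before paying for a `predict` call. A hedged sketch, assuming google-cloud-aiplatform >= 1.36.0; the project ID is a placeholder, and the exact response fields reflect the preview API at the time of this release:

```python
# Sketch: counting tokens with the preview CodeGenerationModel before
# predicting. Project/model IDs are placeholders.

PROMPT = "Write a function that checks if a year is a leap year."

def main():
    # Requires GCP credentials; invoke from your own entry point.
    import vertexai
    from vertexai.preview.language_models import CodeGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholders
    model = CodeGenerationModel.from_pretrained("code-bison")
    token_info = model.count_tokens([PROMPT])
    # The preview response exposes token and billable-character totals.
    print(token_info.total_tokens, token_info.total_billable_characters)
```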
Bug Fixes
- Add setuptools to dependencies for Python 3.12 and above (afd540d)
- Fix Bigframes tensorflow serializer dependencies (b4cdb05)
- LLM - Fixed the async streaming (41bfcb6)
- LLM - Make tuning use the global staging bucket if specified (d9ced10)
- LVM - Fixed negative prompt in `ImageGenerationModel` (cbe3a0d)
- Made the Endpoint prediction client initialization lazy (eb6071f)
- Make sure PipelineRuntimeConfigBuilder is created with the right arguments (ad19838)
- Make sure the models list is populated before indexing (f1659e8)
- Raise exception for RoV BQ Write for too many rate limit exceeded (7e09529)
- Rollback BigQuery Datasource to use do_write() interface (dc1b82a)
v1.35.0
1.35.0 (2023-10-10)
Features
- Add serializer.register_custom_command() (639cf10)
- Install Bigframes sklearn dependencies automatically (7aaffe5)
- Install Bigframes tensorflow dependencies automatically (e58689b)
- Install Bigframes torch dependencies automatically (1d65347)
- LLM - Added support for multiple chat response candidates (587df74)
- LLM - Added support for multiple text generation response candidates (c3ae475)
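Multiple response candidates are requested through a `candidate_count` argument on the generation call. A minimal sketch, assuming google-cloud-aiplatform >= 1.35.0; project and location are placeholders:

```python
# Sketch: asking for several text generation candidates in one call,
# per the multiple-candidates support added in 1.35.0.

GENERATION_KWARGS = dict(
    candidate_count=3,      # request three alternative completions
    temperature=0.8,        # higher temperature -> more diverse candidates
    max_output_tokens=128,
)

def main():
    # Requires GCP credentials; invoke from your own entry point.
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholders
    model = TextGenerationModel.from_pretrained("text-bison")
    response = model.predict("Name three uses for a paperclip.", **GENERATION_KWARGS)
    for candidate in response.candidates:
        print(candidate.text)
```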
Bug Fixes
- Duplicate logs in Colab (9b75259)
- LLM - Fixed tuning and evaluation when explicit credentials are specified (188dffe)
- Resolve Artifact Registry tags when creating PipelineJob (f04ca35)
- Resolve Artifact Registry tags when creating PipelineJob (06bf487)
Documentation
- Add probabilistic inference to TiDE and L2L model code samples. (efe88f9)
v1.34.0
1.34.0 (2023-10-02)
Features
- Add Model Garden support to vertexai.preview.from_pretrained (f978200)
- Enable vertexai preview persistent cluster executor (0ae969d)
- LLM - Added the `count_tokens` method to the preview `TextGenerationModel` and `TextEmbeddingModel` classes (6a2f2aa)
- LLM - Improved representation for blocked responses (222f222)
- LLM - Released `ChatModel` tuning to GA (7d667f9)
Bug Fixes
- Create PipelineJobSchedule in same project and location as associated PipelineJob by default (c22220e)
Documentation
- Add documentation for the preview namespace (69a67f2)
v1.33.1
v1.33.0
1.33.0 (2023-09-18)
Features
- Add Custom Job support to from_pretrained (8b0add1)
- Added async prediction and explanation support to the `Endpoint` class (e9eb159)
- LLM - Added support for async prediction methods (c9c9f10)
- LLM - CodeChat - Added support for `context` (f7feeca)
- Release Ray on Vertex SDK Preview (3be36e6)
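The async prediction methods added in this release are driven with `asyncio`. A hedged sketch, assuming google-cloud-aiplatform >= 1.33.0 and that the method is named `predict_async` as listed above; project and model IDs are placeholders:

```python
# Sketch: the async prediction path added in 1.33.0.
import asyncio

PROMPT = "Summarize the theory of relativity in one sentence."

async def generate():
    # Requires GCP credentials; run with asyncio.run(generate())
    # from your own entry point.
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-project", location="us-central1")  # placeholders
    model = TextGenerationModel.from_pretrained("text-bison")
    response = await model.predict_async(PROMPT, max_output_tokens=64)
    return response.text
```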
Bug Fixes
v1.32.0
1.32.0 (2023-09-05)
Features
- LLM - Added `stop_sequences` parameter to streaming methods and `CodeChatModel` (d62bb1b)
- LLM - Improved the handling of temperature and top_p in streaming (6566529)
- Support bigframes sharded parquet ingestion at remote deserialization (Tensorflow) (a8f85ec)
- Release Vertex SDK Preview (c60b9ca)
- Allow setting default service account (d11b8e6)
Bug Fixes
- Fix feature update since no LRO is created (468e6e7)
- LLM - `CodeGenerationModel` now supports safety attributes (c2c8a5e)
- LLM - Fixed batch prediction on tuned models (2a08535)
- LLM - Fixed the handling of the `TextEmbeddingInput.task_type` parameter (2e3090b)
- Make statistics Optional for TextEmbedding (7eaa1d4)
v1.31.1
v1.31.0
1.31.0 (2023-08-21)
Features
- Add disable_retries option to custom jobs. (db518b0)
- LLM - Added support for `stop_sequences` in inference (6f7ea84)
- LLM - Exposed the `TextGenerationResponse.raw_prediction_response` (f8f2b9c)
- LLM - Made tuning asynchronous when tuning becomes GA (226ab8b)
- LLM - Released model evaluation for `TextGenerationModel` to public preview (8df5185)
- LLM - Released `TextGenerationModel` tuning to GA (62ff30d)
- LLM - Support streaming prediction for chat models (ce60cf7)
- LLM - Support streaming prediction for code chat models (0359f1d)
- LLM - Support streaming prediction for code generation models (3a8348b)
- LLM - Support streaming prediction for text generation models (fb527f3)
- LLM - TextEmbeddingModel - Added support for structural inputs (`TextEmbeddingInput`), the `auto_truncate` parameter, and result `statistics` (cbf9b6e)
- LVM - Added support for Image Generation models (b3729c1)
- LVM - Released `ImageCaptioningModel` to GA (7575046)
- LVM - Released `ImageQnAModel` to GA (fd5cb02)
- LVM - Released `MultiModalEmbeddingModel` to GA (e99f366)
- LVM - Removed the `width` and `height` parameters from `ImageGenerationModel.generate_images` since the service has dropped support for image sizes and aspect ratios (52897e6)
- Scheduled pipelines client GA (62b8b23)
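The structural inputs, `auto_truncate` parameter, and result `statistics` added for `TextEmbeddingModel` combine as follows. A minimal sketch, assuming google-cloud-aiplatform >= 1.31.0; the project ID and model version are placeholders, and the statistics fields reflect the API at the time of this release:

```python
# Sketch: embedding text with the structural TextEmbeddingInput wrapper,
# auto-truncation, and per-result statistics added in 1.31.0.

TASK_TYPE = "RETRIEVAL_DOCUMENT"  # task type hint passed with the input

def main():
    # Requires GCP credentials; invoke from your own entry point.
    import vertexai
    from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

    vertexai.init(project="my-project", location="us-central1")  # placeholders
    model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")
    inputs = [TextEmbeddingInput(text="The quick brown fox.", task_type=TASK_TYPE)]
    # auto_truncate clips over-long inputs instead of raising an error.
    embeddings = model.get_embeddings(inputs, auto_truncate=True)
    for embedding in embeddings:
        print(len(embedding.values), embedding.statistics.token_count)
```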