
Releases: brainsqueeze/text2vec

v2.0.3

15 Jul 23:19

Enables training dropout for the Bahdanau attention layers.
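For context, Bahdanau-style (additive) attention scores each encoder timestep with a small feed-forward network and softmaxes the scores into weights; applying dropout to those weights only during training is a common way to regularize the layer. A minimal NumPy sketch of that idea — the function name, shapes, and dropout placement are illustrative assumptions, not text2vec's actual implementation:

```python
import numpy as np

def bahdanau_attention(query, values, W1, W2, v,
                       dropout_rate=0.1, training=False, rng=None):
    """Additive (Bahdanau) attention over a sequence of encoder states.

    query:  (d,)       decoder state
    values: (T, d)     encoder states
    W1, W2: (d, units) learned projections; v: (units,) scoring vector
    """
    # score_t = v . tanh(W1 q + W2 h_t) for each timestep t
    scores = np.tanh(query @ W1 + values @ W2) @ v      # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                            # softmax over timesteps
    if training and dropout_rate > 0.0:
        rng = rng or np.random.default_rng()
        keep = rng.random(weights.shape) >= dropout_rate
        weights = weights * keep / (1.0 - dropout_rate) # inverted dropout
    context = weights @ values                          # (d,) weighted sum
    return context, weights
```

With `training=False` (inference) the dropout branch is skipped entirely, so the attention weights remain a proper distribution over timesteps.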

v2.0.2

15 Jul 15:06
85e44b7

Minor fixes including docstring updates and removal of layer name overrides.

v2.0.1

07 Jul 21:31
27fd3aa

Breaking changes; see the associated pull requests.

v1.2.0

27 Sep 17:42
  • Breaking changes to the ServingModel wrapper class, which now returns dictionary outputs
  • New strings module with a SubTokenFinderMask class that performs ragged substring searches and masking
  • More flexible encoder/decoder network flow
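The SubTokenFinderMask interface itself isn't shown in these notes; as a rough illustration of what a ragged substring search and mask produces, here is a pure-Python sketch (function name and return format are hypothetical, not text2vec's API). The result is ragged because each input string yields a different number of match spans:

```python
def substring_mask(texts, pattern):
    """For each string, return the (ragged) list of character spans where
    `pattern` occurs, plus a boolean mask over the matched characters."""
    spans, masks = [], []
    for text in texts:
        hits, start = [], 0
        while (i := text.find(pattern, start)) != -1:
            hits.append((i, i + len(pattern)))
            start = i + 1  # advance one char so overlapping matches are found
        mask = [any(a <= j < b for a, b in hits) for j in range(len(text))]
        spans.append(hits)
        masks.append(mask)
    return spans, masks
```

For example, `substring_mask(["banana", "cat"], "an")` finds the overlapping spans `(1, 3)` and `(3, 5)` in "banana" and nothing in "cat".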

v1.0.0

08 Jun 19:53
  • Breaking changes for some layers; deprecated layers are noted in the README
  • The training loop via /bin/main.py is deprecated and will be removed in a future version
  • Bundled data has been removed in favor of the much richer HuggingFace datasets library
  • More flexible API for training auto-encoders
  • Leverages HuggingFace tokenizers

v0.4.3

27 Oct 14:57
  • Improved training performance
  • Slimmed-down tf.saved_model output for inference
  • Better documentation

v0.2.2

10 Feb 17:33

Updates include:

  • YAML config for training to avoid long CLI inputs
  • More flexible method for handling external training data
  • Convenience CLI function
  • PyYAML dependency added
  • Improvements to TensorBoard logging for scalar quantities
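To illustrate the idea of replacing long CLI inputs with a config file, here is a hypothetical example of what such a YAML training config might look like; every field name below is illustrative, not text2vec's actual schema:

```yaml
# Example training configuration (illustrative field names)
model:
  embedding_size: 128
  num_hidden: 64
  attention_size: 32
training:
  epochs: 10
  batch_size: 32
  learning_rate: 1.0e-3
data:
  train_path: ./data/train.txt
  eval_path: ./data/eval.txt
logging:
  tensorboard_dir: ./logs
```

A file like this can be loaded once with PyYAML's `yaml.safe_load` and passed through the training entry point, instead of threading each value through a separate CLI flag.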

v0.1.1-beta

31 Dec 14:52
Pre-release

Removes the encoder and decoder masking on the Bahdanau attention during the transformer decode pipeline.

v0.1

29 Dec 19:18

Initial release of text2vec. This includes tools for creating attention-based and LSTM-based transformer models that turn sentences into vectors encoding contextual meaning.