
[code_search] Train a high quality model #239

Closed
jlewi opened this issue Aug 27, 2018 · 34 comments

@jlewi (Contributor) commented Aug 27, 2018

We'd like to train a high quality model for the code search example.

I took an initial stab at this.
My parameters are here: https://github.com/jlewi/examples/blob/7346d6f1605eaf3e9251e3eec7b1802f5683e637/code_search/kubeflow/components/params.libsonnet#L33

  • I ran for ~300K steps
  • I used 1 K80 GPU
  • Eval steps = 10000
  • Used the transformer_tiny hyperparameter set.

Results from TensorBoard are below

Accuracy and eval loss appear to flatten out after about 150K steps.

@activatedgeek Any suggestions about what experiments to run?

[TensorBoard charts: loss_eval, accuracy, loss]

@activatedgeek (Contributor) commented Aug 28, 2018 via email

@jlewi (Contributor, Author) commented Aug 28, 2018

Thanks.

@cwbeitel Any additional ideas?

@cwbeitel (Contributor)

Sure. Just so I'm clear (correct me if this is wrong): the idea here is to learn a mapping between a doc string and the code it corresponds to, then perform search by mapping a new query into embedding space and returning code snippets whose embeddings lie in a close neighborhood of the query? So your loss above corresponds to the first stage, where we're learning a mapping between doc strings and functions? Are you seeing poor model performance when attempting to re-purpose that embedding for search, or poor performance on the original task as well? Can you provide some example code that is produced for some queries that are and are not in your dataset?

Also what do these orange and blue lines correspond to? Training and eval? If that's your training loss curve then something is off with the problem setup. What loss function are we using?

You might try constructing a much simpler toy problem that is similar to this one to do some basic debugging. For example only work with doc string / code pairs that are really short (both short doc string and small amounts of code). Then you can progressively increase the length considered with training on shorter snippets serving as pre-training for longer snippets. You can imagine the problem of generating long blocks of code is kind of a challenge but it might be easier if you were first able to generate short ones.
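For instance, a rough sketch of what I mean by filtering to short pairs (illustrative only; the token caps and the pairs variable are placeholders, not anything in the example's actual code):

def filter_short_pairs(pairs, max_doc_tokens=30, max_code_tokens=100):
  """Keep only (docstring, code) string pairs below the given token caps."""
  return [(d, c) for d, c in pairs
          if len(d.split()) <= max_doc_tokens
          and len(c.split()) <= max_code_tokens]

You could then regenerate the toy dataset from the filtered pairs only, and relax the caps as training stabilizes.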

@jlewi (Contributor, Author) commented Aug 29, 2018

Thanks @cwbeitel. The network output is the similarity (cosine distance?) between a doc string and code. We have positive examples (where we want the distance to be small) and negative examples (where we want the distance to be large).

Blue line: eval (according to the label in TensorBoard).
Red line: unlabeled (I assume it's training).

I don't think we've gotten to the point of evaluating qualitatively how well the model is performing on the search task.

I think at this point we simply want metrics that can be used to detect convergence during training and for hyperparameter tuning.

@activatedgeek regarding your first point. Would it be better to use a classification loss metric (e.g. +1, -1) or a regression loss?
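To make the question concrete, here is a rough sketch of the two options I have in mind (purely illustrative; doc_emb, code_emb and labels are placeholders, not the loss currently in the example):

# Illustrative only: two ways to turn cosine similarity into a training loss,
# given embedding batches and labels in {+1, -1} for positive/negative pairs.
import tensorflow as tf

def cosine_similarity(doc_emb, code_emb):
  doc_emb = tf.nn.l2_normalize(doc_emb, axis=-1)
  code_emb = tf.nn.l2_normalize(code_emb, axis=-1)
  return tf.reduce_sum(doc_emb * code_emb, axis=-1)  # values in [-1, 1]

def classification_loss(doc_emb, code_emb, labels):
  # Treat (similarity + 1) / 2 as the probability of a positive pair.
  probs = (cosine_similarity(doc_emb, code_emb) + 1.0) / 2.0
  targets = (labels + 1.0) / 2.0  # map {-1, +1} labels to {0, 1}
  return tf.losses.log_loss(labels=targets, predictions=probs)

def regression_loss(doc_emb, code_emb, labels):
  # Push the similarity directly toward the +1 / -1 label.
  return tf.losses.mean_squared_error(
      labels=labels, predictions=cosine_similarity(doc_emb, code_emb))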

/cc @hamelsmu @activatedgeek

@cwbeitel (Contributor)

Ah that's great, reminds me of the paper from Arandjelović and Zisserman (2017) that trained a model to predict the correspondence of audio and visuals in a corpus of videos.

Looks like the batch size for transformer_tiny, inherited from transformer_base_v1, is 4096, so it's probably not too small, but I'd wonder whether your examples are being shuffled properly. If they're not (or your batch size were too small) you might see instability in the loss like you're showing above, as a result of training batches not being representative of the full distribution.

@activatedgeek (Contributor)

@jlewi The loss metric might need improvement, but I think we can get mileage without making any changes to it. I suspect the issue is that the "target" (from the "features" dict) in our case is a continuous high-dimensional vector. When compared against the model output (during eval), it leads to almost zero accuracy, because using the accuracy metric on a continuous domain doesn't make sense.

@cwbeitel The data shuffling is done by T2T itself but that could certainly be one place to play with.

@jlewi (Contributor, Author) commented Aug 30, 2018

@activatedgeek Thanks. Can you suggest:

  1. what experiments we should run to train a model?
  2. what metrics we should look at to pick the best model?
  3. suggestions for qualitatively evaluating the final product/model?

I tried using hparams transformer_base_single_gpu
https://github.com/jlewi/examples/blob/jlewi_train/code_search/kubeflow/components/params.libsonnet#L47

Here's a graph of the loss. Eval loss seems to flatten out after only 4K steps.

[TensorBoard chart: loss]

My job failed after ~120K steps and I'm not quite sure why. There was no error in stdout/stderr of the process so I expect it exited with some non-zero exit code for some reason. kubeflow/training-operator#811 will hopefully provide better logging.

@cwbeitel (Contributor)

Random thought: you could restrict your training to a single language, or a subset of queries for that language, to make the learning problem simpler, either for debugging or toward a search approach that ensembles many specialized models.

@activatedgeek (Contributor)

@jlewi Sorry about the delay. Moving and a lot of first week formalities.

I am trying to fix the loss function computation. Once I see new graphs with that, I'll revert back.

@cwbeitel That is a great point. I am trying to overfit the model to a very small dataset to make sure the training is working as intended.

@jlewi (Contributor, Author) commented Sep 18, 2018

I am trying to fix the loss function computation. Once I see new graphs with that, I'll revert back.

Typo? Did you mean you will create a PR with the new loss function?

@texasmichelle added the area/example/code_search label on Sep 24, 2018
@cwbeitel (Contributor)

@activatedgeek It looks like the model Hamel trained in Part 4 of his series on code search (https://github.com/hamelsmu/code_search/blob/master/notebooks/4%20-%20Train%20Model%20To%20Map%20Code%20Embeddings%20to%20Language%20Embeddings.ipynb) started with a pre-trained language model on doc strings (described in Part 3). Do you think doing something similar here (as well as perhaps replicating his choice of model) would be helpful?

@activatedgeek (Contributor)

@cwbeitel We did consider that at the start, but decided in favor of training an end-to-end, task-specific model instead of a two-step approach: first a language model, then fine-tuning for the task.

@cwbeitel (Contributor)

@activatedgeek Because that makes continuous training more straightforward or something else? Not to knock the approach, it seems like it should work.

@activatedgeek (Contributor)

@cwbeitel Certainly; mainly to make it an end-to-end architecture with fewer moving parts to deal with.

@activatedgeek (Contributor)

I just made some new changes in #233, but at the end of training it crashes with dimension errors.

@hamelsmu (Member)

@inc0 maybe you can take a look today after we talk.

@cwbeitel (Contributor)

Hmm sorry to hear that, can you roll back to what was working and see which of the additions in the current version cause that problem?

Also debugging interactively with tf.Eager (e.g. https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/hello_t2t.ipynb) or some local tests (e.g. https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/image_transformer_2d_test.py) could be helpful.

@activatedgeek (Contributor)

@cwbeitel The code on master works; it's just that during inference it always ends up using one of the networks. I wanted to fix that by adding a TensorFlow conditional, but that broke training.

@cwbeitel (Contributor)

Maybe it would be helpful to have a test with tiny hparams that runs training and inference in series locally. In my opinion tf.Eager makes this more straightforward; you could just fork this one: https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/allen_brain_test.py#L157

@activatedgeek (Contributor)

@cwbeitel Do you know of an easy way to convert my non-eager T2T code to eager mode? That would be a great help for debugging.

@cwbeitel (Contributor)

The model will work with both. In that test, model = image_transformer_2d.Img2imgTransformer(...) instantiates a typical model built in the same way as yours, a loss function is decorated to work with eager, and the gradients it produces are applied directly, with tfe.enable_eager_execution() having been called at startup.
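Roughly, the pattern looks something like this (a sketch, not the actual test; the model and toy_batches arguments are stand-ins you would supply):

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()  # must run before building any graph ops

def train_briefly(model, toy_batches):
  """Run a few eager training steps so shape errors surface immediately."""
  optimizer = tf.train.AdamOptimizer()

  def loss_fn(features):
    # Calling a T2T model on a features dict returns (logits, losses_dict).
    _, losses = model(features)
    return losses["training"]

  loss_and_grads = tfe.implicit_value_and_gradients(loss_fn)
  for features in toy_batches:  # a handful of {"inputs", "targets"} dicts
    loss, gv = loss_and_grads(features)
    optimizer.apply_gradients(gv)
    print(loss.numpy())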

@activatedgeek (Contributor) commented Sep 28, 2018

@cwbeitel Do you mind skimming through the two files (function_docstring.py and similarity_transformer.py) in #233? I'm kind of lost in the T2T abstractions here.

The error at the end of training is

InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[6,128] labels_size=[84,128]
	 [[{{node cs_similarity_transformer/parallel_0_5/cs_similarity_transformer/cs_similarity_transformer/padded_cross_entropy/smoothing_cross_entropy/softmax_cross_entropy_with_logits}} = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](cs_similarity_transformer/parallel_0_5/cs_similarity_transformer/cs_similarity_transformer/body/cond/Merge, cs_similarity_transformer/parallel_0_5/cs_similarity_transformer/cs_similarity_transformer/padded_cross_entropy/smoothing_cross_entropy/one_hot)]]

When I put --eval_steps=1, this error goes away.

@inc0 commented Sep 28, 2018

What I've noticed (yesterday evening, didn't get too much into it) is that the embedding layers in T2T are much simpler than what Hamel did (or so it looks to me).
In particular, if I understand correctly, this:
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py#L1081 seems to correspond to this:
https://github.com/hamelsmu/code_search/blob/master/notebooks/seq2seq_utils.py#L48-L80
So we're replacing encoder/decoder + embeddings with just embeddings tested for English NLP. I'm not sure how much that affects accuracy. Another thing is that the same function in T2T is used for language modeling, whereas in Hamel's example it's a dedicated FastAI model (which I haven't dug into yet). After that we're simply calculating distances with T2T. In effect the T2T model seems quite a bit shallower than what Hamel initially created; maybe that's affecting accuracy.

Let me dig a little more into this. T2T isn't exactly the best-documented project, so I'm shooting blind and missing stuff :)

@cwbeitel (Contributor)

OK, just to confirm regarding the shuffling question I raised a while ago: it looks like for Text2TextProblem this is handled for you, but you could also confirm that by iterating over examples in a notebook and Googling the function docstrings of sequential examples.

Also more generally it would be good to just look at what is ending up in "inputs", "targets", and "emb_code" just to sanity check.

Looking at similarity_transformer.py, and based on another discussion, what comes to mind is whether the variables for the two encoder branches are being initialized in the right way. I don't know whether, in tf.variable_scope, the initializer is inherited from the parent scope when it isn't specified, and whether it just uses zero when none is specified anywhere. You could try dropping a specific initializer into those scopes, or maybe I'm misunderstanding how that works 😜

The error you're getting looks like you're trying to compute the cross entropy of a prediction with a different shape from a ground-truth example. Can you roll back to a state of the code where this didn't happen and step forward to see what change causes this? Doing this locally against a tf.Eager test that tests both training and inference should save you a lot of time relative to launching new runs on Kubeflow if that's what you're doing currently.

Also what is your reasoning around this tf.cond and the use of 'embed_code'? It looks like it's always zero or am I missing something?

Also FYI you can directly examine the values for the gradients of each layer at each iteration by examining gv in loss, gv = loss_fn(example), in the context of the tf.Eager thing I referenced above.
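For example, a minimal sketch of what I mean by an explicit initializer (doc_features and code_features here are placeholders, not the actual variables in similarity_transformer.py):

import tensorflow as tf

def encode_branches(doc_features, code_features, hidden_size=128):
  """Toy encoders that give each branch its own explicit initializer
  rather than relying on whatever the parent variable_scope provides."""
  with tf.variable_scope("docstring_encoder",
                         initializer=tf.glorot_uniform_initializer()):
    doc_embedding = tf.layers.dense(doc_features, hidden_size, name="proj")
  with tf.variable_scope("code_encoder",
                         initializer=tf.glorot_uniform_initializer()):
    code_embedding = tf.layers.dense(code_features, hidden_size, name="proj")
  return doc_embedding, code_embedding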

@cwbeitel (Contributor) commented Sep 28, 2018

And you said that error happens at the end of training, not during? Then it's probably a problem with the eval phase specifically. If you turn off eval by using schedule=train then you shouldn't see the error; that would help identify whether that's where the problem is arising.

@cwbeitel (Contributor)

And you'll probably want a way to sanity-check the cosine distance code in body, as @jlewi was suggesting.

@jlewi (Contributor, Author) commented Sep 28, 2018

Filed #254 about measuring the quality of the model, since this issue was getting long.

I think once we fix #254 we could do a quick sanity check of comparing a trained model to a randomly initialized model.

@cwbeitel (Contributor)

@activatedgeek Maybe this notebook will make debugging a little easier!

@jlewi (Contributor, Author) commented Oct 31, 2018

Thanks @cwbeitel

@cwbeitel @activatedgeek @hamelsmu It looks like the vocabulary in
gs://kubeflow-examples/t2t-code-search/20180802/data/vocab.github_function_docstring.8192.subwords
has 8164 words. Does that seem reasonable?

@jlewi (Contributor, Author) commented Oct 31, 2018

Just noticed @cwbeitel's comment #259 (comment) in the other issue noting that the vocab size seems low.

@cwbeitel (Contributor)

You could get a sense of it by taking a sample of raw doc strings or code and computing edit_distance(decode(encode(string)), string), where decode and encode are coders that use that vocab; maybe more or less this:

# Get the encoders from the problem; problem_object, data_dir, input_str
# and edit_distance are assumed to already be defined.
encoders = problem_object.feature_encoders(data_dir)

# Round-trip the string through the subword vocab and measure what is lost.
rt = encoders["inputs"].decode(encoders["inputs"].encode(input_str))

d = edit_distance(rt, input_str)

@cwbeitel (Contributor)

This results from the setting of approx_vocab_size (where 2^13 ~ 8k), e.g.

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class GithubFunctionDocstring(text_problems.Text2TextProblem):

  @property
  def approx_vocab_size(self):
    return 2**13
...

Numerous other language-oriented problems in t2t use vocab sizes of 8k or 32k, see this... But perhaps a larger vocab is needed for code than for prose, given the ways words combine with symbols; e.g. here's a sample from my vocab:

'<pad>'
'<EOS>'
'\u_'
'self_'
'_'
's_'
'if_'
'def_'
'the_'
'return_'
'0_'
'in_'
'1_'
'for_'
'a_'
' \u_'
'name_'
'to_'
' ._'
'get_'
'is_'
'n_'
'None_'
...
'figs'
'exi'
'ency'
'contrib'
'chat_'
'bject'
'area'
'arc_'
'anima'
'agg_'
'addCa'
'Windows_'
'Mus'
'Left'
'Forma'
'Channel'
'0001_'
'{'
'}'
'|'
'~'
'?'
'&'
'.'
'<'
'$'
'@'
'^H'
'^G'
'ü'
'ó'
'³'
'^P'
'^D'
'^B'
'^A'

Also it's worth noting that the language translation problems I've seen in the t2t repo have vocabs around 8-32k per language and the code and doc strings we're working with are both drawn from a variety of languages, whether spoken or programmatic.

The easy route is to just try increasing the vocab and see if it improves the quality. Or we might see what happens when we filter the dataset, e.g. to only English Python (or perhaps further, to functions with nicely formatted doc strings, e.g. including an Args: section).
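A rough sketch of the first option, just overriding the vocab size on a registered subclass (illustrative only; it assumes GithubFunctionDocstring from the example's function_docstring.py is in scope, and the data would need to be regenerated since the vocab file is built at datagen time):

from tensor2tensor.utils import registry

# Illustrative only: bump the subword vocab from ~8k to ~32k.
@registry.register_problem
class GithubFunctionDocstringVocab32k(GithubFunctionDocstring):

  @property
  def approx_vocab_size(self):
    return 2**15  # ~32k subwords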

@jlewi (Contributor, Author) commented Nov 2, 2018

Thanks @cwbeitel

jlewi added a commit to jlewi/examples that referenced this issue Nov 6, 2018
* Check in a ksonnet component to train a model using the tinyparam
  hyperparameter set.

* We want to check in the ksonnet component to facilitate reproducibility.
  We need a better way to separate the particular experiments used for
  the CS search demo effort from the jobs we want customers to try.

Related to kubeflow#239 train a high quality model.
jlewi added a commit to jlewi/examples that referenced this issue Nov 8, 2018
jlewi added a commit to jlewi/examples that referenced this issue Nov 8, 2018
k8s-ci-robot pushed a commit that referenced this issue Nov 9, 2018

Make distributed training work; Create some components to train models (#317)

* Check in a ksonnet component to train a model using the tinyparam
  hyperparameter set.

* We want to check in the ksonnet component to facilitate reproducibility.
  We need a better way to separate the particular experiments used for
  the CS search demo effort from the jobs we want customers to try.

   Related to #239 train a high quality model.

* Check in the cs_demo ks environment; this was being ignored as a result of
  .gitignore

Make distributed training work #208

* We got distributed synchronous training to work with Tensor2Tensor 1.10
* This required creating a simple python script to start the TF standard
  server and run it as a sidecar of the chief pod and as the main container
  for the workers/ps.

* Rename the model to kf_similarity_transformer to be consistent with other
  code.
  * We don't want to use the default name because we don't want to inadvertently
  use the SimilarityTransformer model defined in the Tensor2Tensor project.

* replace build.sh by a Makefile. Makes it easier to add variant commands
  * Use the GitHash not a random id as the tag.
  * Add a label to the docker image to indicate the git version.

* Put the Makefile at the top of the code_search tree; makes it easier
  to pull all the different sources for the Docker images.

* Add an option to build the Docker images with GCB; this is more efficient
  when you are on a poor network connection because you don't have to download
  images locally.
    * Use jsonnet to define and parameterize the GCB workflow.

* Build separate docker images for running Dataflow and for running the trainer.
  This helps avoid versioning conflicts caused by different versions of protobuf
  pulled in by the TF version used as the base image vs. the version used
  with apache beam.

      Fix #310 - Training fails with GPUs.

* Changes to support distributed training.
* Simplify t2t-entrypoint.sh so that all we do is parse TF_CONFIG
  and pass requisite config information as command line arguments;
  everything else can be set in the K8s spec.

* Upgrade to T2T 1.10.

* Add ksonnet prototypes for tensorboard.
yixinshi pushed a commit to yixinshi/examples that referenced this issue Nov 30, 2018
Svendegroote91 pushed a commit to Svendegroote91/examples that referenced this issue Dec 6, 2018
Svendegroote91 pushed a commit to Svendegroote91/examples that referenced this issue Apr 1, 2019
stale bot commented Jun 27, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot closed this as completed Jul 4, 2019