also having issues with the demo #14
Does this error occur when you run the demo notebook? Did you try restarting the notebook and running all cells in order? It sounds like for some reason the same variable is being created twice when you run it.
I believe this is an error that occurs when you are running with a newer TensorFlow version. Downgrading should fix it:

pip uninstall tensorflow  # or: pip uninstall tensorflow-gpu
pip install tensorflow==0.12  # or: pip install tensorflow-gpu==0.12
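Before and after the downgrade, it can help to confirm which TensorFlow build is actually installed. A small sketch (my own snippet, not part of this repo; uses only the standard library on a modern Python 3 interpreter, since `importlib.metadata` needs 3.8+) that reports the installed package without importing TensorFlow itself, which can crash on a broken install:

```python
# Report the installed tensorflow / tensorflow-gpu version, if any,
# without importing TensorFlow itself.
from importlib import metadata

def installed_tf_version():
    for pkg in ("tensorflow", "tensorflow-gpu"):
        try:
            return pkg, metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue
    return None, None  # neither package is installed

pkg, ver = installed_tf_version()
print(pkg, ver)
```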
Yes, the issue is the version of TensorFlow. It would be great if the project pinned the TensorFlow version it requires. Failing this, you can easily port it to TensorFlow 1.1.0 at least (I didn't try my modification on the new 1.2).
@tnarik @brannondorsey @WeiHan3 if you have succeeded in training it now, can you please share the model?
I am running it on Colab and facing this error:
This is what I get on step 2. I tried working with the fixes that people suggested on the other posts, but have still not had any luck. Any suggestions would be greatly appreciated!
ValueError  Traceback (most recent call last)
in ()
6 model = Model(num_time_samples=num_time_samples,
7 num_channels=num_channels,
----> 8 gpu_fraction=gpu_fraction)
9
10 Audio(inputs.reshape(inputs.shape[1]), rate=44100)
/root/shared/fast-wavenet-master/wavenet/models.py in __init__(self, num_time_samples, num_channels, num_classes, num_blocks, num_layers, num_hidden, gpu_fraction)
34 for i in range(num_layers):
35 rate = 2**i
---> 36 name = 'b{}-l{}'.format(b, i)
37 h = dilated_conv1d(h, num_hidden, rate=rate, name=name)
38 hs.append(h)
/root/shared/fast-wavenet-master/wavenet/layers.py in dilated_conv1d(inputs, out_channels, filter_width, rate, padding, name, gain, activation)
142 padding=padding,
143 gain=gain,
--> 144 activation=activation)
145 , conv_out_width, _ = outputs.get_shape().as_list()
146 new_width = conv_out_width * rate
/root/shared/fast-wavenet-master/wavenet/layers.py in conv1d(inputs, out_channels, filter_width, stride, padding, data_format, gain, activation, bias)
89 w = tf.get_variable(name='w',
90 shape=(filter_width, in_channels, out_channels),
---> 91 initializer=w_init)
92
93 outputs = tf.nn.conv1d(inputs,
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, custom_getter)
986 collections=collections, caching_device=caching_device,
987 partitioner=partitioner, validate_shape=validate_shape,
--> 988 custom_getter=custom_getter)
989 get_variable_or_local_docstring = (
990 """%s
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, custom_getter)
888 collections=collections, caching_device=caching_device,
889 partitioner=partitioner, validate_shape=validate_shape,
--> 890 custom_getter=custom_getter)
891
892 def _get_partitioned_variable(self,
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, custom_getter)
346 reuse=reuse, trainable=trainable, collections=collections,
347 caching_device=caching_device, partitioner=partitioner,
--> 348 validate_shape=validate_shape)
349
350 def _get_partitioned_variable(
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape)
331 initializer=initializer, regularizer=regularizer, reuse=reuse,
332 trainable=trainable, collections=collections,
--> 333 caching_device=caching_device, validate_shape=validate_shape)
334
335 if custom_getter is not None:
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.pyc in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape)
637 " Did you mean to set reuse=True in VarScope? "
638 "Originally defined at:\n\n%s" % (
--> 639 name, "".join(traceback.format_list(tb))))
640 found_var = self._vars[name]
641 if not shape.is_compatible_with(found_var.get_shape()):
ValueError: Variable b0-l0/w already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "wavenet/layers.py", line 91, in conv1d
initializer=w_init)
File "wavenet/layers.py", line 144, in dilated_conv1d
activation=activation)
File "wavenet/models.py", line 36, in __init__
name = 'b{}-l{}'.format(b, i)
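The error message itself points at the two standard ways out on a TF 1.x-era setup: rebuild the graph from scratch (restart the kernel, or call tf.reset_default_graph() before constructing Model again), or opt into variable reuse. The reuse mechanic can be illustrated in plain Python; this is a toy stand-in I wrote to show the idea, not TensorFlow's actual implementation:

```python
# Plain-Python sketch of tf.get_variable's behavior under a variable
# scope: by default a second request for an existing name raises, but
# with reuse enabled it hands back the stored variable instead.

class VariableStore:
    def __init__(self):
        self._vars = {}

    def get_variable(self, name, value, reuse=False):
        if name in self._vars:
            if reuse:
                return self._vars[name]  # reuse the existing variable
            raise ValueError(
                "Variable %s already exists, disallowed. "
                "Did you mean to set reuse=True in VarScope?" % name)
        self._vars[name] = value
        return value

store = VariableStore()
w1 = store.get_variable("b0-l0/w", 0.5)               # first build: created
w2 = store.get_variable("b0-l0/w", 0.9, reuse=True)   # re-run: reused, not recreated
print(w1 == w2)  # → True
```

Re-running the model-building cell in the same kernel corresponds to the non-reuse path, which is why the notebook raises exactly this ValueError on the second run.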