
Allow various ParameterExpressions for qiskit to tc conversion #207

Merged

Conversation

king-p3nguin
Contributor

  • The current implementation for converting a Qiskit circuit to a TensorCircuit circuit does not allow the Qiskit circuit to contain gates with multiple parameters or parameter expressions that use NumPy ufuncs. sympy.lambdify can fix this (see the sketch after this list).
  • qiskit.circuit.bit.Bit.index is deprecated; QuantumCircuit.find_bit is used instead.
  • test_qiskit2tc_parameterized does not seem to work because QubitConverter has been deprecated, so QubitConverter was removed from the test.
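A minimal sketch of the lambdify idea (assumed symbol names, not the PR's exact code): the sympy form of a multi-parameter expression, including ufunc-style operations such as exp and sin, is turned into an ordinary numeric function; bracketed ParameterVector names like theta[0] are first rewritten to valid identifiers such as theta_0.

import sympy

# Symbols renamed from e.g. "theta[0]" -> "theta_0" so lambdify receives valid identifiers
theta_0, theta_1, theta_2 = sympy.symbols("theta_0 theta_1 theta_2")
expr = sympy.exp(sympy.sin(theta_0)) + theta_1 * theta_2

f = sympy.lambdify([theta_0, theta_1, theta_2], expr, modules="numpy")
print(f(0.1, 0.2, 0.3))  # plain float, ready to be used as a gate parameter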

I noticed that qiskit-nature is commented out in requirements-extra.txt. Is it okay that I uncommented it?


codecov bot commented Apr 6, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 75.68%. Comparing base (b042a74) to head (0565d5e).

❗ Current head 0565d5e differs from pull request most recent head 77721d4. Consider uploading reports for the commit 77721d4 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #207      +/-   ##
==========================================
+ Coverage   75.52%   75.68%   +0.15%     
==========================================
  Files          67       67              
  Lines       10804    10801       -3     
==========================================
+ Hits         8160     8175      +15     
+ Misses       2644     2626      -18     


ansatz4.ry(ansatz4_param[0] * ansatz4_param[1] + ansatz4_param[2], 0)
ansatz4.rz(
    np.exp(np.sin(ansatz4_param[0]))
    + np.abs(ansatz4_param[1]) / np.arctan(ansatz4_param[2]),
Contributor


ParameterVector doesn't support np.abs in early Qiskit versions, though I haven't determined the exact breaking version (at least 0.23 fails).
For now, I think it is okay to keep abs in the test; this comment is just for the record.

sympy_symbols = [
    sympy.Symbol(str(symbol).replace("[", "_").replace("]", ""))
    for symbol in sympy_symbols
]
lam_f = sympy.lambdify(sympy_symbols, expr, modules=backend.name)
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

For the modules argument, there is no pytorch module support in lambdify, right? I think we can stick to the numpy module. As I have tested, modules="numpy" can support binding_params in the tensor format of each backend.
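A rough check of this claim (assumed minimal setup, not the PR's test code): for purely algebraic expressions the function generated with modules="numpy" contains no numpy calls, so a backend tensor such as a torch tensor passes through unchanged.

import sympy
import torch

a, b = sympy.symbols("a b")
f = sympy.lambdify([a, b], a * b + a, modules="numpy")
print(f(torch.tensor(0.5), torch.tensor(2.0)))  # tensor(1.5000), still a torch tensor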

Contributor

@refraction-ray refraction-ray left a comment


Thanks for the contribution, it is indeed a very nice solution for parameter expression translation. To me, the only change required is to use modules="numpy" when calling lambdify.

@refraction-ray
Contributor

I noticed that qiskit-nature is commented out in requirements-extra.txt. Is it okay that I uncommented it?

As long as the CI works well :) I commented out the package due to a CI incompatibility, which might have been resolved by now.

@king-p3nguin king-p3nguin changed the title from "Allow various ParameterExpression's for qiskit to tc conversion" to "Allow various ParameterExpressions for qiskit to tc conversion" Apr 7, 2024
@king-p3nguin
Contributor Author

If I only use algebraic operations like +-*/ with modules="numpy", there is no problem, but if I use NumPy ufuncs, the pytorch, jax, and tensorflow backends all seem to fail, unfortunately.
I changed the code so that using the pytorch backend with algebraic operations only still works.
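A usage sketch of the path being exercised here (toy circuit with assumed values; the from_qiskit signature matches the traceback further down): binding_params supplies backend tensors for the Qiskit parameters, and an algebraic parameter expression is evaluated through the lambdified function.

import numpy as np
import tensorcircuit as tc
from qiskit.circuit import QuantumCircuit, ParameterVector

tc.set_backend("pytorch")

theta = ParameterVector("theta", 2)
qc = QuantumCircuit(1)
qc.ry(theta[0] * theta[1] + theta[0], 0)  # algebraic ParameterExpression only

params = tc.backend.convert_to_tensor(np.array([0.1, 0.2]))
c = tc.Circuit.from_qiskit(qc, binding_params=params)
print(c.state())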

@refraction-ray
Contributor

If I only use algebraic operations like +-*/ with modules="numpy", there is no problem, but if I use NumPy ufuncs, the pytorch, jax, and tensorflow backends all seem to fail, unfortunately. I changed the code so that using the pytorch backend with algebraic operations only still works.

Yes, I just realized another scenario that prefers modules=backend.name, i.e. when the translation function is used inside a jitted function. However, I think we can directly fall back from the torch backend to numpy even for non-algebraic operations.

At least for the test case in the newly added tests, I tried the following (the numpy module works for torch tensors even with non-algebraic ops):

[Screenshot, 2024-04-07: torch tensor passed through the lambdified numpy function]
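A rough reconstruction of that check (assumed code, not the actual screenshot): numpy functions accept a plain torch tensor via its __array__ conversion, so even non-algebraic ops go through as long as the tensor does not require grad.

import sympy
import torch

x = sympy.Symbol("x")
f = sympy.lambdify([x], sympy.exp(sympy.sin(x)), modules="numpy")
print(f(torch.tensor(0.3)))  # fine for a tensor that does not require grad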

@king-p3nguin
Contributor Author

It seems like using modules="numpy" with the pytorch backend has a problem when used with grad, but modules="math" seems to work.

{
	"name": "RuntimeError",
	"message": "Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.",
	"stack": "---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[1], line 52
     50 params = tc.backend.convert_to_tensor(params)
     51 print(\"new :\", params)
---> 52 grad = tc.backend.grad(cost_fn)(
     53     params,
     54 )
     55 print(np.isnan(grad))
     56 assert tc.backend.sum(np.isnan(grad)) == 0

File ~/tensorcircuit/tensorcircuit/backends/pytorch_backend.py:504, in PyTorchBackend.grad.<locals>.wrapper(*args, **kws)
    503 def wrapper(*args: Any, **kws: Any) -> Any:
--> 504     y, gr = self.value_and_grad(f, argnums, has_aux)(*args, **kws)
    505     if has_aux:
    506         return gr, y[1:]

File ~/tensorcircuit/tensorcircuit/backends/pytorch_backend.py:541, in PyTorchBackend.value_and_grad.<locals>.wrapper(*args, **kws)
    539 def wrapper(*args: Any, **kws: Any) -> Any:
    540     gavf = torchlib.func.grad_and_value(f, argnums=argnums, has_aux=has_aux)
--> 541     g, v = gavf(*args, **kws)
    542     return v, g

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_functorch/vmap.py:44, in doesnt_support_saved_tensors_hooks.<locals>.fn(*args, **kwargs)
     41 @functools.wraps(f)
     42 def fn(*args, **kwargs):
     43     with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 44         return f(*args, **kwargs)

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py:1256, in grad_and_value.<locals>.wrapper(*args, **kwargs)
   1253 diff_args = _slice_argnums(args, argnums, as_tuple=False)
   1254 tree_map_(partial(_create_differentiable, level=level), diff_args)
-> 1256 output = func(*args, **kwargs)
   1257 if has_aux:
   1258     if not (isinstance(output, tuple) and len(output) == 2):

Cell In[1], line 44, in cost_fn(params)
     41 def cost_fn(params):
     42     return tc.backend.real(
     43         tc.backend.sum(
---> 44             get_unitary(params),
     45         ),
     46     )

Cell In[1], line 29, in get_unitary(params)
     27 @tc.backend.jit
     28 def get_unitary(params):
---> 29     return tc.Circuit.from_qiskit(
     30         ansatz, inputs=np.eye(2**n), binding_params=params
     31     ).state()

File ~/tensorcircuit/tensorcircuit/abstractcircuit.py:889, in AbstractCircuit.from_qiskit(cls, qc, n, inputs, circuit_params, binding_params)
    886 if n is None:
    887     n = qc.num_qubits
--> 889 return qiskit2tc(  # type: ignore
    890     qc,
    891     n,
    892     inputs,
    893     circuit_constructor=cls,
    894     circuit_params=circuit_params,
    895     binding_params=binding_params,
    896 )

File ~/tensorcircuit/tensorcircuit/translation.py:474, in qiskit2tc(qc, n, inputs, is_dm, circuit_constructor, circuit_params, binding_params)
    472 idx = [qc.find_bit(qb).index for qb in gate_info.qubits]
    473 gate_name = gate_info[0].name
--> 474 parameters = _translate_qiskit_params(gate_info, binding_params)
    475 if gate_name in [
    476     \"h\",
    477     \"x\",
   (...)
    490     \"cz\",
    491 ]:
    492     getattr(tc_circuit, gate_name)(*idx)

File ~/tensorcircuit/tensorcircuit/translation.py:401, in _translate_qiskit_params(gate_info, binding_params)
    395     sympy_symbols = [
    396         sympy.Symbol(str(symbol).replace(\"[\", \"_\").replace(\"]\", \"\"))
    397         for symbol in sympy_symbols
    398     ]
    399     lam_f = sympy.lambdify(sympy_symbols, expr, modules=lambdify_module_name)
    400     parameters.append(
--> 401         lam_f(*[binding_params[param.index] for param in parameter_list])
    402     )
    403 else:
    404     # numbers, arrays, etc.
    405     parameters.append(p)

File <lambdifygenerated-2>:2, in _lambdifygenerated(φ_0, φ_1, φ_2)
      1 def _lambdifygenerated(φ_0, φ_1, φ_2):
----> 2     return exp(sin(φ_0)) + abs(φ_1)/arctan(φ_2)

File ~/.local/share/virtualenvs/tensorcircuit-BMvyhJJt/lib/python3.11/site-packages/torch/_tensor.py:1062, in Tensor.__array__(self, dtype)
   1060     return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
   1061 if dtype is None:
-> 1062     return self.numpy()
   1063 else:
   1064     return self.numpy().astype(dtype, copy=False)

RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead."
}

@refraction-ray
Contributor

It seems like using modules="numpy" with the pytorch backend has a problem when used with grad, but modules="math" seems to work.

Cool, thanks for your careful investigation! The PR now LGTM

@refraction-ray refraction-ray merged commit 77721d4 into tencent-quantum-lab:master Apr 8, 2024
2 checks passed
@refraction-ray
Contributor

@all-contributors please add @king-p3nguin for test, doc

Contributor

@refraction-ray

I've put up a pull request to add @king-p3nguin! 🎉

@king-p3nguin king-p3nguin deleted the parameterexpression branch April 8, 2024 10:38