- Milestone: 0.4 goal --> Backlog
The tensors in the tensor library and NumPy arrays are not really compatible with each other: the former use reference counting and copy-on-write, while the latter require new arrays to be instantiated explicitly.
The buffer protocol helps somewhat here, since it allows each tensor to be constructed from a NumPy array and vice versa, but it forces an annoying explicit conversion step on the user. If the conversion step is forgotten, things sometimes work and sometimes fail with strange errors. It also requires exporting the tensor classes, which are awkward in the Python interface. Finally, it is brittle: if the user wraps a NumPy array around the buffer without copying the data (possible, though unlikely), things can go completely wrong once the tensor reaches the end of its life and releases its data.
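The aliasing half of this hazard can be reproduced with plain NumPy. The sketch below uses a `bytearray` as a stand-in for a tensor's internal buffer (the tensor classes themselves are not available here): a zero-copy wrap via `np.frombuffer` silently shares storage with the owner, while an explicit copy does not. In the real library the shared case becomes a dangling pointer once the tensor frees its data.

```python
import numpy as np

# A bytearray plays the role of storage owned by a (hypothetical) CTensor.
buf = bytearray(8 * 4)                        # four float64 slots, zero-initialized
view = np.frombuffer(buf, dtype=np.float64)   # zero-copy wrap: shares the buffer
view_copy = np.array(view)                    # explicit copy: independent data

# The "tensor" mutates its own storage...
buf[0:8] = np.float64(1.5).tobytes()

assert view[0] == 1.5        # ...the zero-copy wrap observes the change
assert view_copy[0] == 0.0   # ...the copy is unaffected
assert not view.flags.owndata
```

Here NumPy at least keeps the `bytearray` alive through the array's base reference; a C++ buffer released by the tensor's destructor offers no such protection, which is why the shared-buffer case is dangerous.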
A better separation would be to keep the CTensor/RTensor classes strictly inside the C++ part and always use NumPy arrays as the representation on the Python side. This requires a custom type caster that transparently mediates between the two worlds, see https://pybind11.readthedocs.io/en/stable/advanced/cast/custom.html.
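From the Python side, the effect of such a caster can be emulated in pure Python as a sketch (the function name and the copy-in/copy-out policy are assumptions, not the library's API): every argument is normalized to a NumPy array on entry, standing in for the Python-to-CTensor conversion, and a fresh array is returned, standing in for the CTensor-to-NumPy conversion. The user never sees a tensor class.

```python
import numpy as np

def apply_potential(grid):
    """Emulates a C++ function bound with a custom caster (hypothetical name).

    The caster would accept anything array-like, copy it into a CTensor,
    run the C++ kernel, and hand back a fresh NumPy array that owns its data.
    """
    arr = np.asarray(grid, dtype=np.complex128)  # stands in for: Python -> CTensor
    result = arr * 2.0                           # stands in for the C++ kernel
    return np.array(result)                      # stands in for: CTensor -> ndarray (copy)

out = apply_potential([1.0, 2.0])  # plain lists and arrays both work transparently
```

The point of the copy-out step is that the returned array owns its data, so its lifetime is decoupled from any internal tensor.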
Performance is not an issue, since critical sections would not normally live in Python code. The casting is only needed during setup (when specifying a potential, for example) or after a relatively large time step, when doing something with the resulting wave packet such as plotting it. A copy is perfectly fine then; the propagation is usually the expensive part.