Pasqal Documentation

API specification

The emu-sv API is based on the specification here. Concretely, the classes are as follows:

Bases: EmulatorBackend

A backend for emulating Pulser sequences using state vectors and sparse matrices. Noisy simulation is supported by solving the Lindblad equation, using effective noise channels or jump operators.

PARAMETER DESCRIPTION
config

Configuration for the SV backend.

TYPE: SVConfig DEFAULT: None

Source code in pulser/backend/abc.py
def __init__(
self,
sequence: pulser.Sequence,
*,
config: EmulationConfig | None = None,
mimic_qpu: bool = False,
) -> None:
"""Initializes the backend."""
super().__init__(sequence, mimic_qpu=mimic_qpu)
self._config = self.validate_config(config or self.default_config)
if (
self._config.prefer_device_noise_model
and self._sequence.device.default_noise_model is not None
and self._sequence.device.default_noise_model.runs is not None
and self._sequence.device.default_noise_model.runs
!= self._config.n_trajectories
):
config = self._config
warnings.warn(
f"'{sequence.device.default_noise_model.runs=}' is being "
f"ignored; '{config.n_trajectories=}' will be used instead.",
stacklevel=2,
)

Emulates the given sequence.

RETURNS DESCRIPTION
Results | list[Results]

the simulation results

Source code in emu_sv/sv_backend.py
def run(self) -> Results | list[Results]:
"""
Emulates the given sequence.
Returns:
the simulation results
"""
assert isinstance(self._config, SVConfig)
pulser_data = PulserData(
sequence=self._sequence, config=self._config, dt=self._config.dt
)
results = []
for sequence_data in pulser_data.get_sequences():
impl = create_impl(sequence_data, self._config)
results.append(impl._run())
return Results.aggregate(results)

Bases: EmulationConfig

The configuration of the emu-sv SVBackend. The kwargs passed to this class are passed on to the base class. See the API for that class for a list of available options.

PARAMETER DESCRIPTION
dt

the timestep size that the solver uses. Note that observables are only calculated if the evaluation_times are divisible by dt.

TYPE: float DEFAULT: 10.0

max_krylov_dim

the maximum size of the Krylov subspace that the Lanczos algorithm builds

TYPE: int DEFAULT: 100

krylov_tolerance

the convergence tolerance used by the Lanczos algorithm

TYPE: float DEFAULT: 1e-10

gpu

the number of GPUs to use during the simulation:

- if gpu = True, use 1 GPU to store the state (causes errors if no GPU is available)
- if gpu = False, use the CPU to run the entire simulation
- if gpu = None (the default), the backend internally chooses 1 GPU

TYPE: bool | None DEFAULT: None

interaction_cutoff

Set interaction coefficients below this value to 0. Potentially improves runtime and memory consumption.

TYPE: float DEFAULT: 0.0

log_level

How much to log. Set to logging.WARN to suppress the timestep info.

TYPE: int DEFAULT: INFO

log_file

If specified, log to this file rather than stdout.

TYPE: Path | None DEFAULT: None

kwargs

arguments that are passed to the base class

TYPE: Any DEFAULT: {}

Examples:

gpu = True
dt = 1.0  # this will impact the runtime
krylov_tolerance = 1e-8  # the simulation will be faster, but less accurate
SVConfig(gpu=gpu, dt=dt, krylov_tolerance=krylov_tolerance,
with_modulation=True) #the last arg is taken from the base class
Source code in emu_sv/sv_config.py
def __init__(
self,
*,
dt: float = 10.0,
max_krylov_dim: int = 100,
krylov_tolerance: float = 1e-10,
gpu: bool | None = None,
interaction_cutoff: float = 0.0,
log_level: int = logging.INFO,
log_file: pathlib.Path | None = None,
**kwargs: Any,
):
super().__init__(
dt=dt,
max_krylov_dim=max_krylov_dim,
gpu=gpu,
krylov_tolerance=krylov_tolerance,
interaction_cutoff=interaction_cutoff,
log_level=log_level,
log_file=log_file,
**kwargs,
)
self.monkeypatch_observables()
logger = init_logging(log_level, log_file)
if (self.noise_model.runs != 1 and self.noise_model.runs is not None) or (
self.noise_model.samples_per_run != 1
and self.noise_model.samples_per_run is not None
):
logger.warning(
"Warning: The runs and samples_per_run "
"values of the NoiseModel are ignored!"
)

Bases: State[complex, Tensor]

Represents a quantum state vector in a computational basis.

This class extends the State class to handle state vectors, providing various utilities for initialization, normalization, manipulation, and measurement. The state vector must have a length that is a power of 2, representing 2ⁿ basis states for n qubits.

PARAMETER DESCRIPTION
vector

1D tensor representation of a state vector.

TYPE: Tensor

gpu

store the vector on GPU if True, otherwise on CPU

TYPE: bool DEFAULT: True

eigenstates

sequence of eigenstates used as the basis; only qubit bases are supported.

TYPE: Sequence[Eigenstate] DEFAULT: ('r', 'g')

Source code in emu_sv/state_vector.py
def __init__(
self,
vector: torch.Tensor,
*,
gpu: bool = True,
eigenstates: Sequence[Eigenstate] = ("r", "g"),
):
# NOTE: this accepts also zero vectors.
assert math.log2(
len(vector)
).is_integer(), "The number of elements in the vector should be power of 2"
super().__init__(eigenstates=eigenstates)
device = "cuda" if gpu and DEVICE_COUNT > 0 else "cpu"
self.vector = vector.to(dtype=dtype, device=device)
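The constructor's power-of-2 check above maps a vector length to a qubit count. The sketch below (a hypothetical `validate_state_length` helper, not part of emu_sv) mirrors that assertion in plain Python:

```python
import math

def validate_state_length(n_elements: int) -> int:
    """Return n such that n_elements == 2**n, mirroring the assert above."""
    n = math.log2(n_elements)
    if not n.is_integer():
        raise ValueError("The number of elements in the vector should be a power of 2")
    return int(n)

print(validate_state_length(4))  # a 4-element vector encodes 2 qubits → 2
```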

The number of qudits in the state.

Sum of two state vectors

PARAMETER DESCRIPTION
other

the vector to add to this vector

TYPE: State

RETURNS DESCRIPTION
StateVector

The summed state

Source code in emu_sv/state_vector.py
def __add__(self, other: State) -> StateVector:
"""Sum of two state vectors
Args:
other: the vector to add to this vector
Returns:
The summed state
"""
assert isinstance(other, StateVector), "`Other` state can only be a StateVector"
assert (
self.eigenstates == other.eigenstates
), f"`Other` state has basis {other.eigenstates} != {self.eigenstates}"
return StateVector(
self.vector + other.vector,
gpu=self.vector.is_cuda,
eigenstates=self.eigenstates,
)

Scalar multiplication

PARAMETER DESCRIPTION
scalar

the scalar to multiply with

TYPE: complex

RETURNS DESCRIPTION
StateVector

The scaled state

Source code in emu_sv/state_vector.py
def __rmul__(self, scalar: complex) -> StateVector:
"""Scalar multiplication
Args:
scalar: the scalar to multiply with
Returns:
The scaled state
"""
return StateVector(
scalar * self.vector,
gpu=self.vector.is_cuda,
eigenstates=self.eigenstates,
)
Compute ⟨self|other⟩. The type of other must be StateVector.
PARAMETER DESCRIPTION
other

the other state

TYPE: State

RETURNS DESCRIPTION
Tensor

the inner product

Source code in emu_sv/state_vector.py
def inner(self, other: State) -> torch.Tensor:
"""
Compute <self|other>. The type of other must be StateVector.
Args:
other: the other state
Returns:
the inner product
"""
assert isinstance(other, StateVector), "Other state must be a StateVector"
assert (
self.vector.shape == other.vector.shape
), "States do not have the same shape"
# by our internal convention inner and norm return to cpu
return torch.vdot(self.vector, other.vector.to(self.vector.device)).cpu()
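Note the convention: torch.vdot conjugates its first argument, so inner computes ⟨self|other⟩. A minimal stdlib-only sketch of that convention (the `vdot` helper here is illustrative, not the PyTorch function):

```python
def vdot(a, b):
    """Inner product <a|b>: conjugate the first vector, as torch.vdot does."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

psi = [1 / 2**0.5, 1j / 2**0.5]  # normalized single-qubit state
print(abs(vdot(psi, psi)))       # ≈ 1.0
print(vdot([1, 0], [0, 1]))      # orthogonal basis states → 0
```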

Returns a State vector in the ground state |00..0>.

PARAMETER DESCRIPTION
num_sites

the number of qubits

TYPE: int

gpu

store the state on GPU if True, otherwise on CPU

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
StateVector

The described state

Examples:

StateVector.make(2,gpu=False)

Output:

tensor([1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=torch.complex128)
Source code in emu_sv/state_vector.py
@classmethod
def make(cls, num_sites: int, gpu: bool = True) -> StateVector:
"""
Returns a State vector in the ground state |00..0>.
Args:
num_sites: the number of qubits
gpu: whether gpu or cpu
Returns:
The described state
Examples:
```python
StateVector.make(2,gpu=False)
```
Output:
```
tensor([1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=torch.complex128)
```
"""
result = cls.zero(num_sites=num_sites, gpu=gpu)
result.vector[0] = 1.0
return result

Returns the norm of the state

RETURNS DESCRIPTION
Tensor

the norm of the state

Source code in emu_sv/state_vector.py
def norm(self) -> torch.Tensor:
"""Returns the norm of the state
Returns:
the norm of the state
"""
nrm: torch.Tensor = torch.linalg.vector_norm(self.vector).cpu()
return nrm

sample(*, num_shots=1000, one_state=None, p_false_pos=0.0, p_false_neg=0.0)


Samples bitstrings, taking into account the specified error rates.

PARAMETER DESCRIPTION
num_shots

how many bitstrings to sample

TYPE: int DEFAULT: 1000

p_false_pos

the rate at which a 0 is read as a 1

TYPE: float DEFAULT: 0.0

p_false_neg

the rate at which a 1 is read as a 0

TYPE: float DEFAULT: 0.0

RETURNS DESCRIPTION
Counter[str]

the measured bitstrings, by count

Source code in emu_sv/state_vector.py
def sample(
self,
*,
num_shots: int = 1000,
one_state: Eigenstate | None = None,
p_false_pos: float = 0.0,
p_false_neg: float = 0.0,
) -> Counter[str]:
"""
Samples bitstrings, taking into account the specified error rates.
Args:
num_shots: how many bitstrings to sample
p_false_pos: the rate at which a 0 is read as a 1
p_false_neg: the rate at which a 1 is read as a 0
Returns:
the measured bitstrings, by count
"""
probabilities = torch.abs(self.vector) ** 2
outcomes = torch.multinomial(probabilities, num_shots, replacement=True)
# Convert outcomes to bitstrings and count occurrences
counts = Counter(
[index_to_bitstring(self.n_qudits, outcome) for outcome in outcomes]
)
if p_false_neg > 0 or p_false_pos > 0:
counts = apply_measurement_errors(
counts,
p_false_pos=p_false_pos,
p_false_neg=p_false_neg,
)
return counts
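The sampling logic above (Born-rule probabilities, a multinomial draw, index-to-bitstring conversion) can be sketched with the standard library alone. The zero-padded big-endian bitstring convention and both helper names below are assumptions for illustration:

```python
import random
from collections import Counter

def index_to_bitstring(n_qubits: int, index: int) -> str:
    """Basis-state index as a zero-padded bitstring (assumed convention)."""
    return format(index, f"0{n_qubits}b")

def sample_bitstrings(probabilities, num_shots=1000, seed=0):
    """Draw num_shots basis states from a probability vector and count them."""
    n_qubits = (len(probabilities) - 1).bit_length()
    rng = random.Random(seed)
    outcomes = rng.choices(range(len(probabilities)), weights=probabilities, k=num_shots)
    return Counter(index_to_bitstring(n_qubits, o) for o in outcomes)

counts = sample_bitstrings([0.5, 0.0, 0.0, 0.5])  # a Bell-state distribution
print(sorted(counts))  # → ['00', '11']
```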

Returns an all-zero "state" vector. Warning: this has no physical meaning as-is!

PARAMETER DESCRIPTION
num_sites

the number of qubits

TYPE: int

gpu

store the vector on GPU if True, otherwise on CPU

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
StateVector

The zero state

Examples:

StateVector.zero(2,gpu=False)

Output:

tensor([0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=torch.complex128)
Source code in emu_sv/state_vector.py
@classmethod
def zero(
cls,
num_sites: int,
gpu: bool = True,
eigenstates: Sequence[Eigenstate] = ("r", "g"),
) -> StateVector:
"""
Returns a zero uninitialized "state" vector. Warning, this has no physical meaning as-is!
Args:
num_sites: the number of qubits
gpu: whether gpu or cpu
Returns:
The zero state
Examples:
```python
StateVector.zero(2,gpu=False)
```
Output:
```
tensor([0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], dtype=torch.complex128)
```
"""
device = "cuda" if gpu and DEVICE_COUNT > 0 else "cpu"
vector = torch.zeros(2**num_sites, dtype=dtype, device=device)
return cls(vector, gpu=gpu, eigenstates=eigenstates)

Wrapper around StateVector.inner.

PARAMETER DESCRIPTION
left

StateVector argument

TYPE: StateVector

right

StateVector argument

TYPE: StateVector

RETURNS DESCRIPTION
Tensor

the inner product

Examples:

factor = math.sqrt(2.0)
basis = ("r","g")
string_state1 = {"gg":1.0,"rr":1.0}
state1 = StateVector.from_state_string(basis=basis,
nqubits=nqubits,strings=string_state1)
string_state2 = {"gr":1.0/factor,"rr":1.0/factor}
state2 = StateVector.from_state_string(basis=basis,
nqubits=nqubits,strings=string_state2)
state1 = StateVector.from_state_amplitudes(eigenstates=basis,
amplitudes=string_state1)
string_state2 = {"gr":1.0/factor,"rr":1.0/factor}
state2 = StateVector.from_state_amplitudes(eigenstates=basis,
amplitudes=string_state2)
inner(state1,state2).item()

Output:

(0.49999999144286444+0j)
Source code in emu_sv/state_vector.py
def inner(left: StateVector, right: StateVector) -> torch.Tensor:
"""
Wrapper around StateVector.inner.
Args:
left: StateVector argument
right: StateVector argument
Returns:
the inner product
Examples:
```python
factor = math.sqrt(2.0)
basis = ("r","g")
string_state1 = {"gg":1.0,"rr":1.0}
state1 = StateVector.from_state_string(basis=basis,
nqubits=nqubits,strings=string_state1)
string_state2 = {"gr":1.0/factor,"rr":1.0/factor}
state2 = StateVector.from_state_string(basis=basis,
nqubits=nqubits,strings=string_state2)
```
```python
state1 = StateVector.from_state_amplitudes(eigenstates=basis,
amplitudes=string_state1)
string_state2 = {"gr":1.0/factor,"rr":1.0/factor}
state2 = StateVector.from_state_amplitudes(eigenstates=basis,
amplitudes=string_state2)
inner(state1,state2).item()
```
Output:
```
(0.49999999144286444+0j)
```
"""
assert (left.vector.shape == right.vector.shape) and (left.vector.dim() == 1), (
"Shape of left.vector and right.vector should be",
" the same and both need to be 1D tesnor",
)
return left.inner(right)

Bases: Operator[complex, Tensor, StateVector]

DenseOperator in emu-sv uses dense matrices. This class represents a quantum operator backed by a dense PyTorch tensor for state-vector simulation.

PARAMETER DESCRIPTION
matrix

Square complex tensor of shape (2ⁿ, 2ⁿ) representing the operator in the computational basis.

TYPE: Tensor

gpu

If True (the default), place the operator on a CUDA device when available; only 1 GPU is used.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DenseOperator

An operator object wrapping the provided matrix.

RAISES DESCRIPTION
ValueError

If 'matrix' is not a 2-D square tensor.

RuntimeError

If gpu=True but CUDA is not available (if applicable).

Source code in emu_sv/dense_operator.py
def __init__(
self,
matrix: torch.Tensor,
*,
gpu: bool = True,
):
device = "cuda" if gpu and DEVICE_COUNT > 0 else "cpu"
self.matrix = matrix.to(dtype=dtype, device=device)

Element-wise addition of two DenseOperators.

PARAMETER DESCRIPTION
other

a DenseOperator instance.

TYPE: Operator

RETURNS DESCRIPTION
DenseOperator

A new DenseOperator representing the sum.

Source code in emu_sv/dense_operator.py
def __add__(self, other: Operator) -> DenseOperator:
"""
Element-wise addition of two DenseOperators.
Args:
other: a DenseOperator instance.
Returns:
A new DenseOperator representing the sum.
"""
assert isinstance(
other, DenseOperator
), "DenseOperator can only be added to another DenseOperator."
return DenseOperator(self.matrix + other.matrix)

Compose two DenseOperators via matrix multiplication.

PARAMETER DESCRIPTION
other

a DenseOperator instance.

TYPE: Operator

RETURNS DESCRIPTION
DenseOperator

A new DenseOperator representing the product self @ other.

Source code in emu_sv/dense_operator.py
def __matmul__(self, other: Operator) -> DenseOperator:
"""
Compose two DenseOperators via matrix multiplication.
Args:
other: a DenseOperator instance.
Returns:
A new DenseOperator representing the product `self @ other`.
"""
assert isinstance(
other, DenseOperator
), "DenseOperator can only be multiplied with a DenseOperator."
return DenseOperator(self.matrix @ other.matrix)

Scalar multiplication of the DenseOperator.

PARAMETER DESCRIPTION
scalar

a number to scale the operator.

TYPE: complex

RETURNS DESCRIPTION
DenseOperator

A new DenseOperator scaled by the given scalar.

Source code in emu_sv/dense_operator.py
def __rmul__(self, scalar: complex) -> DenseOperator:
"""
Scalar multiplication of the DenseOperator.
Args:
scalar: a number to scale the operator.
Returns:
A new DenseOperator scaled by the given scalar.
"""
return DenseOperator(scalar * self.matrix)

Apply the DenseOperator to a given StateVector.

PARAMETER DESCRIPTION
other

a StateVector instance.

TYPE: State

RETURNS DESCRIPTION
StateVector

A new StateVector after applying the operator.

Source code in emu_sv/dense_operator.py
def apply_to(self, other: State) -> StateVector:
"""
Apply the DenseOperator to a given StateVector.
Args:
other: a StateVector instance.
Returns:
A new StateVector after applying the operator.
"""
assert isinstance(
other, StateVector
), "DenseOperator can only be applied to a StateVector."
return StateVector(self.matrix @ other.vector)

Compute the expectation value of the operator with respect to a state.

PARAMETER DESCRIPTION
state

a StateVector instance.

TYPE: State

RETURNS DESCRIPTION
Tensor

The expectation value as a float or complex number.

Source code in emu_sv/dense_operator.py
def expect(self, state: State) -> torch.Tensor:
"""
Compute the expectation value of the operator with respect to a state.
Args:
state: a StateVector instance.
Returns:
The expectation value as a float or complex number.
"""
assert isinstance(
state, StateVector
), "Only expectation values of StateVectors are supported."
return torch.vdot(state.vector, self.apply_to(state).vector).cpu()
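expect computes ⟨ψ|A|ψ⟩. The identity is easy to check by hand with a tiny pure-Python matrix-vector product (an illustrative sketch, not the emu_sv implementation):

```python
def expect(matrix, state):
    """<psi|A|psi> for a dense matrix A and a state vector psi."""
    applied = [sum(row[j] * state[j] for j in range(len(state))) for row in matrix]
    return sum(s.conjugate() * a for s, a in zip(state, applied))

pauli_z = [[1, 0], [0, -1]]
print(expect(pauli_z, [1, 0]))  # |0> is a +1 eigenstate → 1
print(expect(pauli_z, [0, 1]))  # |1> is a -1 eigenstate → -1
```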

Bases: Operator[complex, Tensor, StateVector]

This operator represents a sparse matrix in CSR (Compressed Sparse Row) format for efficient computation in the emu-sv emulator.

PARAMETER DESCRIPTION
matrix

The CSR matrix representation of the operator.

TYPE: Tensor

gpu

If True (the default), run on GPU when available; otherwise run on CPU.

TYPE: bool DEFAULT: True

Source code in emu_sv/sparse_operator.py
def __init__(
self,
matrix: torch.Tensor,
*,
gpu: bool = True,
):
device = "cuda" if gpu and DEVICE_COUNT > 0 else "cpu"
self.matrix = matrix.to(dtype=dtype, device=device)

Element-wise addition of two SparseOperators.

PARAMETER DESCRIPTION
other

a SparseOperator instance.

TYPE: Operator

RETURNS DESCRIPTION
SparseOperator

A new SparseOperator representing the sum.

Source code in emu_sv/sparse_operator.py
def __add__(self, other: Operator) -> SparseOperator:
"""
Element-wise addition of two SparseOperators.
Args:
other: a SparseOperator instance.
Returns:
A new SparseOperator representing the sum.
"""
assert isinstance(
other, SparseOperator
), "SparseOperator can only be added to another SparseOperator."
# TODO: figure out a better algorithm.
# self.matrix + other.matrix doesn't work on mac.
return SparseOperator(
sparse_add(
self.matrix.to_sparse_coo(), other.matrix.to_sparse_coo()
).to_sparse_csr()
)
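The COO round-trip above exists because CSR-on-CSR addition is not supported on every platform. The idea behind coordinate-format addition can be sketched with a dict keyed by (row, col), which is how COO storage behaves conceptually (the `sparse_add` helper below is hypothetical, stdlib only):

```python
def sparse_add(a: dict, b: dict) -> dict:
    """Add two sparse matrices stored as {(row, col): value} maps."""
    out = dict(a)
    for idx, val in b.items():
        out[idx] = out.get(idx, 0) + val
        if out[idx] == 0:
            del out[idx]  # drop explicit zeros to keep the result sparse
    return out

a = {(0, 0): 1.0, (1, 2): 2.0}
b = {(1, 2): -2.0, (3, 3): 5.0}
print(sparse_add(a, b))  # → {(0, 0): 1.0, (3, 3): 5.0}
```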

torch CSR tensor does not deepcopy automatically

Source code in emu_sv/sparse_operator.py
def __deepcopy__(self, memo: dict) -> SparseOperator:
"""torch CSR tensor does not deepcopy automatically"""
cls = self.__class__
result = cls(torch.clone(self.matrix), gpu=self.matrix.is_cuda)
memo[id(self)] = result
return result

Compose two SparseOperators via matrix multiplication.

PARAMETER DESCRIPTION
other

a SparseOperator instance.

TYPE: Operator

RETURNS DESCRIPTION
SparseOperator

A new SparseOperator representing the product self @ other.

Source code in emu_sv/sparse_operator.py
def __matmul__(self, other: Operator) -> SparseOperator:
"""
Compose two SparseOperators via matrix multiplication.
Args:
other: a SparseOperator instance.
Returns:
A new SparseOperator representing the product `self @ other`.
"""
raise NotImplementedError()

Scalar multiplication of the SparseOperator.

PARAMETER DESCRIPTION
scalar

a number to scale the operator.

TYPE: complex

RETURNS DESCRIPTION
SparseOperator

A new SparseOperator scaled by the given scalar.

Source code in emu_sv/sparse_operator.py
def __rmul__(self, scalar: complex) -> SparseOperator:
"""
Scalar multiplication of the SparseOperator.
Args:
scalar: a number to scale the operator.
Returns:
A new SparseOperator scaled by the given scalar.
"""
return SparseOperator(scalar * self.matrix)

Apply the SparseOperator to a given StateVector.

PARAMETER DESCRIPTION
other

a StateVector instance.

TYPE: State

RETURNS DESCRIPTION
StateVector

A new StateVector after applying the operator.

Source code in emu_sv/sparse_operator.py
def apply_to(self, other: State) -> StateVector:
"""
Apply the SparseOperator to a given StateVector.
Args:
other: a StateVector instance.
Returns:
A new StateVector after applying the operator.
"""
assert isinstance(
other, StateVector
), "SparseOperator can only be applied to a StateVector."
return StateVector(self.matrix @ other.vector)

Compute the expectation value of the operator with respect to a state.

PARAMETER DESCRIPTION
state

a StateVector instance.

TYPE: State

RETURNS DESCRIPTION
Tensor

The expectation value as a float or complex number.

Source code in emu_sv/sparse_operator.py
def expect(self, state: State) -> torch.Tensor:
"""
Compute the expectation value of the operator with respect to a state.
Args:
state: a StateVector instance.
Returns:
The expectation value as a float or complex number.
"""
assert isinstance(
state, StateVector
), "Only expectation values of StateVectors are supported."
return torch.vdot(state.vector, self.apply_to(state).vector).cpu()

Bases: State[complex, Tensor]

Represents an n-qubit density matrix ρ in the computational (|g⟩, |r⟩) basis. The input should be a square complex tensor with shape (2ⁿ, 2ⁿ), where n is the number of atoms. ρ must be Hermitian, positive semidefinite, and have trace 1.

PARAMETER DESCRIPTION
matrix

Square complex tensor of shape (2ⁿ, 2ⁿ), Hermitian with trace 1, that represents the state in the computational basis.

TYPE: Tensor

gpu

If True, place the operator on a CUDA device when available. Default: True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
DensityMatrix

A density-matrix wrapper around the provided tensor.

RAISES DESCRIPTION
ValueError

If matrix is not a square 2D tensor of shape (2ⁿ, 2ⁿ) or fails validation (e.g., not Hermitian / trace != 1) if validation is performed.

RuntimeError

If gpu=True but CUDA is not available (if the implementation moves tensors to CUDA).

Source code in emu_sv/density_matrix_state.py
def __init__(
self,
matrix: torch.Tensor,
*,
gpu: bool = True,
):
# NOTE: this accepts also zero matrices.
device = "cuda" if gpu and DEVICE_COUNT > 0 else "cpu"
self.matrix = matrix.to(dtype=dtype, device=device)

The number of qudits in the state.

Convert a state vector to a density matrix. This function takes a state vector |ψ❭ and returns the corresponding density matrix ρ = |ψ❭❬ψ| representing the pure state |ψ❭.

Examples:

bell_state_vec = 0.7071 * torch.tensor([1.0, 0.0, 0.0, 1.0j],
dtype=torch.complex128)
bell_state = StateVector(bell_state_vec, gpu=False)
density = DensityMatrix.from_state_vector(bell_state)
print(density.matrix)

Output:

tensor([[0.5000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000-0.5000j],
[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
[0.0000+0.5000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.5000+0.0000j]],
dtype=torch.complex128)
Source code in emu_sv/density_matrix_state.py
@classmethod
def from_state_vector(cls, state: StateVector) -> DensityMatrix:
"""
Convert a state vector to a density matrix.
This function takes a state vector |ψ❭ and returns the corresponding
density matrix ρ = |ψ❭❬ψ| representing the pure state |ψ❭.
Examples:
```python
bell_state_vec = 0.7071 * torch.tensor([1.0, 0.0, 0.0, 1.0j],
dtype=torch.complex128)
bell_state = StateVector(bell_state_vec, gpu=False)
density = DensityMatrix.from_state_vector(bell_state)
print(density.matrix)
```
Output:
```
tensor([[0.5000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000-0.5000j],
[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
[0.0000+0.5000j, 0.0000+0.0000j, 0.0000+0.0000j, 0.5000+0.0000j]],
dtype=torch.complex128)
```
"""
return cls(
torch.outer(state.vector, state.vector.conj()), gpu=state.vector.is_cuda
)
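The outer product ρ = |ψ⟩⟨ψ| used above can be reproduced in a few lines of plain Python; the trace of the result is 1 for any normalized input (illustrative sketch):

```python
def density_from_state(psi):
    """rho[i][j] = psi_i * conj(psi_j), i.e. the outer product |psi><psi|."""
    return [[x * y.conjugate() for y in psi] for x in psi]

c = 1 / 2**0.5
rho = density_from_state([c, 0.0, 0.0, c])  # (|00> + |11>)/sqrt(2)
trace = sum(rho[i][i] for i in range(4))
print(round(trace, 12))  # → 1.0
```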

Creates the density matrix of the ground state |000...0>

Source code in emu_sv/density_matrix_state.py
@classmethod
def make(cls, n_atoms: int, gpu: bool = True) -> DensityMatrix:
"""Creates the density matrix of the ground state |000...0>"""
result = torch.zeros(2**n_atoms, 2**n_atoms, dtype=dtype)
result[0, 0] = 1.0
return cls(result, gpu=gpu)

Compute Tr(self^† @ other). The type of other must be DensityMatrix.

PARAMETER DESCRIPTION
other

the other state

TYPE: State

RETURNS DESCRIPTION
Tensor

the inner product

Examples:

density_bell_state = 0.5 * torch.tensor([[1, 0, 0, 1], [0, 0, 0, 0],
[0, 0, 0, 0], [1, 0, 0, 1]],dtype=torch.complex128)
density_c = DensityMatrix(density_bell_state, gpu=False)
density_c.overlap(density_c)

Output:

tensor(1.+0.j, dtype=torch.complex128)
Source code in emu_sv/density_matrix_state.py
def overlap(self, other: State) -> torch.Tensor:
"""
Compute Tr(self^† @ other). The type of other must be DensityMatrix.
Args:
other: the other state
Returns:
the inner product
Examples:
```python
density_bell_state = 0.5 * torch.tensor([[1, 0, 0, 1], [0, 0, 0, 0],
[0, 0, 0, 0], [1, 0, 0, 1]],dtype=torch.complex128)
density_c = DensityMatrix(density_bell_state, gpu=False)
density_c.overlap(density_c)
```
Output:
```
tensor(1.+0.j, dtype=torch.complex128)
```
"""
assert isinstance(
other, DensityMatrix
), "Other state also needs to be a DensityMatrix"
assert (
self.matrix.shape == other.matrix.shape
), "States do not have the same number of sites"
return torch.vdot(
self.matrix.flatten(), other.matrix.to(self.matrix.device).flatten()
)
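The implementation above relies on the identity Tr(ρ†σ) = vdot(flatten(ρ), flatten(σ)). A stdlib-only check of that identity on the Bell-state density matrix (illustrative sketch, not the emu_sv implementation):

```python
def overlap(rho, sigma):
    """Tr(rho^dagger @ sigma) as an element-wise conjugated sum."""
    n = len(rho)
    return sum(rho[i][j].conjugate() * sigma[i][j] for i in range(n) for j in range(n))

bell = [[0.5, 0, 0, 0.5],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0.5, 0, 0, 0.5]]
print(overlap(bell, bell))  # pure state, so Tr(rho @ rho) = 1 → 1.0
```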

sample(num_shots=1000, one_state=None, p_false_pos=0.0, p_false_neg=0.0)


Samples bitstrings, taking into account the specified error rates.

PARAMETER DESCRIPTION
num_shots

how many bitstrings to sample

TYPE: int DEFAULT: 1000

p_false_pos

the rate at which a 0 is read as a 1

TYPE: float DEFAULT: 0.0

p_false_neg

the rate at which a 1 is read as a 0

TYPE: float DEFAULT: 0.0

RETURNS DESCRIPTION
Counter[str]

the measured bitstrings, by count

Examples:

torch.manual_seed(1234)
from emu_sv import StateVector
bell_vec = 0.7071 * torch.tensor([1.0, 0.0, 0.0, 1.0j],
dtype=torch.complex128)
bell_state_vec = StateVector(bell_vec)
bell_density = DensityMatrix.from_state_vector(bell_state_vec)
bell_density.sample(1000)

Output:

Counter({'00': 517, '11': 483})
Source code in emu_sv/density_matrix_state.py
def sample(
self,
num_shots: int = 1000,
one_state: Eigenstate | None = None,
p_false_pos: float = 0.0,
p_false_neg: float = 0.0,
) -> Counter[str]:
"""
Samples bitstrings, taking into account the specified error rates.
Args:
num_shots: how many bitstrings to sample
p_false_pos: the rate at which a 0 is read as a 1
p_false_neg: the rate at which a 1 is read as a 0
Returns:
the measured bitstrings, by count
Examples:
```python
torch.manual_seed(1234)
from emu_sv import StateVector
bell_vec = 0.7071 * torch.tensor([1.0, 0.0, 0.0, 1.0j],
dtype=torch.complex128)
bell_state_vec = StateVector(bell_vec)
bell_density = DensityMatrix.from_state_vector(bell_state_vec)
bell_density.sample(1000)
```
Output:
```
Counter({'00': 517, '11': 483})
```
"""
probabilities = torch.abs(self.matrix.diagonal())
outcomes = torch.multinomial(probabilities, num_shots, replacement=True)
# Convert outcomes to bitstrings and count occurrences
counts = Counter(
[index_to_bitstring(self.n_qudits, outcome) for outcome in outcomes]
)
if p_false_neg > 0 or p_false_pos > 0:
counts = apply_measurement_errors(
counts,
p_false_pos=p_false_pos,
p_false_neg=p_false_neg,
)
return counts