Python Functions

Kraus

Parameterization of a quantum state using Kraus operators.

The evolved density matrix \(\rho\) is given by

\[\begin{aligned} \rho=\sum_i K_i \rho_0 K_i^{\dagger}, \end{aligned}\]

where \(\rho_0\) is the initial density matrix and \(K_i\) are the Kraus operators.

Parameters:

- rho0 (array), required: Initial density matrix.
- K (list), required: Kraus operators.
- dK (list), required: Derivatives of the Kraus operators with respect to the unknown parameters to be estimated. This is a nested list whose first index labels the Kraus operator and whose second index labels the parameter; for example, dK[0][1] is the derivative of the first Kraus operator with respect to the second parameter.

Returns:

- (tuple):
    - rho (np.array): Evolved density matrix.
    - drho (list): Derivatives of the evolved density matrix with respect to the unknown parameters. Each element in the list is a matrix representing the partial derivative of \(\rho\) with respect to one parameter.

Source code in quanestimation/Parameterization/NonDynamics.py
def Kraus(rho0, K, dK):
    r"""
    Parameterization of a quantum state using Kraus operators.

    The evolved density matrix $\rho$ is given by

    \begin{aligned}
        \rho=\sum_i K_i \rho_0 K_i^{\dagger},
    \end{aligned}

    where $\rho_0$ is the initial density matrix and $K_i$ are the Kraus operators.

    Args: 
        rho0 (np.array): 
            Initial density matrix.
        K (list): 
            Kraus operators.
        dK (list): 
            Derivatives of the Kraus operators with respect to the unknown parameters to be 
            estimated. This is a nested list where the first index corresponds to the Kraus operator 
            and the second index corresponds to the parameter. For example, `dK[0][1]` is the derivative 
            of the first Kraus operator with respect to the second parameter.

    Returns:
        (tuple):
            rho (np.array): 
                Evolved density matrix.

            drho (list): 
                Derivatives of the evolved density matrix with respect to the unknown parameters.  
                Each element in the list is a matrix representing the partial derivative of $\rho$ with 
                respect to one parameter.
    """

    k_num = len(K)
    para_num = len(dK[0])
    dK_reshape = [[dK[i][j] for i in range(k_num)] for j in range(para_num)]

    rho = sum([np.dot(Ki, np.dot(rho0, Ki.conj().T)) for Ki in K])
    drho = [sum([(np.dot(dKi, np.dot(rho0, Ki.conj().T))+ np.dot(Ki, np.dot(rho0, dKi.conj().T))) for (Ki, dKi) in zip(K, dKj)]) for dKj in dK_reshape]

    return rho, drho
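
A minimal usage sketch (not part of the library source): it parameterizes a single-qubit phase channel with one Kraus operator. The operator U, its derivative dU, and the import path are illustrative assumptions.

import numpy as np
from quanestimation import Kraus  # import path assumed

x = 0.5  # illustrative value of the unknown parameter
U = np.array([[1.0, 0.0], [0.0, np.exp(1j * x)]])        # single Kraus operator
dU = np.array([[0.0, 0.0], [0.0, 1j * np.exp(1j * x)]])  # dU/dx

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # initial state |+><+|

# dK is indexed as [Kraus operator][parameter]
rho, drho = Kraus(rho0, [U], [[dU]])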

Metrological resources

Spin squeezing

Calculation of the spin squeezing parameter for a density matrix.

The spin squeezing parameter \(\xi\) given by Kitagawa and Ueda is defined as:

\[ \xi^2 = \frac{N(\Delta J_{\vec{n}_1})^2}{\langle J_{\vec{n}_3}\rangle^2} \]

where \(J_{\vec{n}_i}\) are the collective spin operators.

The spin squeezing parameter \(\xi\) given by Wineland et al. is defined as:

\[ \xi^2 = \left(\frac{j}{\langle \vec{J}\rangle}\right)^2 \frac{N(\Delta J_{\vec{n}_1})^2}{\langle J_{\vec{n}_3}\rangle^2} \]

Parameters:

- rho (array), required: Density matrix.
- basis (str), default "Dicke": Basis to use: "Dicke" (default) or "Pauli".
- output (str), default "KU": Type of spin squeezing to calculate:
    - "KU": Kitagawa-Ueda squeezing parameter.
    - "WBIMH": Wineland et al. squeezing parameter.

Returns:

- (float): Spin squeezing parameter.

Raises:

- ValueError: If basis has an invalid value.
- ValueError: If output has an invalid value.

Source code in quanestimation/Resource/Resource.py
def SpinSqueezing(rho, basis="Dicke", output="KU"):
    r"""
    Calculation of the spin squeezing parameter for a density matrix.

    The spin squeezing parameter $\xi$ given by Kitagawa and Ueda is defined as:

    $$
    \xi^2 = \frac{N(\Delta J_{\vec{n}_1})^2}{\langle J_{\vec{n}_3}\rangle^2}
    $$

    where $J_{\vec{n}_i}$ are the collective spin operators.

    The spin squeezing parameter $\xi$ given by Wineland et al. is defined as:

    $$
    \xi^2 = \left(\frac{j}{\langle \vec{J}\rangle}\right)^2 \frac{N(\Delta J_{\vec{n}_1})^2}{\langle J_{\vec{n}_3}\rangle^2}
    $$

    Args:
        rho (np.array): 
            Density matrix.
        basis (str, optional): 
            Basis to use: "Dicke" (default) or "Pauli".
        output (str, optional): 
            Type of spin squeezing to calculate:  
                - "KU": Kitagawa-Ueda squeezing parameter.  
                - "WBIMH": Wineland et al. squeezing parameter.  

    Returns:
        (float): 
            Spin squeezing parameter.

    Raises:
        ValueError: If `basis` has invalid value.  
        ValueError: If `output` has invalid value.  
    """

    if basis == "Pauli":
        N = int(np.log(len(rho)) / np.log(2))
        j = N / 2
        coef = 4.0 / float(N)
        sp = np.array([[0.0, 1.0], [0.0, 0.0]])
        sz = np.array([[1., 0.], [0., -1.]])
        jp = []
        jz = []
        for i in range(N):
            if i == 0:
                jp_tp = np.kron(sp, np.identity(2 ** (N - 1)))
                jz_tp = np.kron(sz, np.identity(2 ** (N - 1)))
            elif i == N - 1:
                jp_tp = np.kron(np.identity(2 ** (N - 1)), sp)
                jz_tp = np.kron(np.identity(2 ** (N - 1)), sz)
            else:
                jp_tp = np.kron(
                    np.identity(2 ** i), 
                    np.kron(sp, np.identity(2 ** (N - 1 - i)))
                )
                jz_tp = np.kron(
                    np.identity(2 ** i), 
                    np.kron(sz, np.identity(2 ** (N - 1 - i)))
                )
            jp.append(jp_tp)
            jz.append(jz_tp)
        Jp = sum(jp)
        Jz = 0.5 * sum(jz)
    elif basis == "Dicke":
        N = len(rho) - 1
        j = N / 2 
        coef = 4.0 / float(N)       
        offdiag = [
            np.sqrt(float(j * (j + 1) - m * (m + 1))) 
            for m in np.arange(j, -j - 1, -1)
        ][1:]
        # Ensure we create a complex array
        Jp = np.diag(offdiag, 1).astype(complex)
        Jz = np.diag(np.arange(j, -j - 1, -1))
    else:
        valid_types = ["Dicke", "Pauli"]
        raise ValueError(
                f"Invalid basis: '{basis}'. Supported types: {', '.join(valid_types)}"
            )    

    Jx = 0.5 * (Jp + np.conj(Jp).T)
    Jy = -0.5 * 1j * (Jp - np.conj(Jp).T)

    Jx_mean = np.trace(rho @ Jx)
    Jy_mean = np.trace(rho @ Jy)
    Jz_mean = np.trace(rho @ Jz)

    if Jx_mean == 0 and Jy_mean == 0:
        if Jz_mean == 0:
            raise ValueError("The density matrix does not have a valid spin squeezing.")
        else:
            A = np.trace(rho @ (Jx @ Jx - Jy @ Jy))
            B = np.trace(rho @ (Jx @ Jy + Jy @ Jx))
            C = np.trace(rho @ (Jx @ Jx + Jy @ Jy))
    else:
        costheta = Jz_mean / np.sqrt(Jx_mean**2 + Jy_mean**2 + Jz_mean**2)
        sintheta = np.sin(np.arccos(costheta))
        cosphi = Jx_mean / np.sqrt(Jx_mean**2 + Jy_mean**2)
        sinphi = (np.sin(np.arccos(cosphi)) if Jy_mean > 0 
                  else np.sin(2 * np.pi - np.arccos(cosphi)))

        Jn1 = -Jx * sinphi + Jy * cosphi
        Jn2 = (-Jx * costheta * cosphi 
                - Jy * costheta * sinphi 
                + Jz * sintheta)
        A = np.trace(rho @ (Jn1 @ Jn1 - Jn2 @ Jn2))
        B = np.trace(rho @ (Jn1 @ Jn2 + Jn2 @ Jn1))
        C = np.trace(rho @ (Jn1 @ Jn1 + Jn2 @ Jn2))

    V_minus = 0.5 * (C - np.sqrt(A**2 + B**2))
    V_minus = np.real(V_minus)
    xi = coef * V_minus

    if output == "KU":
        pass
    elif output == "WBIMH":
        xi = (N / 2)**2 * xi / (Jx_mean**2 + Jy_mean**2 + Jz_mean**2)
    else:
        valid_types = ["KU", "WBIMH"]
        raise ValueError(
                f"Invalid basis: '{basis}'. Supported types: {', '.join(valid_types)}"
            )  

    return xi
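
As a quick sanity check (a sketch, not part of the library source; the import path is an assumption), a spin-coherent state of N = 2 spins in the Dicke basis is not squeezed, so the Kitagawa-Ueda parameter should come out as 1:

import numpy as np
from quanestimation import SpinSqueezing  # import path assumed

# N = 2 spins in the Dicke basis: the density matrix is (N+1) x (N+1) = 3 x 3.
# The spin-coherent state |j=1, m=1> occupies the first Dicke basis element.
rho = np.zeros((3, 3), dtype=complex)
rho[0, 0] = 1.0

print(SpinSqueezing(rho, basis="Dicke", output="KU"))  # ~1.0 (no squeezing)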

Target time

Calculation of the time to reach a given precision limit.

This function finds the earliest time \(t\) in tspan where the objective function func reaches or crosses the target value \(f\). The first argument of func must be the time variable.

Parameters:

- f (float), required: The target value of the objective function.
- tspan (array), required: Time points for the evolution.
- func (callable), required: The objective function to evaluate. Must return a float.
- *args (tuple), optional: Positional arguments to pass to func.
- **kwargs (dict), optional: Keyword arguments to pass to func.

Returns:

- (float): Time to reach the given target precision; None if the target is not reached within tspan.

Source code in quanestimation/Resource/Resource.py
def TargetTime(f, tspan, func, *args, **kwargs):
    r"""
    Calculation of the time to reach a given precision limit. 

    This function finds the earliest time $t$ in `tspan` where the objective 
    function `func` reaches or crosses the target value $f$. The first argument 
    of func must be the time variable.

    Args:
        f (float): 
            The target value of the objective function.
        tspan (array): 
            Time points for the evolution.
        func (callable): 
            The objective function to evaluate. Must return a float.
        *args (tuple): 
            Positional arguments to pass to `func`.
        **kwargs (dict): 
            Keyword arguments to pass to `func`.

    Returns:
        (float): 
            Time to reach the given target precision.
    """
    # Check if we're already at the target at the first point
    f0 = func(tspan[0], *args, **kwargs)
    if np.isclose(f0, f, atol=1e-8):
        return tspan[0]

    # Iterate through time points
    for i in range(1, len(tspan)):
        f1 = func(tspan[i], *args, **kwargs)

        # Check if we've crossed the target
        if (f0 - f) * (f1 - f) <= 0:
            return tspan[i]
        elif np.isclose(f1, f, atol=1e-8):
            return tspan[i]

        f0 = f1

    # No crossing found
    print("No time is found in the given time span to reach the target.")

    return None
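
A minimal sketch of the call pattern (the decaying objective and the import path are illustrative assumptions): the target value 0.5 of exp(-t) is reached near t = ln 2, and the function returns the first grid point at or past that crossing.

import numpy as np
from quanestimation import TargetTime  # import path assumed

def objective(t, gamma):
    # toy objective that decays in time
    return np.exp(-gamma * t)

tspan = np.linspace(0.0, 2.0, 201)
print(TargetTime(0.5, tspan, objective, 1.0))  # gamma passed via *args; ~0.70 (ln 2 ≈ 0.693)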

Quantum Cramér-Rao bounds

Classical Fisher information matrix (CFIM)

Calculation of the classical Fisher information matrix for the chosen measurements.

This function computes the classical Fisher information (CFI) and classical Fisher information matrix (CFIM) for a density matrix. The entry of CFIM \(\mathcal{I}\) is defined as

\[ \mathcal{I}_{ab}=\sum_y\frac{1}{p(y|\textbf{x})}[\partial_a p(y|\textbf{x})][\partial_b p(y|\textbf{x})], \]
Symbols
  • \(p(y|\textbf{x})=\mathrm{Tr}(\rho\Pi_y)\).
  • \(\rho\): the parameterized density matrix.

Parameters:

- rho (array), required: Density matrix.
- drho (list), required: List of derivative matrices of the density matrix with respect to the unknown parameters to be estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
- M (list), default []: List of positive operator-valued measure (POVM) elements. The default measurement is a set of rank-one symmetric informationally complete POVM (SIC-POVM) elements.
- eps (float), default 1e-08: Machine epsilon for numerical stability.

Returns:

- (float/array): For single parameter estimation (the length of drho is equal to one), the output is the CFI; for multiparameter estimation (the length of drho is more than one), it returns the CFIM.

Raises:

- TypeError: If drho is not a list.
- TypeError: If M is not a list.

Example

rho = np.array([[0.5, 0], [0, 0.5]])
drho = [np.array([[1, 0], [0, -1]])]
cfim = CFIM(rho, drho)

Notes

The SIC-POVM is constructed from the Weyl-Heisenberg covariant SIC-POVM fiducial state, which can be downloaded from https://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def CFIM(rho, drho, M=[], eps=1e-8):
    r"""
    Calculation of the classical Fisher information matrix for the chosen measurements.

    This function computes the classical Fisher information (CFI) and classical Fisher 
    information matrix (CFIM) for a density matrix. The entry of CFIM $\mathcal{I}$
    is defined as

    $$
    \mathcal{I}_{ab}=\sum_y\frac{1}{p(y|\textbf{x})}[\partial_a p(y|\textbf{x})][\partial_b p(y|\textbf{x})],
    $$

    Symbols: 
        - $p(y|\textbf{x})=\mathrm{Tr}(\rho\Pi_y)$.
        - $\rho$: the parameterized density matrix.

    Args: 
        rho (np.array): 
            Density matrix.
        drho (list): 
            List of derivative matrices of the density matrix on the unknown 
            parameters to be estimated. For example, drho[0] is the derivative 
            matrix on the first parameter.
        M (list, optional): 
            List of positive operator-valued measure (POVM). The default 
            measurement is a set of rank-one symmetric informationally complete POVM (SIC-POVM).
        eps (float, optional): 
            Machine epsilon for numerical stability.

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of drho is equal to one), the output is CFI 
            and for multiparameter estimation (the length of drho is more than one), it returns CFIM.

    Raises:
        TypeError: If drho is not a list.
        TypeError: If M is not a list.   

    Example:
        rho = np.array([[0.5, 0], [0, 0.5]])

        drho = [np.array([[1, 0], [0, -1]])]

        cfim = CFIM(rho, drho)     

    Notes: 
        SIC-POVM is calculated by the Weyl-Heisenberg covariant SIC-POVM fiducial state 
        which can be downloaded from [here](https://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    if not isinstance(drho, list):
        raise TypeError("Please make sure drho is a list!")

    if not M:
        M = SIC(len(rho[0]))
    else:
        if not isinstance(M, list):
            raise TypeError("Please make sure M is a list!")

    num_measurements = len(M)
    num_params = len(drho)
    cfim_res = np.zeros([num_params, num_params])

    for i in range(num_measurements):
        povm_element = M[i]
        p = np.real(np.trace(rho @ povm_element))
        c_add = np.zeros([num_params, num_params])

        if p > eps:
            for param_i in range(num_params):
                drho_i = drho[param_i]
                dp_i = np.real(np.trace(drho_i @ povm_element))

                for param_j in range(param_i, num_params):
                    drho_j = drho[param_j]
                    dp_j = np.real(np.trace(drho_j @ povm_element))
                    c_add[param_i][param_j] = np.real(dp_i * dp_j / p)
                    c_add[param_j][param_i] = np.real(dp_i * dp_j / p)

        cfim_res += c_add

    if num_params == 1:
        return cfim_res[0][0]
    else:
        return cfim_res
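
A single-parameter sketch (not part of the library source; the model and the import path are assumptions): for the qubit state rho(x) = (I + sin(x) sigma_x + cos(x) sigma_z)/2 measured in the sigma_z basis, the CFI equals 1 for any x, which makes the call easy to verify by hand.

import numpy as np
from quanestimation import CFIM  # import path assumed

x = 0.3
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

rho = 0.5 * (np.eye(2) + np.sin(x) * sx + np.cos(x) * sz)
drho = [0.5 * (np.cos(x) * sx - np.sin(x) * sz)]  # d(rho)/dx

# explicit projective measurement in the sigma_z basis (avoids the SIC-POVM default)
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

print(CFIM(rho, drho, M=M))  # 1.0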

Fisher information matrix (FIM)

Calculation of the classical Fisher information matrix (CFIM) for a given probability distribution.

This function computes the classical Fisher information matrix (CFIM) for a given probability distribution. The entry of FIM \(I\) is defined as

\[ I_{ab}=\sum_{y}\frac{1}{p_y}[\partial_a p_y][\partial_b p_y], \]
Symbols
  • \(\{p_y\}\): a set of the discrete probability distribution.

Parameters:

- p (array), required: The probability distribution.
- dp (list), required: Derivatives of the probability distribution with respect to the unknown parameters to be estimated. For example, dp[0] is the derivative vector with respect to the first parameter.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (float/array): For single parameter estimation (the length of dp is equal to one), the output is the CFI; for multiparameter estimation (the length of dp is more than one), it returns the CFIM.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def FIM(p, dp, eps=1e-8):
    r"""
    Calculation of the classical Fisher information matrix (CFIM) for a given probability distribution.

    This function computes the classical Fisher information matrix (CFIM) for a given probability 
    distribution. The entry of FIM $I$ is defined as

    $$
    I_{ab}=\sum_{y}\frac{1}{p_y}[\partial_a p_y][\partial_b p_y],
    $$

    Symbols: 
        - $\{p_y\}$: a set of the discrete probability distribution.

    Args: 
        p (np.array): 
            The probability distribution.
        dp (list): 
            Derivatives of the probability distribution on the unknown parameters to 
            be estimated. For example, dp[0] is the derivative vector on the first parameter.
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of dp is equal to one), the output is CFI 
            and for multiparameter estimation (the length of dp is more than one), it returns CFIM.
    """

    num_params = len(dp)
    num_measurements = len(p)
    fim_matrix = np.zeros([num_params, num_params])

    for outcome_idx in range(num_measurements):
        p_value = p[outcome_idx]
        fim_add = np.zeros([num_params, num_params])

        if p_value > eps:
            for param_i in range(num_params):
                dp_i = dp[param_i][outcome_idx]

                for param_j in range(param_i, num_params):
                    dp_j = dp[param_j][outcome_idx]
                    term = np.real(dp_i * dp_j / p_value)
                    fim_add[param_i][param_j] = term
                    fim_add[param_j][param_i] = term

        fim_matrix += fim_add

    if num_params == 1:
        return fim_matrix[0][0]
    else:
        return fim_matrix
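
A minimal sketch for a two-outcome (biased-coin) distribution p(theta) = (theta, 1 - theta); the import path is an assumption. The Fisher information is 1/theta + 1/(1 - theta).

import numpy as np
from quanestimation import FIM  # import path assumed

theta = 0.3
p = np.array([theta, 1.0 - theta])  # probability distribution
dp = [np.array([1.0, -1.0])]        # derivative of each probability with respect to theta

print(FIM(p, dp))  # 1/0.3 + 1/0.7 ≈ 4.7619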

Fisher information (FI_Expt)

Calculate the classical Fisher information (CFI) based on experimental data.

Parameters:

- data_true (array), required: Experimental data obtained at the true parameter value.
- data_shifted (array), required: Experimental data obtained at the parameter value shifted by delta_x.
- delta_x (float), required: Small known parameter shift.
- ftype (str), default "norm": Probability distribution of the data. Options:
    - "norm": normal distribution (default).
    - "gamma": gamma distribution.
    - "rayleigh": Rayleigh distribution.
    - "poisson": Poisson distribution.

Returns:

- (float): Classical Fisher information.

Raises:

- ValueError: If ftype is not one of the supported types ("norm", "poisson", "gamma", "rayleigh").

Notes

The current implementation may be unstable and is subject to future modification.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def FI_Expt(data_true, data_shifted, delta_x, ftype="norm"):
    """
    Calculate the classical Fisher information (CFI) based on experimental data.

    Args:
        data_true (np.array): 
            Experimental data obtained at the true parameter value.
        data_shifted (np.array): 
            Experimental data obtained at parameter value shifted by delta_x.
        delta_x (float): 
            Small known parameter shift.
        ftype (str, optional): 
            Probability distribution of the data. Options:  
                - "norm": normal distribution (default).  
                - "gamma": gamma distribution.  
                - "rayleigh": Rayleigh distribution.  
                - "poisson": Poisson distribution.  

    Returns: 
        (float): 
            Classical Fisher information

    Raises:
        ValueError: 
            If `ftype` is not one of the supported types ("norm", "poisson", "gamma", "rayleigh").    

    Notes:
        The current implementation may be unstable and is subject to future modification.
    """
    fidelity = 0.0
    if ftype == "norm":
        mu_true, std_true = norm.fit(data_true)
        mu_shifted, std_shifted = norm.fit(data_shifted)
        f_function = lambda x: np.sqrt(
            norm.pdf(x, mu_true, std_true) * norm.pdf(x, mu_shifted, std_shifted)
        )
        fidelity, _ = quad(f_function, -np.inf, np.inf)

    elif ftype == "gamma":
        a_true, alpha_true, beta_true = gamma.fit(data_true)
        a_shifted, alpha_shifted, beta_shifted = gamma.fit(data_shifted)
        f_function = lambda x: np.sqrt(
            gamma.pdf(x, a_true, alpha_true, beta_true) *
            gamma.pdf(x, a_shifted, alpha_shifted, beta_shifted)
        )
        fidelity, _ = quad(f_function, 0., np.inf)

    elif ftype == "rayleigh":
        mean_true, var_true = rayleigh.fit(data_true)
        mean_shifted, var_shifted = rayleigh.fit(data_shifted)
        f_function = lambda x: np.sqrt(
            rayleigh.pdf(x, mean_true, var_true) *
            rayleigh.pdf(x, mean_shifted, var_shifted)
        )
        fidelity, _ = quad(f_function, -np.inf, np.inf)

    elif ftype == "poisson":
        k_max = max(max(data_true) + 1, max(data_shifted) + 1)
        k_values = np.arange(k_max)
        p_true = poisson.pmf(k_values, np.mean(data_true))
        p_shifted = poisson.pmf(k_values, np.mean(data_shifted))
        p_true /= np.sum(p_true)
        p_shifted /= np.sum(p_shifted)
        fidelity = np.sum(np.sqrt(p_true * p_shifted))

    else:
        valid_types = ["norm", "poisson", "gamma", "rayleigh"]
        raise ValueError(
            f"Invalid distribution type: '{ftype}'. "
            f"Supported types are: {', '.join(valid_types)}"
        )

    fisher_information = 8 * (1 - fidelity) / delta_x**2
    return fisher_information
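
A sketch with synthetic data (sample sizes, seed, and import path are assumptions): for Gaussian data with unit variance the CFI of the mean is 1/sigma^2 = 1, and the returned estimate should fluctuate around that value.

import numpy as np
from quanestimation import FI_Expt  # import path assumed

rng = np.random.default_rng(0)
sigma, dx = 1.0, 0.05

data_true = rng.normal(0.0, sigma, 100000)
data_shifted = rng.normal(dx, sigma, 100000)

print(FI_Expt(data_true, data_shifted, dx, ftype="norm"))  # ≈ 1, up to sampling noise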

Symmetric logarithmic derivative (SLD)

Calculation of the symmetric logarithmic derivative (SLD) for a density matrix.

This function computes the SLD operator \(L_a\), which is determined by

\[ \partial_{a}\rho=\frac{1}{2}(\rho L_{a}+L_{a}\rho) \]

with \(\rho\) the parameterized density matrix. The entries of the SLD can be calculated as

\[ \langle\lambda_i|L_{a}|\lambda_j\rangle=\frac{2\langle\lambda_i| \partial_{a}\rho |\lambda_j\rangle}{\lambda_i+\lambda_j} \]

for \(\lambda_i~(\lambda_j) \neq 0\). If \(\lambda_i=\lambda_j=0\), the corresponding entry of the SLD is set to zero.

Parameters:

- rho (array), required: Density matrix.
- drho (list), required: Derivatives of the density matrix with respect to the unknown parameters to be estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
- rep (str), default "original": The basis for the SLDs. Options:
    - "original" (default): the same basis as the input density matrix.
    - "eigen": the eigenbasis of the density matrix.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (array/list): For single parameter estimation (i.e., the length of drho equals 1), returns a matrix. For multiparameter estimation (i.e., the length of drho is larger than 1), returns a list of matrices.

Raises:

- TypeError: If drho is not a list.
- ValueError: If rep has an invalid value.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def SLD(rho, drho, rep="original", eps=1e-8):
    r"""
    Calculation of the symmetric logarithmic derivative (SLD) for a density matrix.

    This function computes the SLD operator $L_a$, which is determined by

    $$
    \partial_{a}\rho=\frac{1}{2}(\rho L_{a}+L_{a}\rho)
    $$

    with $\rho$ the parameterized density matrix. The entries of SLD can be calculated as 

    $$
    \langle\lambda_i|L_{a}|\lambda_j\rangle=\frac{2\langle\lambda_i| \partial_{a}\rho |\lambda_j\rangle}{\lambda_i+\lambda_j}
    $$

    for $\lambda_i~(\lambda_j) \neq 0$. If $\lambda_i=\lambda_j=0$, the entry of SLD is set to be zero.

    Args:
        rho (np.array): 
            Density matrix.
        drho (list): 
            Derivatives of the density matrix on the unknown parameters to be 
            estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
        rep (str, optional): 
            The basis for the SLDs. Options:  
                - "original" (default): basis same as input density matrix  
                - "eigen": basis same as eigenspace of density matrix
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (np.array/list): 
            For single parameter estimation (i.e., length of `drho` equals 1), returns a matrix.  
            For multiparameter estimation (i.e., length of `drho` is larger than 1), returns a list of matrices.

    Raises:
        TypeError: If `drho` is not a list.  
        ValueError: If `rep` has invalid value. 
    """

    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    num_params = len(drho)
    dim = len(rho)
    slds = [None] * num_params

    purity = np.trace(rho @ rho)

    # Handle pure state case
    if np.abs(1 - purity) < eps:
        sld_original = [2 * d for d in drho]

        for i in range(num_params):
            if rep == "original":
                slds[i] = sld_original[i]
            elif rep == "eigen":
                eigenvalues, eigenvectors = np.linalg.eig(rho)
                eigenvalues = np.real(eigenvalues)
                slds[i] = eigenvectors.conj().T @ sld_original[i] @ eigenvectors
            else:
                valid_reps = ["original", "eigen"]
                raise ValueError(f"Invalid rep value: '{rep}'. Valid options: {valid_reps}")

        return slds[0] if num_params == 1 else slds

    # Handle mixed state case
    eigenvalues, eigenvectors = np.linalg.eig(rho)
    eigenvalues = np.real(eigenvalues)

    for param_idx in range(num_params):
        sld_eigenbasis = np.zeros((dim, dim), dtype=np.complex128)

        for i in range(dim):
            for j in range(dim):
                if eigenvalues[i] + eigenvalues[j] > eps:
                    # Calculate matrix element in eigenbasis
                    numerator = 2 * (eigenvectors[:, i].conj().T @ drho[param_idx] @ eigenvectors[:, j])
                    sld_eigenbasis[i, j] = numerator / (eigenvalues[i] + eigenvalues[j])

        # Handle any potential infinities
        sld_eigenbasis[np.isinf(sld_eigenbasis)] = 0.0

        # Transform to requested basis
        if rep == "original":
            slds[param_idx] = eigenvectors @ sld_eigenbasis @ eigenvectors.conj().T
        elif rep == "eigen":
            slds[param_idx] = sld_eigenbasis
        else:
            valid_reps = ["original", "eigen"]
            raise ValueError(f"Invalid rep value: '{rep}'. Valid options: {valid_reps}")

    return slds[0] if num_params == 1 else slds
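
A commuting example that can be checked by hand (a sketch; the model and import path are assumptions): when rho and its derivative are both diagonal, the SLD is diagonal with entries (partial rho)_ii / rho_ii.

import numpy as np
from quanestimation import SLD  # import path assumed

rho = np.diag([0.6, 0.4]).astype(complex)
drho = [np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)]  # d(rho)/dx for a toy model

print(np.round(SLD(rho, drho), 6))  # diag(1/0.6, -1/0.4) ≈ diag(1.666667, -2.5)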

Right logarithmic derivative (RLD)

Calculation of the right logarithmic derivative (RLD) for a density matrix. The RLD operator \(\mathcal{R}_a\) is defined by

\[ \partial_{a}\rho=\rho \mathcal{R}_a \]

with \(\rho\) the parameterized density matrix. The entries of RLD can be calculated as

\[ \langle\lambda_i| \mathcal{R}_{a} |\lambda_j\rangle=\frac{1}{\lambda_i}\langle\lambda_i| \partial_a\rho |\lambda_j\rangle \]

for \(\lambda_i\neq 0\).

Parameters:

- rho (array), required: Density matrix.
- drho (list), required: Derivatives of the density matrix with respect to the unknown parameters to be estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
- rep (str), default "original": The basis for the RLD(s). Options:
    - "original" (default): the same basis as the input density matrix.
    - "eigen": the eigenbasis of the density matrix.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (array/list): For single parameter estimation (i.e., the length of drho equals 1), returns a matrix. For multiparameter estimation (i.e., the length of drho is larger than 1), returns a list of matrices.

Raises:

- TypeError: If drho is not a list.
- ValueError: If rep has an invalid value or the RLD does not exist.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def RLD(rho, drho, rep="original", eps=1e-8):
    r"""
    Calculation of the right logarithmic derivative (RLD) for a density matrix.
    The RLD operator $\mathcal{R}_a$ is defined by

    $$
    \partial_{a}\rho=\rho \mathcal{R}_a
    $$

    with $\rho$ the parameterized density matrix. The entries of RLD can be calculated as 

    $$
    \langle\lambda_i| \mathcal{R}_{a} |\lambda_j\rangle=\frac{1}{\lambda_i}\langle\lambda_i| 
    \partial_a\rho |\lambda_j\rangle 
    $$

    for $\lambda_i\neq 0$.

    Args:
        rho (np.array): 
            Density matrix.  
        drho (list):  
            Derivatives of the density matrix on the unknown parameters to be 
            estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
        rep (str, optional): 
            The basis for the RLD(s). Options:  
                - "original" (default): basis same as input density matrix.  
                - "eigen": basis same as eigenspace of density matrix.
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (np.array/list): 
            For single parameter estimation (i.e., length of `drho` equals 1), returns a matrix.  
            For multiparameter estimation (i.e., length of `drho` is larger than 1), returns a list of matrices.

    Raises:
        TypeError: If `drho` is not a list.
        ValueError: If `rep` has invalid value or RLD doesn't exist.
    """

    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    num_params = len(drho)
    dim = len(rho)
    rld_list = [None] * num_params

    eigenvalues, eigenvectors = np.linalg.eig(rho)
    eigenvalues = np.real(eigenvalues)

    for param_idx in range(num_params):
        rld_eigenbasis = np.zeros((dim, dim), dtype=np.complex128)

        for i in range(dim):
            for j in range(dim):
                # Calculate matrix element in eigenbasis
                element = (
                    eigenvectors[:, i].conj().T 
                    @ drho[param_idx] 
                    @ eigenvectors[:, j]
                )

                if np.abs(eigenvalues[i]) > eps:
                    rld_eigenbasis[i, j] = element / eigenvalues[i]
                else:
                    if np.abs(element) > eps:
                        raise ValueError(
                            "RLD does not exist. It only exists when the support of "
                            "drho is contained in the support of rho."
                        )

        # Handle any potential infinities
        rld_eigenbasis[np.isinf(rld_eigenbasis)] = 0.0

        # Transform to requested basis
        if rep == "original":
            rld_list[param_idx] = (
                eigenvectors 
                @ rld_eigenbasis 
                @ eigenvectors.conj().T
            )
        elif rep == "eigen":
            rld_list[param_idx] = rld_eigenbasis
        else:
            valid_reps = ["original", "eigen"]
            raise ValueError(
                f"Invalid rep value: '{rep}'. Valid options: {', '.join(valid_reps)}"
            )

    return rld_list[0] if num_params == 1 else rld_list
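
For the same commuting, full-rank example used for the SLD above (a sketch; import path assumed), the defining relation gives R = rho^{-1} (partial rho), so the RLD coincides with the diagonal result:

import numpy as np
from quanestimation import RLD  # import path assumed

rho = np.diag([0.6, 0.4]).astype(complex)
drho = [np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)]

print(np.round(RLD(rho, drho), 6))  # diag(1/0.6, -1/0.4)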

Left logarithmic derivative (LLD)

Calculation of the left logarithmic derivative (LLD) for a density matrix \(\rho\).

The LLD operator \(\mathcal{R}_a^{\dagger}\) is defined by

\[ \partial_{a}\rho=\mathcal{R}_a^{\dagger}\rho. \]

The entries of LLD can be calculated as

\[ \langle\lambda_i| \mathcal{R}_{a}^{\dagger} |\lambda_j\rangle=\frac{1}{\lambda_j}\langle\lambda_i| \partial_a\rho |\lambda_j\rangle \]

for \(\lambda_j\neq 0\).

Parameters:

- rho (array), required: Density matrix.
- drho (list), required: Derivatives of the density matrix with respect to the unknown parameters to be estimated. For example, drho[0] is the derivative matrix with respect to the first parameter.
- rep (str), default "original": The basis for the LLD(s). Options:
    - "original" (default): the same basis as the input density matrix.
    - "eigen": the eigenbasis of the density matrix.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (array/list): For single parameter estimation (i.e., the length of drho equals 1), returns a matrix. For multiparameter estimation (i.e., the length of drho is larger than 1), returns a list of matrices.

Raises:

- TypeError: If drho is not a list.
- ValueError: If rep has an invalid value or the LLD does not exist.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def LLD(rho, drho, rep="original", eps=1e-8):
    r"""
    Calculation of the left logarithmic derivative (LLD) for a density matrix $\rho$.

    The LLD operator $\mathcal{R}_a^{\dagger}$ is defined by

    $$
    \partial_{a}\rho=\mathcal{R}_a^{\dagger}\rho.
    $$

    The entries of LLD can be calculated as 

    $$
    \langle\lambda_i| \mathcal{R}_{a}^{\dagger} |\lambda_j\rangle=\frac{1}{\lambda_j}\langle\lambda_i| 
    \partial_a\rho |\lambda_j\rangle 
    $$

    for $\lambda_j\neq 0$.

    Args: 
        rho (np.array): 
            Density matrix.
        drho (list): 
            Derivatives of the density matrix on the unknown parameters to be estimated. 
            For example, drho[0] is the derivative matrix with respect to the first parameter.
        rep (str, optional): 
            The basis for the LLD(s). Options:  
                - "original" (default): basis same as input density matrix.  
                - "eigen": basis same as eigenspace of density matrix.
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (np.array/list): 
            For single parameter estimation (i.e., length of `drho` equals 1), returns a matrix.  
            For multiparameter estimation (i.e., length of `drho` is larger than 1), returns a list of matrices.

    Raises:
        TypeError: If `drho` is not a list.  
        ValueError: If `rep` has invalid value or LLD doesn't exist.  
    """

    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    param_num = len(drho)
    dim = len(rho)
    lld_list = [None] * param_num

    eigenvalues, eigenvectors = np.linalg.eig(rho)
    eigenvalues = np.real(eigenvalues)

    for param_idx in range(param_num):
        lld_eigenbasis = np.zeros((dim, dim), dtype=np.complex128)

        for i in range(dim):
            for j in range(dim):
                # Calculate matrix element in eigenbasis
                element = (
                    eigenvectors[:, i].conj().T 
                    @ drho[param_idx] 
                    @ eigenvectors[:, j]
                )

                if np.abs(eigenvalues[j]) > eps:
                    lld_eigenbasis[i, j] = element / eigenvalues[j]
                else:
                    if np.abs(element) > eps:
                        raise ValueError(
                            "LLD does not exist. It only exists when the support of "
                            "drho is contained in the support of rho."
                        )

        # Handle any potential infinities
        lld_eigenbasis[np.isinf(lld_eigenbasis)] = 0.0

        # Transform to requested basis
        if rep == "original":
            lld_list[param_idx] = (
                eigenvectors 
                @ lld_eigenbasis 
                @ eigenvectors.conj().T
            )
        elif rep == "eigen":
            lld_list[param_idx] = lld_eigenbasis
        else:
            valid_reps = ["original", "eigen"]
            raise ValueError(
                f"Invalid rep value: '{rep}'. Valid options: {', '.join(valid_reps)}"
            )

    return lld_list[0] if param_num == 1 else lld_list

Quantum Fisher information matrix (QFIM)

Calculate the quantum Fisher information (QFI) and quantum Fisher information matrix (QFIM) for all types.

The entry of QFIM \(\mathcal{F}\) is defined as:

\[ \mathcal{F}_{ab}=\frac{1}{2}\mathrm{Tr}(\rho\{L_a, L_b\}) \]

with \(L_a, L_b\) being SLD operators.

Alternatively:

\[ \mathcal{F}_{ab}=\mathrm{Tr}(\rho \mathcal{R}_a \mathcal{R}^{\dagger}_b) \]

with \(\mathcal{R}_a\) being the RLD or LLD operator.

Parameters:

- rho (array), required: Density matrix.
- drho (list), required: Derivatives of the density matrix with respect to the unknown parameters. Each element in the list is a matrix of the same dimension as rho and represents the partial derivative of the density matrix with respect to one parameter. For example, drho[0] is the derivative with respect to the first parameter.
- LDtype (str), default "SLD": Specifies the type of logarithmic derivative to use for the QFI/QFIM calculation:
    - "SLD": Symmetric Logarithmic Derivative (default).
    - "RLD": Right Logarithmic Derivative.
    - "LLD": Left Logarithmic Derivative.
- exportLD (bool), default False: Whether to export the values of the logarithmic derivatives.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (float/array): For single parameter estimation (i.e., the length of drho equals 1), returns the QFI. For multiparameter estimation (i.e., the length of drho is larger than 1), returns the QFIM.

Raises:

- TypeError: If drho is not a list.
- ValueError: If LDtype is not one of the supported types ("SLD", "RLD", "LLD").

Source code in quanestimation/AsymptoticBound/CramerRao.py
def QFIM(rho, drho, LDtype="SLD", exportLD=False, eps=1e-8):
    r"""
    Calculate the quantum Fisher information (QFI) and quantum Fisher 
    information matrix (QFIM) for all types.

    The entry of QFIM $\mathcal{F}$ is defined as:

    $$
    \mathcal{F}_{ab}=\frac{1}{2}\mathrm{Tr}(\rho\{L_a, L_b\})
    $$

    with $L_a, L_b$ being SLD operators.

    Alternatively:

    $$
    \mathcal{F}_{ab}=\mathrm{Tr}(\rho \mathcal{R}_a \mathcal{R}^{\dagger}_b)
    $$

    with $\mathcal{R}_a$ being the RLD or LLD operator.

    Args:
        rho (np.array): 
            Density matrix.
        drho (list): 
            Derivatives of the density matrix with respect to the unknown parameters. 
            Each element in the list is a matrix of the same dimension as `rho` and 
            represents the partial derivative of the density matrix with respect to 
            one parameter. For example, `drho[0]` is the derivative with respect to 
            the first parameter.
        LDtype (str, optional): 
            Specifies the type of logarithmic derivative to use for QFI/QFIM calculation:  
                - "SLD": Symmetric Logarithmic Derivative (default).  
                - "RLD": Right Logarithmic Derivative.  
                - "LLD": Left Logarithmic Derivative.  
        exportLD (bool, optional): 
            Whether to export the values of logarithmic derivatives.  
        eps (float, optional): 
            Machine epsilon.  

    Returns:
        (float/np.array): 
            For single parameter estimation (i.e., length of `drho` equals 1), returns QFI.  
            For multiparameter estimation (i.e., length of `drho` is larger than 1), returns QFIM.  

    Raises:
        TypeError: If `drho` is not a list.
        ValueError: If `LDtype` is not one of the supported types ("SLD", "RLD", "LLD").        
    """

    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    num_params = len(drho)
    qfim_result = None
    log_derivatives = None

    # Single parameter estimation
    if num_params == 1:
        if LDtype == "SLD":
            sld = SLD(rho, drho, eps=eps)
            anticommutator = sld @ sld + sld @ sld
            qfim_result = np.real(0.5 * np.trace(rho @ anticommutator))
        elif LDtype == "RLD":
            rld = RLD(rho, drho, eps=eps)
            qfim_result = np.real(np.trace(rho @ rld @ rld.conj().T))
        elif LDtype == "LLD":
            lld = LLD(rho, drho, eps=eps)
            qfim_result = np.real(np.trace(rho @ lld @ lld.conj().T))
        else:
            valid_types = ["SLD", "RLD", "LLD"]
            raise ValueError(
                f"Invalid LDtype: '{LDtype}'. Supported types: {', '.join(valid_types)}"
            )
        log_derivatives = sld if LDtype == "SLD" else rld if LDtype == "RLD" else lld

    # Multiparameter estimation
    else:
        if LDtype == "SLD":
            qfim_result = np.zeros((num_params, num_params))
            sld_list = SLD(rho, drho, eps=eps)
            for i in range(num_params):
                for j in range(i, num_params):
                    anticommutator = sld_list[i] @ sld_list[j] + sld_list[j] @ sld_list[i]
                    qfim_result[i, j] = np.real(0.5 * np.trace(rho @ anticommutator))
                    qfim_result[j, i] = qfim_result[i, j]
            log_derivatives = sld_list

        elif LDtype == "RLD":
            qfim_result = np.zeros((num_params, num_params), dtype=np.complex128)
            rld_list = RLD(rho, drho, eps=eps)
            for i in range(num_params):
                for j in range(i, num_params):
                    term = np.trace(rho @ rld_list[i] @ rld_list[j].conj().T)
                    qfim_result[i, j] = term
                    qfim_result[j, i] = term.conj()
            log_derivatives = rld_list

        elif LDtype == "LLD":
            qfim_result = np.zeros((num_params, num_params), dtype=np.complex128)
            lld_list = LLD(rho, drho, eps=eps)
            for i in range(num_params):
                for j in range(i, num_params):
                    term = np.trace(rho @ lld_list[i] @ lld_list[j].conj().T)
                    qfim_result[i, j] = term
                    qfim_result[j, i] = term.conj()
            log_derivatives = lld_list

        else:
            valid_types = ["SLD", "RLD", "LLD"]
            raise ValueError(
                f"Invalid LDtype: '{LDtype}'. Supported types: {', '.join(valid_types)}"
            )

    if exportLD:
        return qfim_result, log_derivatives
    return qfim_result
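
A pure-state sketch (the parameterization and import path are assumptions): for |psi(x)> = (cos(x/2), sin(x/2)) the QFI is 4(<dpsi|dpsi> - |<psi|dpsi>|^2) = 1, which the SLD-based default should reproduce.

import numpy as np
from quanestimation import QFIM  # import path assumed

x = np.pi / 4
psi = np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)
dpsi = 0.5 * np.array([-np.sin(x / 2), np.cos(x / 2)], dtype=complex)

rho = np.outer(psi, psi.conj())
drho = [np.outer(dpsi, psi.conj()) + np.outer(psi, dpsi.conj())]

print(QFIM(rho, drho))  # 1.0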

Quantum Fisher information matrix with Kraus operators

Calculation of the quantum Fisher information (QFI) and quantum Fisher information matrix (QFIM) for a quantum channel described by Kraus operators.

The quantum channel is given by

\[ \rho=\sum_{i} K_i \rho_0 K_i^{\dagger}, \]

where \(\rho_0\) is the initial state and \(\{K_i\}\) are the Kraus operators.

The derivatives of the density matrix \(\partial_a\rho\) are calculated from the derivatives of the Kraus operators \(\{\partial_a K_i\}\) as

\[ \partial_a\rho=\sum_{i}\left[(\partial_a K_i)\rho_0 K_i^{\dagger}+K_i\rho_0(\partial_a K_i)^{\dagger}\right]. \]

Then the QFI (QFIM) is calculated via the function QFIM with the evolved state \(\rho\) and its derivatives \(\{\partial_a\rho\}\).

Parameters:

- rho0 (array), required: Initial density matrix.
- K (list), required: Kraus operators.
- dK (list), required: Derivatives of the Kraus operators. It is a nested list whose first index labels the Kraus operator and whose second index labels the parameter; for example, dK[0][1] is the derivative of the first Kraus operator with respect to the second parameter.
- LDtype (str), default "SLD": Type of QFI (QFIM) to use as the objective function. Options:
    - "SLD" (default): QFI (QFIM) based on the symmetric logarithmic derivative.
    - "RLD": QFI (QFIM) based on the right logarithmic derivative.
    - "LLD": QFI (QFIM) based on the left logarithmic derivative.
- exportLD (bool), default False: Whether to export the values of the logarithmic derivatives.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (float/array): For single parameter estimation (one unknown parameter), the output is the QFI; for multiparameter estimation, it returns the QFIM.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def QFIM_Kraus(rho0, K, dK, LDtype="SLD", exportLD=False, eps=1e-8):
    r"""
    Calculation of the quantum Fisher information (QFI) and quantum Fisher 
    information matrix (QFIM) for a quantum channel described by Kraus operators.

    The quantum channel is given by

    $$
    \rho=\sum_{i} K_i \rho_0 K_i^{\dagger},
    $$

    where $\rho_0$ is the initial state and $\{K_i\}$ are the Kraus operators.

    The derivatives of the density matrix $\partial_a\rho$ are calculated from the 
    derivatives of the Kraus operators $\{\partial_a K_i\}$ as

    $$
    \partial_a\rho=\sum_{i}\left[(\partial_a K_i)\rho_0 K_i^{\dagger}+K_i\rho_0(\partial_a K_i)^{\dagger}\right].
    $$

    Then the QFI (QFIM) is calculated via the function `QFIM` with the evolved state 
    $\rho$ and its derivatives $\{\partial_a\rho\}$.

    Args:
        rho0 (np.array): 
            Initial density matrix.
        K (list): 
            Kraus operators.
        dK (list): 
            Derivatives of the Kraus operators. It is a nested list where the first index 
            corresponds to the Kraus operator and the second index corresponds to the parameter. 
            For example, `dK[0][1]` is the derivative of the first Kraus operator with respect 
            to the second parameter.
        LDtype (str, optional): 
            Types of QFI (QFIM) can be set as the objective function. Options:  
                - "SLD" (default): QFI (QFIM) based on symmetric logarithmic derivative.  
                - "RLD": QFI (QFIM) based on right logarithmic derivative.  
                - "LLD": QFI (QFIM) based on left logarithmic derivative.  
        exportLD (bool, optional): 
            Whether to export the values of logarithmic derivatives.  
        eps (float, optional): 
            Machine epsilon.  

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of `dK[0]` is equal to one), the output is QFI 
            and for multiparameter estimation (the length of `dK[0]` is more than one), it returns QFIM.
    """

    # Transpose dK: from [operators][parameters] to [parameters][operators]
    dK_transposed = [
        [dK[i][j] for i in range(len(K))] 
        for j in range(len(dK[0]))
    ]

    # Compute the evolved density matrix
    rho = sum(Ki @ rho0 @ Ki.conj().T for Ki in K)

    # Compute the derivatives of the density matrix
    drho = [
        sum(
            dKi @ rho0 @ Ki.conj().T + Ki @ rho0 @ dKi.conj().T
            for Ki, dKi in zip(K, dKj)
        )
        for dKj in dK_transposed
    ]

    return QFIM(rho, drho, LDtype=LDtype, exportLD=exportLD, eps=eps)
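
A unitary phase-encoding sketch (the channel and import path are assumptions): a single Kraus operator U(x) = diag(1, e^{ix}) acting on |+><+| gives QFI = 1. Note that dK follows the same [Kraus operator][parameter] nesting as in Kraus() above.

import numpy as np
from quanestimation import QFIM_Kraus  # import path assumed

x = 0.2
U = np.array([[1.0, 0.0], [0.0, np.exp(1j * x)]])        # single (unitary) Kraus operator
dU = np.array([[0.0, 0.0], [0.0, 1j * np.exp(1j * x)]])  # dU/dx

rho0 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]], dtype=complex)  # |+><+|

print(QFIM_Kraus(rho0, [U], [[dU]]))  # 1.0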

Quantum Fisher information matrix in Bloch representation

Calculation of the quantum Fisher information (QFI) and quantum Fisher information matrix (QFIM) in Bloch representation.

The Bloch vector representation of a quantum state is defined as

\[ \rho = \frac{1}{d}\left(\mathbb{I} + \sum_{i=1}^{d^2-1} r_i \lambda_i\right), \]

where \(\lambda_i\) are the generators of SU(d) group.

Parameters:

- r (array), required: Parameterized Bloch vector.
- dr (list), required: Derivatives of the Bloch vector with respect to the unknown parameters. Each element in the list is a vector of the same length as r and represents the partial derivative of the Bloch vector with respect to one parameter. For example, dr[0] is the derivative with respect to the first parameter.
- eps (float), default 1e-08: Machine epsilon.

Returns:

- (float/array): For single parameter estimation (the length of dr is equal to one), the output is the QFI; for multiparameter estimation (the length of dr is more than one), it returns the QFIM.

Raises:

- TypeError: If dr is not a list.
- ValueError: If the dimension of the Bloch vector is invalid.

Source code in quanestimation/AsymptoticBound/CramerRao.py
def QFIM_Bloch(r, dr, eps=1e-8):
    r"""
    Calculation of the quantum Fisher information (QFI) and quantum Fisher 
    information matrix (QFIM) in Bloch representation.

    The Bloch vector representation of a quantum state is defined as

    $$
    \rho = \frac{1}{d}\left(\mathbb{I} + \sum_{i=1}^{d^2-1} r_i \lambda_i\right),
    $$

    where $\lambda_i$ are the generators of SU(d) group.

    Args:
        r (np.array): 
            Parameterized Bloch vector.
        dr (list): 
            Derivatives of the Bloch vector with respect to the unknown parameters. 
            Each element in the list is a vector of the same length as `r` and 
            represents the partial derivative of the Bloch vector with respect to 
            one parameter. For example, `dr[0]` is the derivative with respect to 
            the first parameter.
        eps (float, optional): 
            Machine epsilon.  

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of `dr` is equal to one), 
            the output is QFI and for multiparameter estimation (the length of `dr` 
            is more than one), it returns QFIM.

    Raises:
        TypeError: If `dr` is not a list.  
        ValueError: If the dimension of the Bloch vector is invalid.  
    """

    if not isinstance(dr, list):
        raise TypeError("dr must be a list of derivative vectors")

    num_params = len(dr)
    qfim_result = np.zeros((num_params, num_params))

    # Calculate dimension from Bloch vector length
    dim_float = np.sqrt(len(r) + 1)
    if dim_float.is_integer():
        dim = int(dim_float)
    else:
        raise ValueError("Invalid Bloch vector dimension")

    # Get SU(N) generators
    lambda_generators = suN_generator(dim)

    # Handle single-qubit system
    if dim == 2:
        r_norm = np.linalg.norm(r) ** 2

        # Pure state case
        if np.abs(r_norm - 1.0) < eps:
            for i in range(num_params):
                for j in range(i, num_params):
                    qfim_result[i, j] = np.real(np.inner(dr[i], dr[j]))
                    qfim_result[j, i] = qfim_result[i, j]
        # Mixed state case
        else:
            for i in range(num_params):
                for j in range(i, num_params):
                    term1 = np.inner(dr[i], dr[j])
                    term2 = (np.inner(r, dr[i]) * np.inner(r, dr[j])) / (1 - r_norm)
                    qfim_result[i, j] = np.real(term1 + term2)
                    qfim_result[j, i] = qfim_result[i, j]
    # Handle higher-dimensional systems
    else:
        # Reconstruct density matrix from Bloch vector
        rho = np.eye(dim, dtype=np.complex128) / dim
        for idx in range(dim**2 - 1):
            coeff = np.sqrt(dim * (dim - 1) / 2) * r[idx] / dim
            rho += coeff * lambda_generators[idx]

        # Calculate G matrix
        G = np.zeros((dim**2 - 1, dim**2 - 1), dtype=np.complex128)
        for i in range(dim**2 - 1):
            for j in range(i, dim**2 - 1):
                anticommutator = (
                    lambda_generators[i] @ lambda_generators[j] + 
                    lambda_generators[j] @ lambda_generators[i]
                )
                G[i, j] = 0.5 * np.trace(rho @ anticommutator)
                G[j, i] = G[i, j]

        # Calculate matrix for inversion
        r_vec = np.array(r).reshape(len(r), 1)
        mat = G * dim / (2 * (dim - 1)) - r_vec @ r_vec.T
        mat_inv = inv(mat)

        # Calculate QFIM
        for i in range(num_params):
            for j in range(i, num_params):
                dr_i = np.array(dr[i]).reshape(1, len(r))
                dr_j = np.array(dr[j]).reshape(len(r), 1)
                qfim_result[i, j] = np.real(dr_i @ mat_inv @ dr_j)[0, 0]
                qfim_result[j, i] = qfim_result[i, j]

    return qfim_result[0, 0] if num_params == 1 else qfim_result
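
A single-qubit sketch (the parameterization and import path are assumptions): for the pure-state Bloch vector r(x) = (sin x, 0, cos x) the QFI reduces to |dr/dx|^2 = 1.

import numpy as np
from quanestimation import QFIM_Bloch  # import path assumed

x = 0.4
r = np.array([np.sin(x), 0.0, np.cos(x)])      # pure state: |r| = 1
dr = [np.array([np.cos(x), 0.0, -np.sin(x)])]  # dr/dx

print(QFIM_Bloch(r, dr))  # 1.0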

Quantum Fisher information matrix with Gaussian states

Calculation of the quantum Fisher information (QFI) and quantum Fisher information matrix (QFIM) for Gaussian states.

The Gaussian state is characterized by its first-order moment (displacement vector) and second-order moment (covariance matrix). The QFIM is calculated using the method described in [1].

Parameters:

- R (array), required: First-order moment (displacement vector).
- dR (list), required: Derivatives of the first-order moment with respect to the unknown parameters. Each element in the list is a vector of the same length as R and represents the partial derivative of the displacement vector with respect to one parameter. For example, dR[0] is the derivative with respect to the first parameter.
- D (array), required: Second-order moment (covariance matrix).
- dD (list), required: Derivatives of the second-order moment with respect to the unknown parameters. Each element in the list is a matrix of the same dimension as D and represents the partial derivative of the covariance matrix with respect to one parameter. For example, dD[0] is the derivative with respect to the first parameter.

Returns:

- (float/array): For single parameter estimation (the length of dR is equal to one), the output is the QFI; for multiparameter estimation (the length of dR is more than one), it returns the QFIM.

Notes

This function follows the approach from: [1] Monras, A., Phase space formalism for quantum estimation of Gaussian states, arXiv:1303.3682 (2013).

Source code in quanestimation/AsymptoticBound/CramerRao.py
def QFIM_Gauss(R, dR, D, dD):
    r"""
    Calculation of the quantum Fisher information (QFI) and quantum 
    Fisher information matrix (QFIM) for Gaussian states.

    The Gaussian state is characterized by its first-order moment (displacement vector) 
    and second-order moment (covariance matrix). The QFIM is calculated using the 
    method described in [1].

    Args:
        R (np.array): 
            First-order moment (displacement vector).
        dR (list): 
            Derivatives of the first-order moment with respect to the unknown parameters. 
            Each element in the list is a vector of the same length as `R` and represents the partial 
            derivative of the displacement vector with respect to one parameter. For example, `dR[0]` 
            is the derivative with respect to the first parameter.
        D (np.array): 
            Second-order moment (covariance matrix).
        dD (list): 
            Derivatives of the second-order moment with respect to the unknown parameters. 
            Each element in the list is a matrix of the same dimension as `D` and 
            represents the partial derivative of the covariance matrix with respect to 
            one parameter. For example, `dD[0]` is the derivative with respect to 
            the first parameter.

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of `dR` is equal to one), 
            the output is QFI and for multiparameter estimation (the length of `dR` 
            is more than one), it returns QFIM.

    Notes:
        This function follows the approach from:
        [1] Monras, A., Phase space formalism for quantum estimation of Gaussian states, arXiv:1303.3682 (2013).
    """

    num_params = len(dR)
    m = len(R) // 2  # Number of modes
    qfim = np.zeros((num_params, num_params))

    # Compute the covariance matrix from the second-order moments and displacement
    cov_matrix = np.array(
        [
            [D[i][j] - R[i] * R[j] for j in range(2 * m)]
            for i in range(2 * m)
        ]
    )

    # Compute the derivatives of the covariance matrix
    dcov = []
    for k in range(num_params):
        dcov_k = np.zeros((2 * m, 2 * m))
        for i in range(2 * m):
            for j in range(2 * m):
                dcov_k[i, j] = dD[k][i][j] - dR[k][i] * R[j] - R[i] * dR[k][j]
        dcov.append(dcov_k)

    # Compute the square root of the covariance matrix
    cov_sqrt = sqrtm(cov_matrix)

    # Define the symplectic matrix J for m modes
    J_block = np.array([[0, 1], [-1, 0]])
    J = np.kron(J_block, np.eye(m))

    # Compute the matrix B = cov_sqrt @ J @ cov_sqrt
    B = cov_sqrt @ J @ cov_sqrt

    # Permutation matrix to rearrange the basis
    P = np.eye(2 * m)
    # Rearrange the basis: first all q's then all p's
    P = np.vstack([P[::2], P[1::2]])

    # Schur decomposition of B
    _, Q = schur(B)
    eigenvalues = eigvals(B)
    # Extract the imaginary parts of every other eigenvalue
    c = eigenvalues[::2].imag

    # Diagonal matrix with entries 1/sqrt(c_i) for each mode
    diag_inv_sqrt = np.diagflat(1.0 / np.sqrt(c))

    # Construct the matrix S
    temp1 = J @ cov_sqrt @ Q
    temp2 = P @ np.kron(np.array([[0, 1], [-1, 0]]), -diag_inv_sqrt)
    S = inv(temp1 @ temp2).T @ P.T

    # Define the basis matrices for the Gaussian representation
    sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
    identity = np.eye(2)
    a_gauss = [1j * sigma_y, sigma_z, identity, sigma_x]

    # Construct the basis matrices for the m-mode system
    es = []
    for i in range(m):
        row = []
        for j in range(m):
            e_ij = np.eye(1, m * m, m * i + j).reshape(m, m)
            row.append(e_ij)
        es.append(row)

    # As: a list of two-mode basis matrices for each of the four types
    As = []
    for a in a_gauss:
        A_type = []
        for i in range(m):
            for j in range(m):
                A_ij = np.kron(es[i][j], a) / np.sqrt(2)
                A_type.append(A_ij)
        As.append(A_type)

    # Compute the coefficients g for each parameter and each basis matrix
    g = []
    for k in range(num_params):
        g_k = []
        for A_type in As:
            g_type = []
            for A_mat in A_type:
                term = np.trace(inv(S) @ dcov[k] @ inv(S.T) @ A_mat.T)
                g_type.append(term)
            g_k.append(g_type)
        g.append(g_k)

    # Initialize the matrices G for each parameter
    G_matrices = [np.zeros((2 * m, 2 * m), dtype=np.complex128) for _ in range(num_params)]

    # Construct the matrices G for each parameter
    for k in range(num_params):
        for i in range(m):
            for j in range(m):
                for l in range(4):  # For each of the four types
                    denom = 4 * c[i] * c[j] + (-1) ** (l + 1)
                    A_l_ij = As[l][i * m + j]
                    G_matrices[k] += np.real(
                        g[k][l][i * m + j] / denom * inv(S.T) @ A_l_ij @ inv(S)
                    )

    # Compute the QFIM
    for i in range(num_params):
        for j in range(num_params):
            term1 = np.trace(G_matrices[i] @ dcov[j])
            term2 = dR[i] @ inv(cov_matrix) @ dR[j]
            qfim[i, j] = np.real(term1 + term2)

    if num_params == 1:
        return qfim[0, 0]
    else:
        return qfim
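
Example (a minimal sketch; it assumes `QFIM_Gauss` is importable from the top-level quanestimation package and that the covariance matrix is normalized so that the vacuum variance is 1/2; the numbers are purely illustrative):

import numpy as np
from quanestimation import QFIM_Gauss

# Single-mode thermal state with an unknown displacement along the first quadrature.
nbar = 0.5                                        # mean thermal photon number (illustrative)
cov = 0.5 * (2 * nbar + 1) * np.eye(2)            # thermal covariance matrix (assumed convention)

theta = 0.0                                       # expansion point of the unknown displacement
R = np.array([theta, 0.0])                        # first-order moment
dR = [np.array([1.0, 0.0])]                       # dR/dtheta

D = cov + np.outer(R, R)                          # second-order moment, D = cov + R R^T
dD = [np.outer(dR[0], R) + np.outer(R, dR[0])]    # dD/dtheta (cov is theta-independent here)

F = QFIM_Gauss(R, dR, D, dD)                      # single parameter, so the QFI (a float) is returned
print(F)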

Holevo Cramér-Rao bound

Calculate the Holevo Cramer-Rao bound (HCRB) via semidefinite programming (SDP).

The HCRB is defined as:

\[ \min_{\{X_i\}} \left\{ \mathrm{Tr}(W\,\mathrm{Re}Z) + \mathrm{Tr}(| \sqrt{W}\,\mathrm{Im}Z\,\sqrt{W} |) \right\}, \]

where \(Z_{ij} = \mathrm{Tr}(\rho X_i X_j)\) and \(W\) is the weight matrix.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `rho` | array | Density matrix. | required |
| `drho` | list | Derivatives of the density matrix with respect to unknown parameters. For example, `drho[0]` is the derivative with respect to the first parameter. | required |
| `W` | array | Weight matrix for the bound. | required |
| `eps` | float | Machine epsilon for numerical stability. | 1e-08 |

Returns:

| Type | Description |
| --- | --- |
| float | The value of the Holevo Cramer-Rao bound. |

Raises:

| Type | Description |
| --- | --- |
| TypeError | If `drho` is not a list. |

Notes

In the single-parameter scenario, the HCRB is equivalent to the QFI. For a rank-one weight matrix, the HCRB is equivalent to the inverse of the QFIM.

Source code in quanestimation/AsymptoticBound/AnalogCramerRao.py
def HCRB(rho, drho, W, eps=1e-8):
    r"""
    Calculate the Holevo Cramer-Rao bound (HCRB) via semidefinite programming (SDP).

    The HCRB is defined as:

    $$
    \min_{\{X_i\}} \left\{ \mathrm{Tr}(W\,\mathrm{Re}Z) + \mathrm{Tr}(| \sqrt{W}\,\mathrm{Im}Z\,\sqrt{W} |) \right\}, 
    $$

    where $Z_{ij} = \mathrm{Tr}(\rho X_i X_j)$ and $W$ is the weight matrix.

    Args:
        rho (np.array): 
            Density matrix.
        drho (list): 
            Derivatives of the density matrix with respect to unknown parameters.  
            For example, `drho[0]` is the derivative with respect to the first parameter.
        W (np.array): 
            Weight matrix for the bound.
        eps (float, optional): 
            Machine epsilon for numerical stability.

    Returns: 
        (float): 
            The value of the Holevo Cramer-Rao bound.

    Raises:
        TypeError: If `drho` is not a list.

    Notes:
        In the single-parameter scenario, the HCRB is equivalent to the QFI.  
        For a rank-one weight matrix, the HCRB is equivalent to the inverse of the QFIM.
    """

    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    if len(drho) == 1:
        print(
            "In single parameter scenario, HCRB is equivalent to QFI. "
            "Returning QFI value."
        )
        return QFIM(rho, drho, eps=eps)

    if matrix_rank(W) == 1:
        print(
            "For rank-one weight matrix, HCRB is equivalent to QFIM. "
            "Returning Tr(W @ inv(QFIM))."
        )
        qfim = QFIM(rho, drho, eps=eps)
        return np.trace(W @ np.linalg.pinv(qfim))
    dim = len(rho)
    num = dim * dim
    num_params = len(drho)

    # Generate basis matrices
    basis = [np.identity(dim)] + suN_generator(dim)
    basis = [b / np.sqrt(2) for b in basis]

    # Compute vectorized derivatives
    vec_drho = []
    for param_idx in range(num_params):
        components = [
            np.real(np.trace(drho[param_idx] @ basis_mat))
            for basis_mat in basis
        ]
        vec_drho.append(np.array(components))

    # Compute S matrix
    S = np.zeros((num, num), dtype=np.complex128)
    for i in range(num):
        for j in range(num):
            S[i, j] = np.trace(basis[i] @ basis[j] @ rho)

    # Regularize and factor S
    precision = len(str(int(1 / eps))) - 1
    lu, d, _ = sp.linalg.ldl(S.round(precision))
    R = (lu @ sp.linalg.sqrtm(d)).conj().T

    # Define optimization variables
    V = cp.Variable((num_params, num_params))
    X = cp.Variable((num, num_params))

    # Define constraints
    constraints = [
        cp.bmat([
            [V, X.T @ R.conj().T],
            [R @ X, np.identity(num)]
        ]) >> 0
    ]

    # Add linear constraints
    for i in range(num_params):
        for j in range(num_params):
            constraint_val = X[:, i].T @ vec_drho[j]
            if i == j:
                constraints.append(constraint_val == 1)
            else:
                constraints.append(constraint_val == 0)

    # Solve the optimization problem
    problem = cp.Problem(cp.Minimize(cp.trace(W @ V)), constraints)
    problem.solve()

    return problem.value
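
Example (a minimal two-parameter sketch; it assumes `HCRB` is importable from the top-level quanestimation package and that cvxpy with a default SDP solver is installed; the qubit model and weight matrix are purely illustrative):

import numpy as np
from quanestimation import HCRB

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

# Qubit state rho = (I + x1*sigma_x + x2*sigma_y)/2 evaluated at (x1, x2) = (0.4, 0.3)
rho = 0.5 * (np.eye(2) + 0.4 * sigma_x + 0.3 * sigma_y)
drho = [0.5 * sigma_x, 0.5 * sigma_y]             # derivatives with respect to x1 and x2
W = np.eye(2)                                     # weight matrix

print(HCRB(rho, drho, W))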

Nagaoka-Hayashi bound

Calculation of the Nagaoka-Hayashi bound (NHB) via the semidefinite program (SDP).

The NHB is defined as:

\[ \min_{X} \left\{ \mathrm{Tr}[W \mathrm{Re}(Z)] + \|\sqrt{W} \mathrm{Im}(Z) \sqrt{W}\|_1 \right\}, \]

where \(Z_{ij} = \mathrm{Tr}(\rho X_i X_j)\) and \(W\) is the weight matrix.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `rho` | array | Density matrix. | required |
| `drho` | list | Derivatives of the density matrix with respect to unknown parameters. For example, `drho[0]` is the derivative with respect to the first parameter. | required |
| `W` | array | Weight matrix for the bound. | required |

Returns:

| Type | Description |
| --- | --- |
| float | The value of the Nagaoka-Hayashi bound. |

Raises:

| Type | Description |
| --- | --- |
| TypeError | If `drho` is not a list. |

Source code in quanestimation/AsymptoticBound/AnalogCramerRao.py
def NHB(rho, drho, W):
    r"""
    Calculation of the Nagaoka-Hayashi bound (NHB) via the semidefinite program (SDP).

    The NHB is defined as:

    $$
    \min_{X} \left\{ \mathrm{Tr}[W \mathrm{Re}(Z)] + \|\sqrt{W} \mathrm{Im}(Z) \sqrt{W}\|_1 \right\}, 
    $$

    where $Z_{ij} = \mathrm{Tr}(\rho X_i X_j)$ and $W$ is the weight matrix.

    Args:
        rho (np.array): 
            Density matrix.
        drho (list): 
            Derivatives of the density matrix with respect to unknown parameters.  
            For example, `drho[0]` is the derivative with respect to the first parameter.
        W (np.array): 
            Weight matrix for the bound.

    Returns: 
        (float): 
            The value of the Nagaoka-Hayashi bound.

    Raises:
        TypeError: 
            If `drho` is not a list.
    """
    if not isinstance(drho, list):
        raise TypeError("drho must be a list of derivative matrices")

    dim = len(rho)
    num_params = len(drho)

    # Initialize a temporary matrix for L components
    L_temp = [[None for _ in range(num_params)] for _ in range(num_params)]

    # Create Hermitian variables for the upper triangle and mirror to lower triangle
    for i in range(num_params):
        for j in range(i, num_params):
            L_temp[i][j] = cp.Variable((dim, dim), hermitian=True)
            if i != j:
                L_temp[j][i] = L_temp[i][j]

    # Construct the block matrix L
    L_blocks = [cp.hstack(L_temp[i]) for i in range(num_params)]
    L = cp.vstack(L_blocks)

    # Create Hermitian variables for X
    X = [cp.Variable((dim, dim), hermitian=True) for _ in range(num_params)]

    # Construct the block matrix constraint
    block_matrix = cp.bmat([
        [L, cp.vstack(X)],
        [cp.hstack(X), np.identity(dim)]
    ])
    constraints = [block_matrix >> 0]

    # Add trace constraints
    for i in range(num_params):
        constraints.append(cp.trace(X[i] @ rho) == 0)
        for j in range(num_params):
            if i == j:
                constraints.append(cp.trace(X[i] @ drho[j]) == 1)
            else:
                constraints.append(cp.trace(X[i] @ drho[j]) == 0)

    # Define and solve the optimization problem
    objective = cp.Minimize(cp.real(cp.trace(cp.kron(W, rho) @ L)))
    prob = cp.Problem(objective, constraints)
    prob.solve()

    return prob.value
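
The NHB takes the same inputs as the HCRB; a minimal sketch under the same assumptions (importable `NHB`, cvxpy with an SDP solver installed, illustrative qubit model):

import numpy as np
from quanestimation import NHB

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

rho = 0.5 * (np.eye(2) + 0.4 * sigma_x + 0.3 * sigma_y)
drho = [0.5 * sigma_x, 0.5 * sigma_y]
W = np.eye(2)

print(NHB(rho, drho, W))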

Bayesian Cramér-Rao bounds

Bayesian classical Fisher information matrix (BCFIM)

Calculation of the Bayesian classical Fisher information matrix (BCFIM).

This function computes the Bayesian classical Fisher information (BCFI) or Bayesian classical Fisher information matrix (BCFIM). The BCFIM is defined as:

\[ \mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \, \mathrm{d}\textbf{x}. \]

where \(\mathcal{I}\) is the classical Fisher information matrix (CFIM) and \(p(\textbf{x})\) is the prior distribution.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | list | Parameter regimes for integration. Each element is an array representing the values of one parameter. | required |
| `p` | array | Prior distribution over the parameter space. Must have the same dimensions as the product of the lengths of the arrays in `x`. | required |
| `rho` | list | Parameterized density matrices. Each element corresponds to a point in the parameter space defined by `x`. | required |
| `drho` | list | Derivatives of the density matrices with respect to the parameters. For single parameter estimation (length of `x` is 1), `drho` is a list of derivatives at each parameter point. For multiparameter estimation, `drho` is a multidimensional list where `drho[i]` is a list of derivatives with respect to each parameter at the i-th parameter point, and `drho[i][j]` is the derivative of the density matrix at the i-th parameter point with respect to the j-th parameter. | required |
| `M` | list | Positive operator-valued measure (POVM). Default is a set of rank-one symmetric informationally complete POVM (SIC-POVM). | [] |
| `eps` | float | Machine epsilon for numerical stability. | 1e-08 |

Returns:

| Type | Description |
| --- | --- |
| float / array | For single parameter estimation (length of `x` is 1), returns BCFI. For multiparameter estimation (length of `x` > 1), returns BCFIM. |

Raises:

| Type | Description |
| --- | --- |
| TypeError | If `M` is provided but not a list. |

Notes

SIC-POVM is calculated using Weyl-Heisenberg covariant SIC-POVM fiducial states available at http://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def BCFIM(x, p, rho, drho, M=[], eps=1e-8):
    r"""
    Calculation of the Bayesian classical Fisher information matrix (BCFIM).

    This function computes the Bayesian classical Fisher information (BCFI) or Bayesian classical 
    Fisher information matrix (BCFIM). The BCFIM is defined as:

    $$
        \mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \, \mathrm{d}\textbf{x}.
    $$

    where $\mathcal{I}$ is the classical Fisher information matrix (CFIM) and $p(\textbf{x})$ 
    is the prior distribution.

    Args:
        x (list): 
            Parameter regimes for integration. Each element is an array 
            representing the values of one parameter.
        p (np.array): 
            Prior distribution over the parameter space. Must have the same dimensions 
            as the product of the lengths of the arrays in `x`.
        rho (list): 
            Parameterized density matrices. Each element corresponds to 
            a point in the parameter space defined by `x`.
        drho (list): 
            Derivatives of the density matrices with respect to the parameters. For single parameter estimation (length of `x` is 1), 
            `drho` is a list of derivatives at each parameter point. For multiparameter estimation, `drho` is a 
            multidimensional list where `drho[i]` is a list of derivatives with respect to each parameter at the i-th parameter point, 
            and `drho[i][j]` is the derivative of the density matrix at the i-th parameter point with respect to the j-th parameter.
        M (list, optional): 
            Positive operator-valued measure (POVM). Default is a set of rank-one symmetric informationally complete POVM (SIC-POVM).
        eps (float, optional): 
            Machine epsilon for numerical stability.

    Returns:
        (float/np.array): 
            For single parameter estimation (length of `x` is 1), returns BCFI.             
            For multiparameter estimation (length of `x` > 1), returns BCFIM.

    Raises:
        TypeError: 
            If `M` is provided but not a list.

    Notes:
        SIC-POVM is calculated using Weyl-Heisenberg covariant SIC-POVM fiducial states 
        available at [http://www.physics.umb.edu/Research/QBism/solutions.html](http://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    para_num = len(x)
    if para_num == 1:
        #### single parameter scenario ####
        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        p_num = len(p)
        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]
        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = CFIM(rho[m], [drho[m]], M=M, eps=eps)

        arr = [p[i] * F_tp[i] for i in range(p_num)]
        return simpson(arr, x[0])
    else:
        #### multiparameter scenario ####
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)

        dim = len(rho_list[0])
        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        F_list = [
            [[0.0 for i in range(len(p_list))] for j in range(para_num)]
            for k in range(para_num)
        ]
        for i in range(len(p_list)):
            F_tp = CFIM(rho_list[i], drho_list[i], M=M, eps=eps)
            for pj in range(para_num):
                for pk in range(para_num):
                    F_list[pj][pk][i] = F_tp[pj][pk]

        BCFIM_res = np.zeros([para_num, para_num])
        for para_i in range(0, para_num):
            for para_j in range(para_i, para_num):
                F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                arr = p * F_ij
                for si in reversed(range(para_num)):
                    arr = simpson(arr, x[si])
                BCFIM_res[para_i][para_j] = arr
                BCFIM_res[para_j][para_i] = arr
        return BCFIM_res
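
Example (a minimal single-parameter sketch; it assumes `BCFIM` is importable from the top-level quanestimation package; the default SIC-POVM is used as the measurement, and the qubit model with a flat prior is purely illustrative):

import numpy as np
from quanestimation import BCFIM

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

x = [np.linspace(-np.pi / 4, np.pi / 4, 100)]     # one parameter regime
p = np.ones(100) / (np.pi / 2)                    # flat prior, normalized on the regime
r = 0.9                                           # Bloch-vector length (mixed state)

rho = [0.5 * (np.eye(2) + r * np.cos(t) * sigma_x + r * np.sin(t) * sigma_y) for t in x[0]]
drho = [[0.5 * r * (-np.sin(t) * sigma_x + np.cos(t) * sigma_y)] for t in x[0]]

print(BCFIM(x, p, rho, drho))                     # BCFI (a float) for this single parameter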

Bayesian quantum Fisher information matrix (BQFIM)

Calculation of the Bayesian quantum Fisher information matrix (BQFIM).

This function computes the Bayesian quantum Fisher information (BQFI) or Bayesian quantum Fisher information matrix (BQFIM). The BQFIM is defined as:

\[ \mathcal{F}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{F} \, \mathrm{d}\textbf{x}. \]

where \(\mathcal{F}\) is the quantum Fisher information matrix (QFIM) and \(p(\textbf{x})\) is the prior distribution.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | list | Parameter regimes for integration. Each element is an array representing the values of one parameter. | required |
| `p` | array | Prior distribution over the parameter space. Must have the same dimensions as the product of the lengths of the arrays in `x`. | required |
| `rho` | list | Parameterized density matrices. Each element corresponds to a point in the parameter space defined by `x`. | required |
| `drho` | list | Derivatives of the density matrices with respect to the parameters. `drho[i][j]` is the derivative of the density matrix at the i-th parameter point with respect to the j-th parameter. | required |
| `LDtype` | str | Type of logarithmic derivative. Options: "SLD" (default): symmetric logarithmic derivative; "RLD": right logarithmic derivative; "LLD": left logarithmic derivative. | 'SLD' |
| `eps` | float | Machine epsilon for numerical stability. | 1e-08 |

Returns:

| Type | Description |
| --- | --- |
| float / array | For single parameter estimation (length of `x` is 1), returns BQFI. For multiparameter estimation (length of `x` > 1), returns BQFIM. |

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def BQFIM(x, p, rho, drho, LDtype="SLD", eps=1e-8):
    r"""
    Calculation of the Bayesian quantum Fisher information matrix (BQFIM).

    This function computes the Bayesian quantum Fisher information (BQFI) or Bayesian quantum 
    Fisher information matrix (BQFIM). The BQFIM is defined as:

    $$
        \mathcal{F}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{F} \, \mathrm{d}\textbf{x}.
    $$

    where $\mathcal{F}$ is the quantum Fisher information matrix (QFIM) and $p(\textbf{x})$ 
    is the prior distribution.

    Args:
        x (list): Parameter regimes for integration. Each element is an array 
            representing the values of one parameter.
        p (np.array): Prior distribution over the parameter space. Must have the same dimensions 
            as the product of the lengths of the arrays in `x`.
        rho (list): Parameterized density matrices. Each element corresponds to 
            a point in the parameter space defined by `x`.
        drho (list): Derivatives of the density matrices with respect to 
            the parameters. `drho[i][j]` is the derivative of the density matrix at the i-th 
            parameter point with respect to the j-th parameter.
        LDtype (str, optional): Type of logarithmic derivative (default: "SLD"). Options:  
            - "SLD": Symmetric logarithmic derivative  
            - "RLD": Right logarithmic derivative  
            - "LLD": Left logarithmic derivative  
        eps (float, optional): Machine epsilon for numerical stability.

    Returns:
        (float/np.array): 
            For single parameter estimation (length of `x` is 1), returns BQFI. 
            For multiparameter estimation (length of `x` > 1), returns BQFIM.
    """

    para_num = len(x)
    if para_num == 1:
        #### single parameter scenario ####
        p_num = len(p)
        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]

        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = QFIM(rho[m], [drho[m]], LDtype=LDtype, eps=eps)
        arr = [p[i] * F_tp[i] for i in range(p_num)]
        return simpson(arr, x[0])
    else:
        #### multiparameter scenario ####
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)

        F_list = [
            [[0.0 for i in range(len(p_list))] for j in range(para_num)]
            for k in range(para_num)
        ]
        for i in range(len(p_list)):
            F_tp = QFIM(rho_list[i], drho_list[i], LDtype=LDtype, eps=eps)
            for pj in range(para_num):
                for pk in range(para_num):
                    F_list[pj][pk][i] = F_tp[pj][pk]

        BQFIM_res = np.zeros([para_num, para_num])
        for para_i in range(0, para_num):
            for para_j in range(para_i, para_num):
                F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                arr = p * F_ij
                for si in reversed(range(para_num)):
                    arr = simpson(arr, x[si])
                BQFIM_res[para_i][para_j] = arr
                BQFIM_res[para_j][para_i] = arr
        return BQFIM_res
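
The BQFIM call mirrors the BCFIM call, with the type of logarithmic derivative selected through `LDtype`; a minimal single-parameter sketch under the same illustrative assumptions:

import numpy as np
from quanestimation import BQFIM

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

x = [np.linspace(-np.pi / 4, np.pi / 4, 100)]
p = np.ones(100) / (np.pi / 2)
r = 0.9
rho = [0.5 * (np.eye(2) + r * np.cos(t) * sigma_x + r * np.sin(t) * sigma_y) for t in x[0]]
drho = [[0.5 * r * (-np.sin(t) * sigma_x + np.cos(t) * sigma_y)] for t in x[0]]

print(BQFIM(x, p, rho, drho, LDtype="SLD"))       # BQFI (a float) for this single parameter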

Bayesian Cramér-Rao bound (BCRB)

Calculation of the Bayesian Cramer-Rao bound (BCRB).

This function computes the Bayesian Cramer-Rao bound (BCRB) for single or multiple parameters.

The covariance matrix with prior distribution \(p(\textbf{x})\) is:

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr} (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}} \mathrm{d}\textbf{x}. \]

This function calculates three types of BCRB:

Type 1:

\[ \mathrm{cov} \geq \int p(\textbf{x}) \left( B \mathcal{I}^{-1} B + \textbf{b} \textbf{b}^{\mathrm{T}} \right) \mathrm{d}\textbf{x}. \]

Type 2:

\[ \mathrm{cov} \geq \mathcal{B} \mathcal{I}_{\mathrm{Bayes}}^{-1} \mathcal{B} + \int p(\textbf{x}) \textbf{b} \textbf{b}^{\mathrm{T}} \mathrm{d}\textbf{x}. \]

Type 3:

\[ \mathrm{cov} \geq \int p(\textbf{x}) \mathcal{G} \left( \mathcal{I}_p + \mathcal{I} \right)^{-1} \mathcal{G}^{\mathrm{T}} \mathrm{d}\textbf{x}. \]

Symbols
  • \(\textbf{b}\): bias vector
  • \(\textbf{b}'\): its derivatives
  • \(B\): diagonal matrix with \(B_{ii} = 1 + [\textbf{b}']_{i}\)
  • \(\mathcal{I}\): classical Fisher information matrix (CFIM)
  • \(\mathcal{B} = \int p(\textbf{x}) B \mathrm{d}\textbf{x}\)
  • \(\mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \mathrm{d}\textbf{x}\)
  • \([\mathcal{I}_{p}]_{ab} = [\partial_a \ln p(\textbf{x})][\partial_b \ln p(\textbf{x})]\)
  • \(\mathcal{G}_{ab} = [\partial_b \ln p(\textbf{x})][\textbf{b}]_a + B_{aa}\delta_{ab}\)
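
For example, for an unbiased estimator (\(\textbf{b}=0\) and \(\textbf{b}'=0\)), \(B\) reduces to the identity and \(\textbf{b}\textbf{b}^{\mathrm{T}}\) vanishes, so Type 1 becomes \(\mathrm{cov} \geq \int p(\textbf{x}) \mathcal{I}^{-1} \mathrm{d}\textbf{x}\), the Bayesian average of the inverse CFIM.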

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | list | Parameter regimes for integration. | required |
| `p` | array | Prior distribution over the parameter space. Must have the same dimensions as the product of the lengths of the arrays in `x`. | required |
| `dp` | list | Derivatives of the prior distribution with respect to the parameters. | required |
| `rho` | list | Parameterized density matrices. Each element corresponds to a point in the parameter space defined by `x`. | required |
| `drho` | list | Derivatives of the density matrices with respect to the parameters. `drho[i][j]` is the derivative of the density matrix at the i-th parameter point with respect to the j-th parameter. | required |
| `M` | list | Positive operator-valued measure (POVM). Default is a set of rank-one symmetric informationally complete POVM (SIC-POVM). | [] |
| `b` | list | Bias vector. Default is zero bias. | [] |
| `db` | list | Derivatives of the bias vector. Default is zero. | [] |
| `btype` | int | Type of BCRB to calculate (1, 2, or 3). | 1 |
| `eps` | float | Machine epsilon for numerical stability. | 1e-08 |

Returns:

| Type | Description |
| --- | --- |
| float / array | For single parameter estimation (length of `x` is 1), returns BCRB. For multiparameter estimation (length of `x` > 1), returns BCRB matrix. |

Raises:

| Type | Description |
| --- | --- |
| TypeError | If `M` is provided but not a list. |
| NameError | If `btype` is not in {1, 2, 3}. |

Notes

SIC-POVM is calculated using Weyl-Heisenberg covariant SIC-POVM fiducial states available at http://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def BCRB(x, p, dp, rho, drho, M=[], b=[], db=[], btype=1, eps=1e-8):
    r"""
    Calculation of the Bayesian Cramer-Rao bound (BCRB).

    This function computes the Bayesian Cramer-Rao bound (BCRB) for single or multiple parameters.

    The covariance matrix with prior distribution $p(\textbf{x})$ is:

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr}
        (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}}
        \mathrm{d}\textbf{x}.
    $$

    This function calculates three types of BCRB:

    **Type 1:**

    $$
        \mathrm{cov} \geq \int p(\textbf{x}) \left( B \mathcal{I}^{-1} B 
        + \textbf{b} \textbf{b}^{\mathrm{T}} \right) \mathrm{d}\textbf{x}.
    $$

    **Type 2:**
    $$
        \mathrm{cov} \geq \mathcal{B} \mathcal{I}_{\mathrm{Bayes}}^{-1} \mathcal{B} 
        + \int p(\textbf{x}) \textbf{b} \textbf{b}^{\mathrm{T}} \mathrm{d}\textbf{x}.
    $$

    **Type 3:**
    $$
        \mathrm{cov} \geq \int p(\textbf{x}) \mathcal{G} \left( \mathcal{I}_p 
        + \mathcal{I} \right)^{-1} \mathcal{G}^{\mathrm{T}} \mathrm{d}\textbf{x}.
    $$

    Symbols:
        - $\textbf{b}$: bias vector
        - $\textbf{b}'$: its derivatives
        - $B$: diagonal matrix with $B_{ii} = 1 + [\textbf{b}']_{i}$
        - $\mathcal{I}$: classical Fisher information matrix (CFIM)
        - $\mathcal{B} = \int p(\textbf{x}) B \mathrm{d}\textbf{x}$
        - $\mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \mathrm{d}\textbf{x}$
        - $[\mathcal{I}_{p}]_{ab} = [\partial_a \ln p(\textbf{x})][\partial_b \ln p(\textbf{x})]$
        - $\mathcal{G}_{ab} = [\partial_b \ln p(\textbf{x})][\textbf{b}]_a + B_{aa}\delta_{ab}$

    Args:
        x (list): 
            Parameter regimes for integration.
        p (np.array): 
            Prior distribution over the parameter space. Must have the same dimensions 
            as the product of the lengths of the arrays in `x`.
        dp (list): 
            Derivatives of the prior distribution with respect to the parameters.
        rho (list): 
            Parameterized density matrices. Each element corresponds to 
            a point in the parameter space defined by `x`.
        drho (list): 
            Derivatives of the density matrices with respect to 
            the parameters. `drho[i][j]` is the derivative of the density matrix at the i-th 
            parameter point with respect to the j-th parameter.
        M (list, optional): 
            Positive operator-valued measure (POVM). Default is 
            a set of rank-one symmetric informationally complete POVM (SIC-POVM).
        b (list, optional): 
            Bias vector. Default is zero bias.
        db (list, optional): 
            Derivatives of the bias vector. Default is zero.
        btype (int, optional): 
            Type of BCRB to calculate (1, 2, or 3).
        eps (float, optional): 
            Machine epsilon for numerical stability.

    Returns:
        (float/np.array): 
            For single parameter estimation (length of `x` is 1), returns BCRB. 
            For multiparameter estimation (length of `x` > 1), returns BCRB matrix.

    Raises:
        TypeError: If `M` is provided but not a list.
        NameError: If `btype` is not in {1, 2, 3}.

    Notes:
        SIC-POVM is calculated using Weyl-Heisenberg covariant SIC-POVM fiducial states 
        available at [http://www.physics.umb.edu/Research/QBism/solutions.html](http://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    para_num = len(x)
    if para_num == 1:
        #### single parameter scenario ####
        p_num = len(p)
        if not b:
            b = np.zeros(p_num)
            db = np.zeros(p_num)
        elif not db:
            db = np.zeros(p_num)

        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]
        if type(b[0]) == list or type(b[0]) == np.ndarray:
            b = b[0]
        if type(db[0]) == list or type(db[0]) == np.ndarray:
            db = db[0]

        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = CFIM(rho[m], [drho[m]], M=M, eps=eps)

        if btype == 1:
            arr = [
                p[i] * ((1 + db[i]) ** 2 / F_tp[i] + b[i] ** 2) for i in range(p_num)
            ]
            F = simpson(arr, x[0])
            return F
        elif btype == 2:
            arr = [p[i] * F_tp[i] for i in range(p_num)]
            F1 = simpson(arr, x[0])
            arr2 = [p[j] * (1 + db[j]) for j in range(p_num)]
            B = simpson(arr2, x[0])
            arr3 = [p[k] * b[k] ** 2 for k in range(p_num)]
            bb = simpson(arr3, x[0])
            F = B**2 / F1 + bb
            return F
        elif btype == 3:
            I_tp = [np.real(dp[i] * dp[i] / p[i] ** 2) for i in range(p_num)]
            arr = [p[j]*(dp[j]*b[j]/p[j]+(1 + db[j]))**2 / (I_tp[j] + F_tp[j]) for j in range(p_num)]
            F = simpson(arr, x[0])
            return F
        else:
            raise NameError("NameError: btype should be chosen from {1, 2, 3}.")
    else:
        #### multiparameter scenario ####
        if not b:
            b, db = [], []
            for i in range(para_num):
                b.append(np.zeros(len(x[i])))
                db.append(np.zeros(len(x[i])))
        elif not db:
            db = []
            for i in range(para_num):
                db.append(np.zeros(len(x[i])))

        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        dp_ext = extract_ele(dp, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)
        b_pro = product(*b)
        db_pro = product(*db)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)
        dp_list = [dpi for dpi in dp_ext]

        b_list, db_list = [], []
        for b_ele, db_ele in zip(b_pro, db_pro):
            b_list.append([b_ele[i] for i in range(para_num)])
            db_list.append([db_ele[j] for j in range(para_num)])

        dim = len(rho_list[0])
        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")
        if btype == 1:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = CFIM(rho_list[i], drho_list[i], M=M, eps=eps)
                F_inv = np.linalg.pinv(F_tp)
                B = np.diag([(1.0 + db_list[i][j]) for j in range(para_num)])
                term1 = np.dot(B, np.dot(F_inv, B))
                term2 = np.dot(
                    np.array(b_list[i]).reshape(para_num, 1),
                    np.array(b_list[i]).reshape(1, para_num),
                )
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = term1[pj][pk] + term2[pj][pk]

            res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    res[para_i][para_j] = arr
                    res[para_j][para_i] = arr
            return res
        elif btype == 2:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            B_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            bb_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = CFIM(rho_list[i], drho_list[i], M=M, eps=eps)
                B_tp = np.diag([(1.0 + db_list[i][j]) for j in range(para_num)])
                bb_tp = np.dot(
                    np.array(b_list[i]).reshape(para_num, 1),
                    np.array(b_list[i]).reshape(1, para_num),
                )
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = F_tp[pj][pk]
                        B_list[pj][pk][i] = B_tp[pj][pk]
                        bb_list[pj][pk][i] = bb_tp[pj][pk]

            F_res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    F_res[para_i][para_j] = arr
                    F_res[para_j][para_i] = arr
            B_res = np.zeros([para_num, para_num])
            bb_res = np.zeros([para_num, para_num])
            for para_m in range(para_num):
                for para_n in range(para_num):
                    B_mn = np.array(B_list[para_m][para_n]).reshape(p_shape)
                    bb_mn = np.array(bb_list[para_m][para_n]).reshape(p_shape)
                    arr2 = p * B_mn
                    arr3 = p * bb_mn
                    for sj in reversed(range(para_num)):
                        arr2 = simpson(arr2, x[sj])
                        arr3 = simpson(arr3, x[sj])
                    B_res[para_m][para_n] = arr2
                    bb_res[para_m][para_n] = arr3
            res = np.dot(B_res, np.dot(np.linalg.pinv(F_res), B_res)) + bb_res
            return res
        elif btype == 3:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = CFIM(rho_list[i], drho_list[i], M=M, eps=eps)
                I_tp = np.zeros((para_num, para_num))
                G_tp = np.zeros((para_num, para_num))
                for pm in range(para_num):
                    for pn in range(para_num):
                        if pm == pn:
                            G_tp[pm][pn] = dp_list[i][pn]*b_list[i][pm]/p_list[i]+(1.0 + db_list[i][pm])
                        else:
                            G_tp[pm][pn] = dp_list[i][pn]*b_list[i][pm]/p_list[i]
                        I_tp[pm][pn] = dp_list[i][pm] * dp_list[i][pn] / p_list[i] ** 2

                F_tot = np.dot(G_tp, np.dot(np.linalg.pinv(F_tp + I_tp), G_tp.T))
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = F_tot[pj][pk]

            res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    res[para_i][para_j] = arr
                    res[para_j][para_i] = arr
            return res
        else:
            raise NameError("NameError: btype should be chosen from {1, 2, 3}.")
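
Example (a minimal single-parameter sketch comparing the first two types of the bound; it assumes `BCRB` is importable from the top-level quanestimation package; the default SIC-POVM, flat prior, zero bias, and qubit model are purely illustrative):

import numpy as np
from quanestimation import BCRB

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

x = [np.linspace(-np.pi / 4, np.pi / 4, 100)]
p = np.ones(100) / (np.pi / 2)                    # flat prior
dp = np.zeros(100)                                # derivative of the flat prior (zero on the grid)
r = 0.9
rho = [0.5 * (np.eye(2) + r * np.cos(t) * sigma_x + r * np.sin(t) * sigma_y) for t in x[0]]
drho = [[0.5 * r * (-np.sin(t) * sigma_x + np.cos(t) * sigma_y)] for t in x[0]]

f1 = BCRB(x, p, dp, rho, drho, btype=1)           # type-1 bound with zero bias
f2 = BCRB(x, p, dp, rho, drho, btype=2)           # type-2 bound with zero bias
print(f1, f2)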

Bayesian quantum Cramér-Rao bound (BQCRB)

Calculation of the Bayesian quantum Cramer-Rao bound (BQCRB).

The covariance matrix with a prior distribution \(p(\textbf{x})\) is defined as

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})=\int p(\textbf{x})\sum_y\mathrm{Tr} (\rho\Pi_y)(\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}} \mathrm{d}\textbf{x}, \]
Symbols
  • \(\textbf{x}=(x_0,x_1,\dots)^{\mathrm{T}}\): the unknown parameters to be estimated and the integral \(\int\mathrm{d}\textbf{x}:=\iiint\mathrm{d}x_0\mathrm{d}x_1\cdots\).
  • \(\{\Pi_y\}\): a set of positive operator-valued measure (POVM).
  • \(\rho\): the parameterized density matrix.

This function calculates three types of the BQCRB. The first one is

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq\int p(\textbf{x})\left(B\mathcal{F}^{-1}B +\textbf{b}\textbf{b}^{\mathrm{T}}\right)\mathrm{d}\textbf{x}, \]
Symbols
  • \(\textbf{b}\) and \(\textbf{b}'\): the bias vector and its derivatives with respect to the parameters.
  • \(B\): a diagonal matrix with the \(i\)th entry \(B_{ii}=1+[\textbf{b}']_{i}\)
  • \(\mathcal{F}\): the QFIM for all types.

The second one is

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq \mathcal{B}\,\mathcal{F}_{\mathrm{Bayes}}^{-1}\, \mathcal{B}+\int p(\textbf{x})\textbf{b}\textbf{b}^{\mathrm{T}}\mathrm{d}\textbf{x}, \]
Symbols
  • \(\mathcal{B}=\int p(\textbf{x})B\mathrm{d}\textbf{x}\): the average of \(B\)
  • \(\mathcal{F}_{\mathrm{Bayes}}=\int p(\textbf{x})\mathcal{F}\mathrm{d}\textbf{x}\): the average QFIM.

The third one is

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq \int p(\textbf{x}) \mathcal{G}\left(\mathcal{I}_p+\mathcal{F}\right)^{-1}\mathcal{G}^{\mathrm{T}}\mathrm{d}\textbf{x}. \]
Symbols
  • \([\mathcal{I}_{p}]_{ab}:=[\partial_a \ln p(\textbf{x})][\partial_b \ln p(\textbf{x})]\).
  • \(\mathcal{G}_{ab}:=[\partial_b\ln p(\textbf{x})][\textbf{b}]_a+B_{aa}\delta_{ab}\).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `x` | list | The regimes of the parameters for the integral. | required |
| `p` | array (multidimensional) | The prior distribution. | required |
| `dp` | list | Derivatives of the prior distribution with respect to the parameters. | required |
| `rho` | list (multidimensional) | Parameterized density matrix. | required |
| `drho` | list (multidimensional) | Derivatives of the parameterized density matrix (`rho`) with respect to the unknown parameters to be estimated. | required |
| `b` | list | Vector of biases of the form \(\textbf{b}=(b(x_0),b(x_1),\dots)^{\mathrm{T}}\). | [] |
| `db` | list | Derivatives of `b` with respect to the unknown parameters to be estimated. It should be expressed as \(\textbf{b}'=(\partial_0 b(x_0),\partial_1 b(x_1),\dots)^{\mathrm{T}}\). | [] |
| `btype` | int | Type of the BQCRB to calculate. Options: 1 (default): the first type of the BQCRB; 2: the second type of the BQCRB; 3: the third type of the BQCRB. | 1 |
| `LDtype` | str | Type of QFI (QFIM) used as the objective function. Options: "SLD" (default): QFI (QFIM) based on the symmetric logarithmic derivative (SLD); "RLD": QFI (QFIM) based on the right logarithmic derivative (RLD); "LLD": QFI (QFIM) based on the left logarithmic derivative (LLD). | 'SLD' |
| `eps` | float (optional) | Machine epsilon. | 1e-08 |

Returns:

| Type | Description |
| --- | --- |
| float / array | For single parameter estimation (the length of `x` equals one), the output is a float; for multiparameter estimation (the length of `x` is larger than one), it returns a matrix. |

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def BQCRB(x, p, dp, rho, drho, b=[], db=[], btype=1, LDtype="SLD", eps=1e-8):
    r"""
    Calculation of the Bayesian quantum Cramer-Rao bound (BQCRB). 

    The covariance matrix with a prior distribution $p(\textbf{x})$ is defined as

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})=\int p(\textbf{x})\sum_y\mathrm{Tr}
        (\rho\Pi_y)(\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}}
        \mathrm{d}\textbf{x},
    $$

    Symbols:
        - $\textbf{x}=(x_0,x_1,\dots)^{\mathrm{T}}$: the unknown parameters to be estimated
            and the integral $\int\mathrm{d}\textbf{x}:=\iiint\mathrm{d}x_0\mathrm{d}x_1\cdots$.
        - $\{\Pi_y\}$: a set of positive operator-valued measure (POVM). 
        - $\rho$: the parameterized density matrix.

    This function calculates three types of the BQCRB. The first one is

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq\int p(\textbf{x})\left(B\mathcal{F}^{-1}B
        +\textbf{b}\textbf{b}^{\mathrm{T}}\right)\mathrm{d}\textbf{x},
    $$

    Symbols: 
        - $\textbf{b}$ and $\textbf{b}'$: the bias vector and its derivatives with respect to the parameters.
        - $B$: a diagonal matrix with the $i$th entry $B_{ii}=1+[\textbf{b}']_{i}$
        - $\mathcal{F}$: the QFIM for all types.

    The second one is

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq \mathcal{B}\,\mathcal{F}_{\mathrm{Bayes}}^{-1}\,
        \mathcal{B}+\int p(\textbf{x})\textbf{b}\textbf{b}^{\mathrm{T}}\mathrm{d}\textbf{x},
    $$

    Symbols: 
        - $\mathcal{B}=\int p(\textbf{x})B\mathrm{d}\textbf{x}$: the average of $B$ 
        - $\mathcal{F}_{\mathrm{Bayes}}=\int p(\textbf{x})\mathcal{F}\mathrm{d}\textbf{x}$: the average QFIM.

    The third one is

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\})\geq \int p(\textbf{x})
        \mathcal{G}\left(\mathcal{I}_p+\mathcal{F}\right)^{-1}\mathcal{G}^{\mathrm{T}}\mathrm{d}\textbf{x}.
    $$

    Symbols: 
        - $[\mathcal{I}_{p}]_{ab}:=[\partial_a \ln p(\textbf{x})][\partial_b \ln p(\textbf{x})]$.
        - $\mathcal{G}_{ab}:=[\partial_b\ln p(\textbf{x})][\textbf{b}]_a+B_{aa}\delta_{ab}$.

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        p (np.array, multidimensional): 
            The prior distribution.
        dp (list): 
            Derivatives of the prior distribution with respect to the parameters.
        rho (list, multidimensional): 
            Parameterized density matrix.
        drho (list, multidimensional): 
            Derivatives of the parameterized density matrix (rho) with respect to the unknown parameters to be estimated.
        b (list): 
            Vector of biases of the form $\textbf{b}=(b(x_0),b(x_1),\dots)^{\mathrm{T}}$.
        db (list): 
            Derivatives of b with respect to the unknown parameters to be estimated. It should be 
            expressed as $\textbf{b}'=(\partial_0 b(x_0),\partial_1 b(x_1),\dots)^{\mathrm{T}}$.
        btype (int): 
            Types of the BQCRB. Options are:  
                1 (default) -- It means to calculate the first type of the BQCRB.  
                2 -- It means to calculate the second type of the BQCRB.
                3 -- It means to calculate the third type of the BQCRB.
        LDtype (str): 
            Types of QFI (QFIM) can be set as the objective function. Options are:  
                - "SLD" (default) -- QFI (QFIM) based on symmetric logarithmic derivative (SLD).  
                - "RLD" -- QFI (QFIM) based on right logarithmic derivative (RLD).  
                - "LLD" -- QFI (QFIM) based on left logarithmic derivative (LLD).
        eps (float,optional): 
            Machine epsilon.

    Returns:
        (float/np.array): 
            For single parameter estimation (the length of `x` equals to one), the 
            output is a float and for multiparameter estimation (the length of `x` is larger than one), 
            it returns a matrix.
    """

    para_num = len(x)

    if para_num == 1:
        #### single parameter scenario ####
        p_num = len(p)

        if not b:
            b = np.zeros(p_num)
            db = np.zeros(p_num)
        elif not db:
            db = np.zeros(p_num)

        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]
        if type(b[0]) == list or type(b[0]) == np.ndarray:
            b = b[0]
        if type(db[0]) == list or type(db[0]) == np.ndarray:
            db = db[0]

        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = QFIM(rho[m], [drho[m]], LDtype=LDtype, eps=eps)

        if btype == 1:
            arr = [
                p[i] * ((1 + db[i]) ** 2 / F_tp[i] + b[i] ** 2) for i in range(p_num)
            ]
            F = simpson(arr, x[0])
            return F
        elif btype == 2:
            arr2 = [p[i] * F_tp[i] for i in range(p_num)]
            F2 = simpson(arr2, x[0])
            arr2 = [p[j] * (1 + db[j]) for j in range(p_num)]
            B = simpson(arr2, x[0])
            arr3 = [p[k] * b[k] ** 2 for k in range(p_num)]
            bb = simpson(arr3, x[0])
            F = B**2 / F2 + bb
            return F
        elif btype == 3:
            I_tp = [np.real(dp[i] * dp[i] / p[i] ** 2) for i in range(p_num)]
            arr = [p[j]*(dp[j]*b[j]/p[j]+(1 + db[j]))**2 / (I_tp[j] + F_tp[j]) for j in range(p_num)]
            F = simpson(arr, x[0])
            return F
        else:
            raise NameError("NameError: btype should be chosen from {1, 2, 3}.")
    else:
        #### multiparameter scenario ####
        if not b:
            b, db = [], []
            for i in range(para_num):
                b.append(np.zeros(len(x[i])))
                db.append(np.zeros(len(x[i])))
        elif not db:
            db = []
            for i in range(para_num):
                db.append(np.zeros(len(x[i])))

        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        dp_ext = extract_ele(dp, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)
        b_pro = product(*b)
        db_pro = product(*db)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)
        dp_list = [dpi for dpi in dp_ext]

        b_list, db_list = [], []
        for b_ele, db_ele in zip(b_pro, db_pro):
            b_list.append([b_ele[i] for i in range(para_num)])
            db_list.append([db_ele[j] for j in range(para_num)])

        if btype == 1:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = QFIM(rho_list[i], drho_list[i], LDtype=LDtype, eps=eps)
                F_inv = np.linalg.pinv(F_tp)
                B = np.diag([(1.0 + db_list[i][j]) for j in range(para_num)])
                term1 = np.dot(B, np.dot(F_inv, B))
                term2 = np.dot(
                    np.array(b_list[i]).reshape(para_num, 1),
                    np.array(b_list[i]).reshape(1, para_num),
                )
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = term1[pj][pk] + term2[pj][pk]

            res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    res[para_i][para_j] = arr
                    res[para_j][para_i] = arr
            return res
        elif btype == 2:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            B_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            bb_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = QFIM(rho_list[i], drho_list[i], LDtype=LDtype, eps=eps)
                B_tp = np.diag([(1.0 + db_list[i][j]) for j in range(para_num)])
                bb_tp = np.dot(
                    np.array(b_list[i]).reshape(para_num, 1),
                    np.array(b_list[i]).reshape(1, para_num),
                )
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = F_tp[pj][pk]
                        B_list[pj][pk][i] = B_tp[pj][pk]
                        bb_list[pj][pk][i] = bb_tp[pj][pk]

            F_res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    F_res[para_i][para_j] = arr
                    F_res[para_j][para_i] = arr
            B_res = np.zeros([para_num, para_num])
            bb_res = np.zeros([para_num, para_num])
            for para_m in range(para_num):
                for para_n in range(para_num):
                    B_mn = np.array(B_list[para_m][para_n]).reshape(p_shape)
                    bb_mn = np.array(bb_list[para_m][para_n]).reshape(p_shape)
                    arr2 = p * B_mn
                    arr3 = p * bb_mn
                    for sj in reversed(range(para_num)):
                        arr2 = simpson(arr2, x[sj])
                        arr3 = simpson(arr3, x[sj])
                    B_res[para_m][para_n] = arr2
                    bb_res[para_m][para_n] = arr3
            res = np.dot(B_res, np.dot(np.linalg.pinv(F_res), B_res)) + bb_res
            return res
        elif btype == 3:
            F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
            for i in range(len(p_list)):
                F_tp = QFIM(rho_list[i], drho_list[i], LDtype=LDtype, eps=eps)
                I_tp = np.zeros((para_num, para_num))
                G_tp = np.zeros((para_num, para_num))
                for pm in range(para_num):
                    for pn in range(para_num):
                        if pm == pn:
                            G_tp[pm][pn] = dp_list[i][pn]*b_list[i][pm]/p_list[i]+(1.0 + db_list[i][pm])
                        else:
                            G_tp[pm][pn] = dp_list[i][pn]*b_list[i][pm]/p_list[i]
                        I_tp[pm][pn] = dp_list[i][pm] * dp_list[i][pn] / p_list[i] ** 2

                F_tot = np.dot(G_tp, np.dot(np.linalg.pinv(F_tp + I_tp), G_tp.T))
                for pj in range(para_num):
                    for pk in range(para_num):
                        F_list[pj][pk][i] = F_tot[pj][pk]

            res = np.zeros([para_num, para_num])
            for para_i in range(0, para_num):
                for para_j in range(para_i, para_num):
                    F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                    arr = p * F_ij
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    res[para_i][para_j] = arr
                    res[para_j][para_i] = arr
            return res
        else:
            raise NameError("btype should be chosen from {1, 2, 3}.")

Optimal biased bound (OBB)

Calculate the optimal biased bound (OBB) for single parameter estimation.

The OBB is defined as:

\[ \mathrm{var}(\hat{x},\{\Pi_y\}) \geq \int p(x) \left( \frac{(1+b')^2}{F} + b^2 \right) \mathrm{d}x \]
Symbols
  • \(b\): bias, \(b'\): its derivative.
  • \(F\): quantum Fisher information (QFI).

This bound is solved using a boundary value problem approach.

Parameters:

Name Type Description Default
x array

Parameter regime for integration.

required
p array

Prior distribution.

required
dp array

Derivative of the prior distribution with respect to the parameter.

required
rho list

Parameterized density matrices.

required
drho list

First derivatives of the density matrices with respect to the parameter.

required
d2rho list

Second-order derivatives of the density matrices with respect to the parameter.

required
LDtype str

Type of logarithmic derivative (default: "SLD"). Options:
- "SLD": Symmetric logarithmic derivative.
- "RLD": Right logarithmic derivative.
- "LLD": Left logarithmic derivative.

'SLD'
eps float

Machine epsilon.

1e-08

Returns:

Type Description
float

The optimal biased bound value for single parameter estimation.

Notes

This function uses a boundary value problem solver to compute the optimal bias function.

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def OBB(x, p, dp, rho, drho, d2rho, LDtype="SLD", eps=1e-8):
    r"""
    Calculate the optimal biased bound (OBB) for single parameter estimation.

    The OBB is defined as:

    $$
    \mathrm{var}(\hat{x},\{\Pi_y\}) \geq \int p(x) \left( \frac{(1+b')^2}{F} + b^2 \right) \mathrm{d}x
    $$

    Symbols:
        - $b$: bias, $b'$: its derivative.
        - $F$: quantum Fisher information (QFI).

    This bound is solved using a boundary value problem approach.

    Args:
        x (np.array): 
            Parameter regime for integration.
        p (np.array): 
            Prior distribution.
        dp (np.array): 
            Derivative of the prior distribution with respect to the parameter.
        rho (list): 
            Parameterized density matrices.
        drho (list): 
            First derivatives of the density matrices with respect to the parameter.
        d2rho (list): 
            Second-order derivatives of the density matrices with respect to the parameter.
        LDtype (str, optional): 
            Type of logarithmic derivative (default: "SLD"). Options:  
                - "SLD": Symmetric logarithmic derivative.  
                - "RLD": Right logarithmic derivative.  
                - "LLD": Left logarithmic derivative.  
        eps (float, optional): 
            Machine epsilon.

    Returns: 
        (float): 
            The optimal biased bound value for single parameter estimation.

    Notes: 
        This function uses a boundary value problem solver to compute the optimal bias function.
    """

    #### single parameter scenario ####
    p_num = len(p)

    if type(drho[0]) == list:
        drho = [drho[i][0] for i in range(p_num)]
    if type(d2rho[0]) == list:
        d2rho = [d2rho[i][0] for i in range(p_num)]
    if type(dp[0]) == list or type(dp[0]) == np.ndarray:
        dp = [dp[i][0] for i in range(p_num)]
    if not isinstance(x[0], (float, int)):
        x = x[0]

    F, J = np.zeros(p_num), np.zeros(p_num)
    bias, dbias = np.zeros(p_num), np.zeros(p_num)
    for m in range(p_num):
        f, LD = QFIM(rho[m], [drho[m]], LDtype=LDtype, exportLD=True, eps=eps)
        F[m] = f
        term1 = np.dot(d2rho[m], LD)
        term2 = np.dot(d2rho[m], LD.conj().T)
        term3 = np.dot(np.dot(LD, LD), drho[m])
        dF = np.real(np.trace(term1 + term2 - term3))
        J[m] = dp[m] / p[m] - dF / f

    y_guess = np.zeros((2, x.size))
    fun = lambda m, n: OBB_func(m, n, x, J, F)
    result = solve_bvp(fun, boundary_condition, x, y_guess)
    res = result.sol(x)
    bias, dbias = res[0], res[1]

    value = [p[i] * ((1 + dbias[i]) ** 2 / F[i] + bias[i] ** 2) for i in range(p_num)]
    return simpson(value, x)
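
As an illustration of the expected inputs, the sketch below evaluates the bound for a single-qubit phase model \(|\psi(x)\rangle=(|0\rangle+e^{ix}|1\rangle)/\sqrt{2}\) (for which the QFI is constant) with a Gaussian prior. This is a minimal sketch only: it assumes `OBB` is importable from the top-level `quanestimation` package and builds `rho`, `drho`, and `d2rho` analytically.

import numpy as np
from quanestimation import OBB  # assumed package-level export

# parameter grid (wrapped in a list, as for the other Bayesian routines) and a Gaussian prior
xspan = np.linspace(-np.pi/4, np.pi/4, 100)
sigma = 0.2
p = np.exp(-xspan**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))
dp = -xspan/sigma**2*p  # derivative of the prior

# pure-qubit phase model |psi(x)> = (|0> + e^{ix}|1>)/sqrt(2)
rho, drho, d2rho = [], [], []
for xi in xspan:
    psi = np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
    dpsi = np.array([[0.0], [1j*np.exp(1j*xi)]])/np.sqrt(2)
    d2psi = np.array([[0.0], [-np.exp(1j*xi)]])/np.sqrt(2)
    rho.append(psi @ psi.conj().T)
    drho.append(dpsi @ psi.conj().T + psi @ dpsi.conj().T)
    d2rho.append(d2psi @ psi.conj().T + 2*dpsi @ dpsi.conj().T + psi @ d2psi.conj().T)

print(OBB([xspan], p, dp, rho, drho, d2rho))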

Van Trees bound (VTB)

Calculate the Van Trees bound (VTB), a Bayesian version of the Cramer-Rao bound.

The covariance matrix with prior distribution \(p(\textbf{x})\) is:

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr} (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}} \mathrm{d}\textbf{x}. \]

The VTB is given by:

\[ \mathrm{cov} \geq \left(\mathcal{I}_{\mathrm{prior}} + \mathcal{I}_{\mathrm{Bayes}}\right)^{-1}. \]
Symbols
  • \(\mathcal{I}_{\mathrm{prior}} = \int p(\textbf{x}) \mathcal{I}_{p} \, \mathrm{d}\textbf{x}\) is the classical Fisher information matrix (CFIM) for the prior distribution \(p(\textbf{x})\).
  • \(\mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \, \mathrm{d}\textbf{x}\) is the average CFIM over the prior.

Parameters:

Name Type Description Default
x list

Parameter regimes for integration.

required
p array

Prior distribution.

required
dp list

Derivatives of the prior distribution with respect to the parameters.

required
rho list

Parameterized density matrices.

required
drho list

Derivatives of the density matrices with respect to the parameters.

required
M list

Positive operator-valued measure (POVM). Default is SIC-POVM.

[]
eps float

Machine epsilon.

1e-08

Returns:

Type Description
float / array

For single parameter: float. For multiple parameters: matrix.

Notes

SIC-POVM uses Weyl-Heisenberg covariant fiducial states from http://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def VTB(x, p, dp, rho, drho, M=[], eps=1e-8):
    r"""
    Calculate the Van Trees bound (VTB), a Bayesian version of the Cramer-Rao bound.

    The covariance matrix with prior distribution $p(\textbf{x})$ is:

    $$
        \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr}
        (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}}
        \mathrm{d}\textbf{x}.
    $$

    The VTB is given by:

    $$
    \mathrm{cov} \geq \left(\mathcal{I}_{\mathrm{prior}} + \mathcal{I}_{\mathrm{Bayes}}\right)^{-1}.
    $$

    Symbols:  
        - $\mathcal{I}_{\mathrm{prior}} = \int p(\textbf{x}) \mathcal{I}_{p} \, \mathrm{d}\textbf{x}$ 
            is the classical Fisher information matrix (CFIM) for the prior distribution $p(\textbf{x})$.    
        - $\mathcal{I}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{I} \, \mathrm{d}\textbf{x}$ 
            is the average CFIM over the prior.

    Args:
        x (list): 
            Parameter regimes for integration.
        p (np.array): 
            Prior distribution.
        dp (list): 
            Derivatives of the prior distribution with respect to the parameters.
        rho (list): 
            Parameterized density matrices.
        drho (list): 
            Derivatives of the density matrices with respect to the parameters.
        M (list, optional): 
            Positive operator-valued measure (POVM). Default is SIC-POVM.
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (float/np.array): 
            For single parameter: float. For multiple parameters: matrix.

    Notes: 
        SIC-POVM uses Weyl-Heisenberg covariant fiducial states from 
        [http://www.physics.umb.edu/Research/QBism/solutions.html](http://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    para_num = len(x)
    p_num = len(p)

    if para_num == 1:
        #### single parameter scenario ####
        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]
        if type(dp[0]) == list or type(dp[0]) == np.ndarray:
            dp = [dp[i][0] for i in range(p_num)]

        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = CFIM(rho[m], [drho[m]], M=M, eps=eps)


        arr1 = [np.real(dp[i] * dp[i] / p[i]) for i in range(p_num)]
        I = simpson(arr1, x[0])
        arr2 = [np.real(F_tp[j] * p[j]) for j in range(p_num)]
        F = simpson(arr2, x[0])
        return 1.0 / (I + F)
    else:
        #### multiparameter scenario ####
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        dp_ext = extract_ele(dp, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)
        dp_list = [dpi for dpi in dp_ext]

        dim = len(rho_list[0])
        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
        I_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
        for i in range(len(p_list)):
            F_tp = CFIM(rho_list[i], drho_list[i], M=M, eps=eps)
            for pj in range(para_num):
                for pk in range(para_num):
                    F_list[pj][pk][i] = F_tp[pj][pk]
                    I_list[pj][pk][i] = (
                            dp_list[i][pj] * dp_list[i][pk] / p_list[i] ** 2
                        )

        F_res = np.zeros([para_num, para_num])
        I_res = np.zeros([para_num, para_num])
        for para_i in range(0, para_num):
            for para_j in range(para_i, para_num):
                F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                I_ij = np.array(I_list[para_i][para_j]).reshape(p_shape)
                arr1 = p * F_ij
                arr2 = p * I_ij
                for si in reversed(range(para_num)):
                    arr1 = simpson(arr1, x[si])
                    arr2 = simpson(arr2, x[si])
                F_res[para_i][para_j] = arr1
                F_res[para_j][para_i] = arr1
                I_res[para_i][para_j] = arr2
                I_res[para_j][para_i] = arr2
        return np.linalg.pinv(F_res + I_res)
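
A minimal single-parameter sketch, assuming `VTB` is exported at the package level: the same qubit phase model measured in the \(\sigma_x\) basis. For this model the classical Fisher information of the measurement equals the QFI (both are 1), so the result should be roughly \(1/(1/\sigma^2+1)\).

import numpy as np
from quanestimation import VTB  # assumed package-level export

xspan = np.linspace(-np.pi/4, np.pi/4, 100)
sigma = 0.2
p = np.exp(-xspan**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))
dp = -xspan/sigma**2*p

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
dpsi = lambda xi: np.array([[0.0], [1j*np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]
drho = [dpsi(xi) @ psi(xi).conj().T + psi(xi) @ dpsi(xi).conj().T for xi in xspan]

# projective measurement in the sigma_x basis
plus = np.array([[1.0], [1.0]])/np.sqrt(2)
minus = np.array([[1.0], [-1.0]])/np.sqrt(2)
M = [plus @ plus.conj().T, minus @ minus.conj().T]

print(VTB([xspan], p, dp, rho, drho, M=M))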

Quantum Van Trees bound (QVTB)

Calculate the quantum Van Trees bound (QVTB), a Bayesian version of the quantum Cramer-Rao bound.

The covariance matrix with prior distribution \(p(\textbf{x})\) is:

\[ \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr} (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}} \mathrm{d}\textbf{x}. \]

The QVTB is given by:

\[ \mathrm{cov} \geq \left(\mathcal{I}_{\mathrm{prior}} + \mathcal{F}_{\mathrm{Bayes}}\right)^{-1}. \]

Symbols
  • \(\mathcal{I}_{\mathrm{prior}} = \int p(\textbf{x}) \mathcal{I}_{p} \, \mathrm{d}\textbf{x}\):
    the classical Fisher information matrix (CFIM) for the prior distribution \(p(\textbf{x})\).
  • \(\mathcal{F}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{F} \, \mathrm{d}\textbf{x}\):
    the average quantum Fisher information matrix (QFIM) over the prior.

Parameters:

Name Type Description Default
x list

Parameter regimes for integration.

required
p array

Prior distribution.

required
dp list

Derivatives of the prior distribution with respect to the parameters.

required
rho list

Parameterized density matrices.

required
drho list

Derivatives of the density matrices with respect to the parameters.

required
LDtype string

Type of logarithmic derivative (default: "SLD"). Options:
- "SLD": Symmetric logarithmic derivative.
- "RLD": Right logarithmic derivative.
- "LLD": Left logarithmic derivative.

'SLD'
eps float

Machine epsilon.

1e-08

Returns:

Type Description
float / array

For single parameter: float. For multiple parameters: matrix.

Source code in quanestimation/BayesianBound/BayesCramerRao.py
def QVTB(x, p, dp, rho, drho, LDtype="SLD", eps=1e-8):
    r"""
    Calculate the quantum Van Trees bound (QVTB), a Bayesian version of the quantum Cramer-Rao bound.

    The covariance matrix with prior distribution $p(\textbf{x})$ is:

    $$
    \mathrm{cov}(\hat{\textbf{x}},\{\Pi_y\}) = \int p(\textbf{x}) \sum_y \mathrm{Tr}
    (\rho\Pi_y) (\hat{\textbf{x}}-\textbf{x})(\hat{\textbf{x}}-\textbf{x})^{\mathrm{T}}
    \mathrm{d}\textbf{x}.
    $$

    The QVTB is given by:

    $$
    \mathrm{cov} \geq \left(\mathcal{I}_{\mathrm{prior}} + \mathcal{F}_{\mathrm{Bayes}}\right)^{-1}.
    $$

    Symbols:
        - $\mathcal{I}_{\mathrm{prior}} = \int p(\textbf{x}) \mathcal{I}_{p} \, \mathrm{d}\textbf{x}$:  
            the classical Fisher information matrix (CFIM) for the prior distribution $p(\textbf{x})$.
        - $\mathcal{F}_{\mathrm{Bayes}} = \int p(\textbf{x}) \mathcal{F} \, \mathrm{d}\textbf{x}$:  
            the average quantum Fisher information matrix (QFIM) over the prior.

    Args:
        x (list): 
            Parameter regimes for integration.
        p (np.array): 
            Prior distribution.
        dp (list): 
            Derivatives of the prior distribution with respect to the parameters.
        rho (list): 
            Parameterized density matrices.
        drho (list): 
            Derivatives of the density matrices with respect to the parameters.
        LDtype (string, optional): 
            Type of logarithmic derivative (default: "SLD"). Options:  
                - "SLD": Symmetric logarithmic derivative.  
                - "RLD": Right logarithmic derivative.  
                - "LLD": Left logarithmic derivative.  
        eps (float, optional): 
            Machine epsilon.

    Returns: 
        (float/np.array): 
            For single parameter: float. For multiple parameters: matrix.
    """
    para_num = len(x)
    p_num = len(p)

    if para_num == 1:
        if type(drho[0]) == list:
            drho = [drho[i][0] for i in range(p_num)]
        if type(dp[0]) == list or type(dp[0]) == np.ndarray:
            dp = [dp[i][0] for i in range(p_num)]

        F_tp = np.zeros(p_num)
        for m in range(p_num):
            F_tp[m] = QFIM(rho[m], [drho[m]], LDtype=LDtype, eps=eps)

        arr1 = [np.real(dp[i] * dp[i] / p[i]) for i in range(p_num)]
        I = simpson(arr1, x[0])
        arr2 = [np.real(F_tp[j] * p[j]) for j in range(p_num)]
        F = simpson(arr2, x[0])
        return 1.0 / (I + F)
    else:
        #### multiparameter scenario ####
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        dp_ext = extract_ele(dp, para_num)
        rho_ext = extract_ele(rho, para_num)
        drho_ext = extract_ele(drho, para_num)

        p_list, rho_list, drho_list = [], [], []
        for p_ele, rho_ele, drho_ele in zip(p_ext, rho_ext, drho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)
            drho_list.append(drho_ele)
        dp_list = [dpi for dpi in dp_ext]

        F_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
        I_list = [
                [[0.0 for i in range(len(p_list))] for j in range(para_num)]
                for k in range(para_num)
            ]
        for i in range(len(p_list)):
            F_tp = QFIM(rho_list[i], drho_list[i], LDtype=LDtype, eps=eps)
            for pj in range(para_num):
                for pk in range(para_num):
                    F_list[pj][pk][i] = F_tp[pj][pk]
                    I_list[pj][pk][i] = (
                            dp_list[i][pj] * dp_list[i][pk] / p_list[i] ** 2
                        )

        F_res = np.zeros([para_num, para_num])
        I_res = np.zeros([para_num, para_num])
        for para_i in range(0, para_num):
            for para_j in range(para_i, para_num):
                F_ij = np.array(F_list[para_i][para_j]).reshape(p_shape)
                I_ij = np.array(I_list[para_i][para_j]).reshape(p_shape)
                arr1 = p * F_ij
                arr2 = p * I_ij
                for si in reversed(range(para_num)):
                    arr1 = simpson(arr1, x[si])
                    arr2 = simpson(arr2, x[si])
                F_res[para_i][para_j] = arr1
                F_res[para_j][para_i] = arr1
                I_res[para_i][para_j] = arr2
                I_res[para_j][para_i] = arr2
        return np.linalg.pinv(F_res + I_res)
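
The quantum version takes the same inputs as `VTB` minus the measurement. A minimal sketch under the same assumptions (package-level import, qubit phase model); for this pure-state model the SLD-based QFI is also 1, so the value should coincide with the VTB above.

import numpy as np
from quanestimation import QVTB  # assumed package-level export

xspan = np.linspace(-np.pi/4, np.pi/4, 100)
sigma = 0.2
p = np.exp(-xspan**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))
dp = -xspan/sigma**2*p

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
dpsi = lambda xi: np.array([[0.0], [1j*np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]
drho = [dpsi(xi) @ psi(xi).conj().T + psi(xi) @ dpsi(xi).conj().T for xi in xspan]

print(QVTB([xspan], p, dp, rho, drho, LDtype="SLD"))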

Quantum Ziv-Zakai bound

Calculation of the quantum Ziv-Zakai bound (QZZB). The expression of the QZZB with a prior distribution \(p(x)\) in a finite regime \([\alpha,\beta]\) is

\[\begin{aligned} \mathrm{var}(\hat{x},\{\Pi_y\}) \geq & \frac{1}{2}\int_0^\infty \mathrm{d}\tau\tau \mathcal{V}\int_{-\infty}^{\infty} \mathrm{d}x\min\!\left\{p(x), p(x+\tau)\right\} \nonumber \\ & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right). \end{aligned}\]
Symbols
  • \(||\cdot||\): the trace norm
  • \(\mathcal{V}\): the "valley-filling" operator satisfying \(\mathcal{V}f(\tau)=\max_{h\geq 0}f(\tau+h)\).
  • \(\rho(x)\): the parameterized density matrix.

Parameters:

Name Type Description Default
x list

The regimes of the parameters for the integral.

required
p ndarray

The prior distribution as a multidimensional array.

required
rho list

Parameterized density matrix as a multidimensional list.

required
eps float

Machine epsilon. Defaults to 1e-8.

1e-08

Returns:

Type Description
float

Quantum Ziv-Zakai bound (QZZB).

Raises:

Type Description
ValueError

If the length of x and p do not match.

Source code in quanestimation/BayesianBound/ZivZakai.py
def QZZB(x, p, rho, eps=1e-8):
    r"""
    Calculation of the quantum Ziv-Zakai bound (QZZB). The expression of QZZB with a 
    prior distribution p(x) in a finite regime $[\alpha,\beta]$ is

    \begin{aligned}
        \mathrm{var}(\hat{x},\{\Pi_y\}) \geq &  \frac{1}{2}\int_0^\infty \mathrm{d}\tau\tau
        \mathcal{V}\int_{-\infty}^{\infty} \mathrm{d}x\min\!\left\{p(x), p(x+\tau)\right\} \nonumber \\
        & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right).
    \end{aligned}

    Symbols:
        - $||\cdot||$: the trace norm
        - $\mathcal{V}$: the "valley-filling" operator satisfying $\mathcal{V}f(\tau)=\max_{h\geq 0}f(\tau+h)$. 
        - $\rho(x)$: the parameterized density matrix.

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        p (np.ndarray): 
            The prior distribution as a multidimensional array.
        rho (list): 
            Parameterized density matrix as a multidimensional list.
        eps (float, optional): 
            Machine epsilon. Defaults to 1e-8.

    Returns:
        (float): 
            Quantum Ziv-Zakai bound (QZZB).

    Raises:
        ValueError: 
            If the length of x and p do not match.
    """

    if type(x[0]) == list or type(x[0]) == np.ndarray:
        x = x[0]
    p_num = len(p)
    tau = [xi - x[0] for xi in x]
    f_tau = np.zeros(p_num)
    for i in range(p_num):
        arr = [
            np.real(2 * min(p[j], p[j + i]) * helstrom_dm(rho[j], rho[j + i], eps))
            for j in range(p_num - i)
        ]
        f_tp = simpson(arr, x[0 : p_num - i])
        f_tau[i] = f_tp
    arr2 = [tau[m] * max(f_tau[m:]) for m in range(p_num)]
    I = simpson(arr2, tau)
    return 0.5 * I
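
A minimal sketch, again assuming `QZZB` is exported at the package level; only the prior and the parameterized states are needed.

import numpy as np
from quanestimation import QZZB  # assumed package-level export

xspan = np.linspace(-np.pi/4, np.pi/4, 100)
sigma = 0.2
p = np.exp(-xspan**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]

print(QZZB([xspan], p, rho))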

Bayesian estimation

Maximum a posteriori probability (MAP)

Bayesian estimation. The prior distribution is updated via the posterior distribution obtained by Bayes' rule, and the estimated values of the parameters are updated via the expectation value of the distribution or the maximum a posteriori probability (MAP).

Parameters:

Name Type Description Default
x list

The regimes of the parameters for the integral.

required
p ndarray

The prior distribution as a multidimensional array.

required
rho list

Parameterized density matrix as a multidimensional list.

required
y ndarray

The experimental results obtained in practice.

required
M list

A set of positive operator-valued measure (POVM) elements. Defaults to a rank-one symmetric informationally complete POVM (SIC-POVM).

[]
estimator str

Estimator for the Bayesian estimation. Options are: "mean" (default) - the expectation value of the distribution; "MAP" - the maximum a posteriori probability.

'mean'
savefile bool

Whether to save all posterior distributions. If True, generates "pout.npy" and "xout.npy" containing all posterior distributions and estimated values across iterations. If False, only saves the final posterior distribution and all estimated values. Defaults to False.

False

Returns:

Type Description
tuple

pout (np.ndarray): The posterior distribution in the final iteration.

xout (float/list): The estimated values in the final iteration.

Raises:

Type Description
TypeError

If M is not a list.

ValueError

If estimator is not "mean" or "MAP".

Note

SIC-POVM is calculated by the Weyl-Heisenberg covariant SIC-POVM fiducial state, which can be downloaded from http://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/BayesianBound/BayesEstimation.py
def Bayes(x, p, rho, y, M=[], estimator="mean", savefile=False):
    """
    Bayesian estimation. The prior distribution is updated via the posterior distribution
    obtained by Bayes' rule, and the estimated values of the parameters are updated via
    the expectation value of the distribution or the maximum a posteriori probability (MAP).

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        p (np.ndarray): 
            The prior distribution as a multidimensional array.
        rho (list): 
            Parameterized density matrix as a multidimensional list.
        y (np.ndarray): 
            The experimental results obtained in practice.
        M (list, optional): 
            A set of positive operator-valued measure (POVM). Defaults to a set of rank-one 
            symmetric informationally complete POVM (SIC-POVM).
        estimator (str, optional): 
            Estimator for the Bayesian estimation. Options are:
                "mean" (default) - The expectation value of the distribution.
                "MAP" - Maximum a posteriori probability.
        savefile (bool, optional): 
            Whether to save all posterior distributions. If True, generates "pout.npy" and 
            "xout.npy" containing all posterior distributions and estimated values across 
            iterations. If False, only saves the final posterior distribution and all 
            estimated values. Defaults to False.

    Returns:
        (tuple): 
            pout (np.ndarray): 
                The posterior distribution in the final iteration.

            xout (float/list): 
                The estimated values in the final iteration.

    Raises:
        TypeError: 
            If `M` is not a list.
        ValueError: 
            If estimator is not "mean" or "MAP".

    Note: 
        SIC-POVM is calculated by the Weyl-Heisenberg covariant SIC-POVM fiducial state 
        which can be downloaded from [here](http://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    para_num = len(x)
    max_episode = len(y)
    if para_num == 1:
        #### single parameter scenario ####
        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")
        if savefile == False:
            x_out = []
            if estimator == "mean":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx = np.zeros(len(x[0]))
                    for xi in range(len(x[0])):
                        p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                        pyx[xi] = p_tp
                    arr = [pyx[m] * p[m] for m in range(len(x[0]))]
                    py = simpson(arr, x[0])
                    p_update = pyx * p / py
                    p = p_update
                    mean = simpson([p[m]*x[0][m] for m in range(len(x[0]))], x[0])
                    x_out.append(mean)
            elif estimator == "MAP":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx = np.zeros(len(x[0]))
                    for xi in range(len(x[0])):
                        p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                        pyx[xi] = p_tp
                    arr = [pyx[m] * p[m] for m in range(len(x[0]))]
                    py = simpson(arr, x[0])
                    p_update = pyx * p / py
                    p = p_update
                    indx = np.where(p == max(p))[0][0]
                    x_out.append(x[0][indx])
            else:
                raise ValueError(
                "{!r} is not a valid value for estimator, supported values are 'mean' and 'MAP'.".format(estimator))
            np.save("pout", p)
            np.save("xout", x_out)
            return p, x_out[-1]
        else:
            p_out, x_out = [], []
            if estimator == "mean":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx = np.zeros(len(x[0]))
                    for xi in range(len(x[0])):
                        p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                        pyx[xi] = p_tp
                    arr = [pyx[m] * p[m] for m in range(len(x[0]))]
                    py = simpson(arr, x[0])
                    p_update = pyx * p / py
                    p = p_update
                    mean = simpson([p[m]*x[0][m] for m in range(len(x[0]))], x[0])
                    p_out.append(p)
                    x_out.append(mean)
            elif estimator == "MAP":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx = np.zeros(len(x[0]))
                    for xi in range(len(x[0])):
                        p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                        pyx[xi] = p_tp
                    arr = [pyx[m] * p[m] for m in range(len(x[0]))]
                    py = simpson(arr, x[0])
                    p_update = pyx * p / py
                    p = p_update
                    indx = np.where(p == max(p))[0][0]
                    p_out.append(p)
                    x_out.append(x[0][indx])
            else:
                raise ValueError(
                "{!r} is not a valid value for estimator, supported values are 'mean' and 'MAP'.".format(estimator))
            np.save("pout", p_out)
            np.save("xout", x_out)
            return p, x_out[-1]
    else:
        #### multiparameter scenario ####
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        rho_ext = extract_ele(rho, para_num)

        p_list, rho_list = [], []
        for p_ele, rho_ele in zip(p_ext, rho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)

        dim = len(rho_list[0])
        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        if savefile == False:
            x_out = []
            if estimator == "mean":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx_list = np.zeros(len(p_list))
                    for xi in range(len(p_list)):
                        p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                        pyx_list[xi] = p_tp
                    pyx = pyx_list.reshape(p_shape)
                    arr = p * pyx
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    py = arr
                    p_update = p * pyx / py
                    p = p_update

                    mean = integ(x, p)
                    x_out.append(mean)
            elif estimator == "MAP":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx_list = np.zeros(len(p_list))
                    for xi in range(len(p_list)):
                        p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                        pyx_list[xi] = p_tp
                    pyx = pyx_list.reshape(p_shape)
                    arr = p * pyx
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    py = arr
                    p_update = p * pyx / py
                    p = p_update

                    indx = np.where(np.array(p) == np.max(np.array(p)))
                    x_out.append([x[i][indx[i][0]] for i in range(para_num)])
            else:
                raise ValueError(
                "{!r} is not a valid value for estimator, supported values are 'mean' and 'MAP'.".format(estimator))
            np.save("pout", p)
            np.save("xout", x_out)
            return p, x_out[-1]
        else:
            p_out, x_out = [], []
            if estimator == "mean":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx_list = np.zeros(len(p_list))
                    for xi in range(len(p_list)):
                        p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                        pyx_list[xi] = p_tp
                    pyx = pyx_list.reshape(p_shape)
                    arr = p * pyx
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    py = arr
                    p_update = p * pyx / py
                    p = p_update

                    mean = integ(x, p)
                    p_out.append(p)
                    x_out.append(mean)
            elif estimator == "MAP":
                for mi in range(max_episode):
                    res_exp = int(y[mi])
                    pyx_list = np.zeros(len(p_list))
                    for xi in range(len(p_list)):
                        p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                        pyx_list[xi] = p_tp
                    pyx = pyx_list.reshape(p_shape)
                    arr = p * pyx
                    for si in reversed(range(para_num)):
                        arr = simpson(arr, x[si])
                    py = arr
                    p_update = p * pyx / py
                    p = p_update

                    indx = np.where(np.array(p) == np.max(np.array(p)))
                    p_out.append(p)
                    x_out.append([x[i][indx[i][0]] for i in range(para_num)])
            else:
                raise ValueError(
                "{!r} is not a valid value for estimator, supported values are 'mean' and 'MAP'.".format(estimator))
            np.save("pout", p_out)
            np.save("xout", x_out)
            return p, x_out[-1]
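
A minimal single-parameter sketch, assuming `Bayes` is exported at the package level. The measurement record `y` is simulated here from a "true" phase; note that the routine also writes `pout.npy` and `xout.npy` to the working directory, as described above.

import numpy as np
from quanestimation import Bayes  # assumed package-level export

# qubit phase model on [0.2, 0.8] with a flat prior, measured in the sigma_x basis
xspan = np.linspace(0.2, 0.8, 200)
p = np.ones(len(xspan))/(xspan[-1] - xspan[0])

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]

plus = np.array([[1.0], [1.0]])/np.sqrt(2)
minus = np.array([[1.0], [-1.0]])/np.sqrt(2)
M = [plus @ plus.conj().T, minus @ minus.conj().T]

# simulate 500 measurement outcomes at the "true" phase x = 0.5
rng = np.random.default_rng(0)
p_plus = np.cos(0.5/2)**2
y = rng.choice(2, size=500, p=[p_plus, 1 - p_plus])

pout, xout = Bayes([xspan], p, rho, y, M=M, estimator="MAP")
print(xout)  # posterior-mode estimate, should land near 0.5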

Maximum likelihood estimation (MLE)

Maximum likelihood estimation (MLE) for parameter estimation.

Parameters:

Name Type Description Default
x list

The regimes of the parameters for the integral.

required
rho list

Parameterized density matrix as a multidimensional list.

required
y ndarray

The experimental results obtained in practice.

required
M list

A set of positive operator-valued measure (POVM) elements. Defaults to a rank-one symmetric informationally complete POVM (SIC-POVM).

[]
savefile bool

Whether to save all likelihood functions. If True, generates "Lout.npy" and "xout.npy" containing all likelihood functions and estimated values across iterations. If False, only saves the final likelihood function and all estimated values. Defaults to False.

False

Returns:

Type Description
tuple

Lout (np.ndarray): The likelihood function in the final iteration.

xout (float/list): The estimated values in the final iteration.

Raises:

Type Description
TypeError

If M is not a list.

Note

SIC-POVM is calculated by the Weyl-Heisenberg covariant SIC-POVM fiducial state, which can be downloaded from http://www.physics.umb.edu/Research/QBism/solutions.html.

Source code in quanestimation/BayesianBound/BayesEstimation.py
def MLE(x, rho, y, M=[], savefile=False):
    """
    Maximum likelihood estimation (MLE) for parameter estimation.

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        rho (list): 
            Parameterized density matrix as a multidimensional list.
        y (np.ndarray): 
            The experimental results obtained in practice.
        M (list, optional): 
            A set of positive operator-valued measure (POVM). Defaults to a set of rank-one 
            symmetric informationally complete POVM (SIC-POVM).
        savefile (bool, optional): 
            Whether to save all likelihood functions. If True, generates "Lout.npy" and 
            "xout.npy" containing all likelihood functions and estimated values across 
            iterations. If False, only saves the final likelihood function and all 
            estimated values. Defaults to False.

    Returns:
        (tuple): 
            Lout (np.ndarray): 
                The likelihood function in the final iteration.

            xout (float/list): 
                The estimated values in the final iteration.

    Raises:
        TypeError: If `M` is not a list.

    Note: 
        SIC-POVM is calculated by the Weyl-Heisenberg covariant SIC-POVM fiducial state 
        which can be downloaded from [here](http://www.physics.umb.edu/Research/QBism/solutions.html).
    """

    para_num = len(x)
    max_episode = len(y)
    if para_num == 1:
        #### single parameter scenario ####
        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        if savefile == False:
            x_out = []
            L_out = np.ones(len(x[0]))
            for mi in range(max_episode):
                res_exp = int(y[mi])
                for xi in range(len(x[0])):
                    p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                    L_out[xi] = L_out[xi] * p_tp
                indx = np.where(L_out == max(L_out))[0][0]
                x_out.append(x[0][indx])
            np.save("Lout", L_out)
            np.save("xout", x_out)

            return L_out, x_out[-1]
        else:
            L_out, x_out = [], []
            L_tp = np.ones(len(x[0]))
            for mi in range(max_episode):
                res_exp = int(y[mi])
                for xi in range(len(x[0])):
                    p_tp = np.real(np.trace(np.dot(rho[xi], M[res_exp])))
                    L_tp[xi] = L_tp[xi] * p_tp
                indx = np.where(L_tp == max(L_tp))[0][0]
                L_out.append(L_tp)
                x_out.append(x[0][indx])

            np.save("Lout", L_out)
            np.save("xout", x_out)
            return L_tp, x_out[-1]
    else:
        #### multiparameter scenario ####
        p_shape = []
        for i in range(para_num):
            p_shape.append(len(x[i]))
        rho_ext = extract_ele(rho, para_num)

        rho_list = []
        for rho_ele in rho_ext:
            rho_list.append(rho_ele)

        dim = len(rho_list[0])
        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        if savefile == False:
            x_out = []
            L_list = np.ones(len(rho_list))
            for mi in range(max_episode):
                res_exp = int(y[mi])
                for xi in range(len(rho_list)):
                    p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                    L_list[xi] = L_list[xi] * p_tp
                L_out = L_list.reshape(p_shape)
                indx = np.where(L_out == np.max(L_out))
                x_out.append([x[i][indx[i][0]] for i in range(para_num)])
            np.save("Lout", L_out)
            np.save("xout", x_out)

            return L_out, x_out[-1]
        else:
            L_out, x_out = [], []
            L_list = np.ones(len(rho_list))
            for mi in range(max_episode):
                res_exp = int(y[mi])
                for xi in range(len(rho_list)):
                    p_tp = np.real(np.trace(np.dot(rho_list[xi], M[res_exp])))
                    L_list[xi] = L_list[xi] * p_tp
                L_tp = L_list.reshape(p_shape)
                indx = np.where(L_tp == np.max(L_tp))
                L_out.append(L_tp)
                x_out.append([x[i][indx[i][0]] for i in range(para_num)])

            np.save("Lout", L_out)
            np.save("xout", x_out)
            return L_tp, x_out[-1]
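
A minimal sketch under the same assumptions as the `Bayes` example (package-level import, simulated \(\sigma_x\) data for a qubit phase model).

import numpy as np
from quanestimation import MLE  # assumed package-level export

xspan = np.linspace(0.2, 0.8, 200)
psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]

plus = np.array([[1.0], [1.0]])/np.sqrt(2)
minus = np.array([[1.0], [-1.0]])/np.sqrt(2)
M = [plus @ plus.conj().T, minus @ minus.conj().T]

# simulate 500 sigma_x outcomes at the "true" phase x = 0.5
rng = np.random.default_rng(0)
p_plus = np.cos(0.5/2)**2
y = rng.choice(2, size=500, p=[p_plus, 1 - p_plus])

Lout, xout = MLE([xspan], rho, y, M=M)
print(xout)  # maximizer of the likelihood, should land near 0.5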

Average Bayesian cost (BayesCost)

Calculation of the average Bayesian cost with a quadratic cost function.

Parameters:

Name Type Description Default
x list

The regimes of the parameters for the integral.

required
p array

The prior distribution as a multidimensional array.

required
xest list

The estimators, i.e., the estimated parameter values assigned to each measurement outcome.

required
rho list

Parameterized density matrix as a multidimensional list.

required
M list

A set of positive operator-valued measure (POVM) elements.

required
W array

Weight matrix. Defaults to an identity matrix.

[]
eps float

Machine epsilon.

1e-08

Returns:

Type Description
float

The average Bayesian cost.

Raises:

Type Description
TypeError

If M is not a list.

Source code in quanestimation/BayesianBound/BayesEstimation.py
def BayesCost(x, p, xest, rho, M, W=[], eps=1e-8):
    """
    Calculation of the average Bayesian cost with a quadratic cost function.

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        p (array): 
            The prior distribution as a multidimensional array.
        xest (list): 
            The estimators, i.e., the estimated parameter values assigned to each measurement outcome.
        rho (list): 
            Parameterized density matrix as a multidimensional list.
        M (list): 
            A set of positive operator-valued measure (POVM).
        W (array, optional): 
            Weight matrix. Defaults to an identity matrix.
        eps (float, optional): 
            Machine epsilon.

    Returns:
        (float): 
            The average Bayesian cost.

    Raises:
        TypeError: 
            If `M` is not a list.
    """
    para_num = len(x)
    if para_num == 1:
        # single-parameter scenario
        if M == []:
            M = SIC(len(rho[0]))
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")
        p_num = len(x[0])
        value = [p[i]*sum([np.trace(np.dot(rho[i], M[mi]))*(x[0][i]-xest[mi][0])**2 for mi in range(len(M))]) for i in range(p_num)]
        C = simpson(value, x[0])
        return np.real(C)
    else:
        # multi-parameter scenario
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        rho_ext = extract_ele(rho, para_num)

        p_list, rho_list = [], []
        for p_ele, rho_ele in zip(p_ext, rho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)

        x_pro = product(*x)
        x_list = []
        for x_ele in x_pro:
            x_list.append([x_ele[i] for i in range(para_num)])

        dim = len(rho_list[0])
        p_num = len(p_list)

        if W == []:
            W = np.identity(para_num)

        if M == []:
            M = SIC(dim)
        else:
            if type(M) != list:
                raise TypeError("Please make sure M is a list!")

        value = [0.0 for i in range(p_num)]
        for i in range(p_num):
            x_tp = np.array(x_list[i])
            xCx = 0.0
            for mi in range(len(M)):
                xCx += np.trace(np.dot(rho_list[i], M[mi]))*np.dot((x_tp-xest[mi]).reshape(1, -1), np.dot(W, (x_tp-xest[mi]).reshape(-1, 1)))[0][0]
            value[i] = p_list[i]*xCx
        C = np.array(value).reshape(p_shape)
        for si in reversed(range(para_num)):
            C = simpson(C, x[si])
        return np.real(C)
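
A minimal sketch, assuming `BayesCost` is exported at the package level. The estimator list `xest` is purely illustrative: it assigns one guessed parameter value to each POVM outcome.

import numpy as np
from quanestimation import BayesCost  # assumed package-level export

xspan = np.linspace(0.2, 0.8, 200)
p = np.ones(len(xspan))/(xspan[-1] - xspan[0])  # flat prior

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]

plus = np.array([[1.0], [1.0]])/np.sqrt(2)
minus = np.array([[1.0], [-1.0]])/np.sqrt(2)
M = [plus @ plus.conj().T, minus @ minus.conj().T]

# crude two-outcome estimator: outcome 0 -> x = 0.3, outcome 1 -> x = 0.7
xest = [[0.3], [0.7]]
print(BayesCost([xspan], p, xest, rho, M))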

Bayesian cost bound (BCB)

Calculation of the Bayesian cost bound with a quadratic cost function.

Parameters:

Name Type Description Default
x list

The regimes of the parameters for the integral.

required
p array

The prior distribution as a multidimensional array.

required
rho list

Parameterized density matrix as a multidimensional list.

required
W array

Weight matrix. Defaults to an identity matrix.

[]
eps float

Machine epsilon. Defaults to 1e-8.

1e-08

Returns:

Type Description
float

The value of the minimum Bayesian cost.

Note

This function calculates the Bayesian cost bound for parameter estimation.

Source code in quanestimation/BayesianBound/BayesEstimation.py
def BCB(x, p, rho, W=[], eps=1e-8):
    """
    Calculation of the Bayesian cost bound with a quadratic cost function.

    Args:
        x (list): 
            The regimes of the parameters for the integral.
        p (array): 
            The prior distribution as a multidimensional array.
        rho (list): 
            Parameterized density matrix as a multidimensional list.
        W (array, optional): 
            Weight matrix. Defaults to an identity matrix.
        eps (float, optional): 
            Machine epsilon. Defaults to 1e-8.

    Returns:
        (float): 
            The value of the minimum Bayesian cost.

    Note:
        This function calculates the Bayesian cost bound for parameter estimation.
    """
    para_num = len(x)
    if para_num == 1:
        # single-parameter scenario
        dim = len(rho[0])
        p_num = len(x[0])
        value = [p[i]*x[0][i]**2 for i in range(p_num)]
        delta2_x = simpson(value, x[0])
        rho_avg = np.zeros((dim, dim), dtype=np.complex128)
        rho_pri = np.zeros((dim, dim), dtype=np.complex128)
        for di in range(dim):
            for dj in range(dim):
                rho_avg_arr = [p[m]*rho[m][di][dj] for m in range(p_num)]
                rho_pri_arr = [p[n]*x[0][n]*rho[n][di][dj] for n in range(p_num)]
                rho_avg[di][dj] = simpson(rho_avg_arr, x[0])
                rho_pri[di][dj] = simpson(rho_pri_arr, x[0])
        Lambda = Lambda_avg(rho_avg, [rho_pri], eps=eps)
        minBC = delta2_x-np.real(np.trace(np.dot(np.dot(rho_avg, Lambda[0]), Lambda[0])))
        return minBC
    else:
        # multi-parameter scenario
        p_shape = np.shape(p)
        p_ext = extract_ele(p, para_num)
        rho_ext = extract_ele(rho, para_num)

        p_list, rho_list = [], []
        for p_ele, rho_ele in zip(p_ext, rho_ext):
            p_list.append(p_ele)
            rho_list.append(rho_ele)

        dim = len(rho_list[0])
        p_num = len(p_list)

        x_pro = product(*x)
        x_list = []
        for x_ele in x_pro:
            x_list.append([x_ele[i] for i in range(para_num)])

        if W == []:
            W = np.identity(para_num)

        value = [0.0 for i in range(p_num)]
        for i in range(p_num):
            x_tp = np.array(x_list[i])
            xCx = np.dot(x_tp.reshape(1, -1), np.dot(W, x_tp.reshape(-1, 1)))[0][0]
            value[i] = p_list[i]*xCx
        delta2_x = np.array(value).reshape(p_shape)
        for si in reversed(range(para_num)):
            delta2_x = simpson(delta2_x, x[si])
        rho_avg = np.zeros((dim, dim), dtype=np.complex128)
        rho_pri = [np.zeros((dim, dim), dtype=np.complex128) for i in range(para_num)]
        for di in range(dim):
            for dj in range(dim):
                rho_avg_arr = [p_list[m]*rho_list[m][di][dj] for m in range(p_num)]
                rho_avg_tp = np.array(rho_avg_arr).reshape(p_shape)
                for si in reversed(range(para_num)):
                    rho_avg_tp = simpson(rho_avg_tp, x[si])
                rho_avg[di][dj] = rho_avg_tp

                for para_i in range(para_num):
                    rho_pri_arr = [p_list[n]*x_list[n][para_i]*rho_list[n][di][dj] for n in range(p_num)]
                    rho_pri_tp = np.array(rho_pri_arr).reshape(p_shape)
                    for si in reversed(range(para_num)):
                        rho_pri_tp = simpson(rho_pri_tp, x[si])

                    rho_pri[para_i][di][dj] = rho_pri_tp
        Lambda = Lambda_avg(rho_avg, rho_pri, eps=eps)
        Mat = np.zeros((para_num, para_num), dtype=np.complex128)
        for para_m in range(para_num):
            for para_n in range(para_num):
                Mat += W[para_m][para_n]*np.dot(Lambda[para_m], Lambda[para_n])

        minBC = delta2_x-np.real(np.trace(np.dot(rho_avg, Mat)))
        return minBC
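
A minimal sketch, assuming `BCB` is exported at the package level; only the parameter grid, the prior, and the parameterized states are required.

import numpy as np
from quanestimation import BCB  # assumed package-level export

xspan = np.linspace(0.2, 0.8, 200)
p = np.ones(len(xspan))/(xspan[-1] - xspan[0])  # flat prior

psi = lambda xi: np.array([[1.0], [np.exp(1j*xi)]])/np.sqrt(2)
rho = [psi(xi) @ psi(xi).conj().T for xi in xspan]

print(BCB([xspan], p, rho))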

Common utilities

Bayes input

Generate input variables for Bayesian estimation.

Parameters:

Name Type Description Default
x array

Parameter regimes

required
func callable

Function returning H or K

required
dfunc callable

Function returning dH or dK

required
channel str

"dynamics" or "Kraus" (default: "dynamics")

'dynamics'

Returns:

Type Description
tuple

Tuple of (H_list, dH_list) or (K_list, dK_list)

Raises:

Type Description
ValueError

For invalid channel.

Source code in quanestimation/Common/Common.py
def BayesInput(x, func, dfunc, channel="dynamics"):
    """
    Generate input variables for Bayesian estimation.

    Args:
        x (np.array): 
            Parameter regimes
        func (callable): 
            Function returning H or K
        dfunc (callable): 
            Function returning dH or dK
        channel (str, optional): 
            "dynamics" or "Kraus" (default: "dynamics")

    Returns:
        (tuple): 
            Tuple of (H_list, dH_list) or (K_list, dK_list)

    Raises:
        ValueError: 
            For invalid channel.
    """
    x_all = product(*x)
    if channel == "dynamics":
        H_list, dH_list = [], []
        for xi in x_all:
            H_list.append(func(*xi))
            dH_list.append(dfunc(*xi))
        return H_list, dH_list
    elif channel == "Kraus":
        K_list, dK_list = [], []
        for xi in x_all:
            K_list.append(func(*xi))
            dK_list.append(dfunc(*xi))
        return K_list, dK_list
    else:
        raise ValueError(
            "{!r} is not a valid channel. Supported values: "
            "'dynamics' or 'Kraus'.".format(channel)
        )
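
A minimal sketch for the "dynamics" channel, assuming `BayesInput` is exported at the package level; `H` and `dH` below are illustrative one-parameter Hamiltonian functions, not part of the library.

import numpy as np
from quanestimation import BayesInput  # assumed package-level export

# illustrative Hamiltonian H(x) = x * sigma_z / 2 and its derivative with respect to x
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = lambda x: 0.5 * x * sz
dH = lambda x: [0.5 * sz]

xspan = [np.linspace(0.0, np.pi, 50)]
H_list, dH_list = BayesInput(xspan, H, dH, channel="dynamics")
print(len(H_list))  # one Hamiltonian per grid point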

SIC-POVM

Generate SIC-POVM for given dimension.

Parameters:

Name Type Description Default
dim int

Dimension of the system.

required

Returns:

Type Description
list

List of SIC-POVM elements.

Raises:

Type Description
ValueError

If dimension > 151.

Source code in quanestimation/Common/Common.py
def SIC(dim):
    """
    Generate SIC-POVM for given dimension.

    Args:
        dim (int): 
            Dimension of the system.

    Returns:
        (list): 
            List of SIC-POVM elements.

    Raises:
        ValueError: 
            If dimension > 151.
    """
    if dim <= 151:
        file_path = os.path.join(
            os.path.dirname(os.path.dirname(__file__)),
            "sic_fiducial_vectors/d%d.txt" % (dim),
        )
        data = np.loadtxt(file_path)
        fiducial = data[:, 0] + data[:, 1] * 1.0j
        fiducial = np.array(fiducial).reshape(len(fiducial), 1)
        M = sic_povm(fiducial)
        return M
    else:
        raise ValueError(
            "The dimension of the space should be less or equal to 151."
        )
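
A minimal sketch, assuming `SIC` is exported at the package level. For a qubit it returns the four rank-one SIC-POVM elements, which should resolve the identity.

from quanestimation import SIC  # assumed package-level export

M = SIC(2)
print(len(M))           # dim**2 = 4 elements
print(sum(M).round(6))  # should be (close to) the 2x2 identity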

SU(\(N\)) generators

Generate sorted SU(N) generators.

Parameters:

Name Type Description Default
n int

Dimension of the system.

required

Returns:

Type Description
list

List of SU(N) generators.

Source code in quanestimation/Common/Common.py
def suN_generator(n):
    """
    Generate sorted SU(N) generators.

    Args:
        n (int): 
            Dimension of the system. 

    Returns:
        (list): 
            List of SU(N) generators.
    """
    symm, anti_symm, diag = suN_unsorted(n)
    if n == 2:
        return [symm[0], anti_symm[0], diag[0]]
    else:
        Lambda = [0.0] * len(symm + anti_symm + diag)

        Lambda[0], Lambda[1], Lambda[2] = symm[0], anti_symm[0], diag[0]

        repeat_times = 2
        m1, n1, k1 = 0, 3, 1
        while True:
            m1 += n1
            j, l = 0, 0
            for i in range(repeat_times):
                Lambda[m1 + j] = symm[k1]
                Lambda[m1 + j + 1] = anti_symm[k1]
                j += 2
                k1 += 1

            repeat_times += 1
            n1 = n1 + 2
            if k1 == len(symm):
                break

        m2, n2, k2 = 2, 5, 1
        while True:
            m2 += n2
            Lambda[m2] = diag[k2]
            n2 = n2 + 2
            k2 = k2 + 1
            if k2 == len(diag):
                break
        return Lambda
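
A minimal sketch, assuming `suN_generator` is exported at the package level; for \(n=3\) the returned operators are the eight Gell-Mann matrices.

from quanestimation import suN_generator  # assumed package-level export

Lambda = suN_generator(3)
print(len(Lambda))  # n**2 - 1 = 8 generators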