Utils & Metrics

_clip

pyldl.algorithms.utils._clip(func)

_reduction

pyldl.algorithms.utils._reduction(func)

binaryzation

pyldl.algorithms.utils.binaryzation(y: ndarray, method='threshold', param: any | None = None) → ndarray

Transform a label distribution matrix into a logical (binary) label matrix.

Parameters:
  • y (np.ndarray) – Label distribution matrix (shape: \([n,\, l]\)).

  • method ({'threshold', 'topk'}, optional) –

    Binarization method, defaults to ‘threshold’. The available options are ‘threshold’ and ‘topk’; for details, see:

    [BIN-KWT+24]

    Zhiqiang Kou, Jing Wang, Jiawei Tang, Yuheng Jia, Boyu Shi, and Xin Geng. Exploiting multi-label correlation in label distribution learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 4326–4334. 2024. URL: https://doi.org/10.24963/ijcai.2024/478.

  • param (any, optional) – Parameter of the binarization method, defaults to None. If None, 0.5 is used for ‘threshold’ and \(\lfloor l / 2 \rfloor\) for ‘topk’.

Returns:

Logical label matrix (shape: \([n,\, l]\)).

Return type:

np.ndarray
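
A minimal usage sketch (the matrix below is illustrative; the exact 0/1 output follows the thresholding and top-k rules described in [BIN-KWT+24]):

    import numpy as np
    from pyldl.algorithms.utils import binaryzation

    # Illustrative label distribution matrix (each row sums to 1).
    D = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.2, 0.6]])

    L_thr = binaryzation(D)                          # 'threshold' with the default param of .5
    L_top = binaryzation(D, method='topk', param=2)  # keep the two largest labels per instance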

kl_divergence

pyldl.algorithms.utils.kl_divergence(*args, reduction=<function average>)

pairwise_euclidean

pyldl.algorithms.utils.pairwise_euclidean(X: ndarray | Tensor, Y: ndarray | Tensor | None = None) → ndarray | Tensor

Pairwise Euclidean distance.

Parameters:
  • X (Union[np.ndarray, tf.Tensor]) – Matrix \(\boldsymbol{X}\) (shape: \([m_X,\, n_X]\)).

  • Y (Union[np.ndarray, tf.Tensor], optional) – Matrix \(\boldsymbol{Y}\) (shape: \([m_Y,\, n_Y]\)). If None, \(\boldsymbol{Y} = \boldsymbol{X}\) is used. Defaults to None.

Returns:

Pairwise Euclidean distance (shape: \([m_X,\, m_Y]\)).

Return type:

Union[np.ndarray, tf.Tensor]
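
For reference, a NumPy-only sketch of the computation (the pyldl version additionally accepts tf.Tensor inputs; this is an illustration, not the library's implementation):

    import numpy as np

    def pairwise_euclidean_ref(X, Y=None):
        """Reference computation: D[i, j] = ||X[i] - Y[j]||_2."""
        Y = X if Y is None else Y
        sq = (X ** 2).sum(axis=1)[:, None] + (Y ** 2).sum(axis=1)[None, :] - 2.0 * X @ Y.T
        return np.sqrt(np.maximum(sq, 0.0))  # clip tiny negatives caused by round-off

    X = np.random.rand(5, 3)
    D = pairwise_euclidean_ref(X)  # shape (5, 5) with a zero diagonal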

proj

pyldl.algorithms.utils.proj(Y: ndarray) → ndarray

Projection onto the probability simplex. This approach is proposed in [Con16].

Parameters:

Y (np.ndarray) – Matrix \(\boldsymbol{Y}\).

Returns:

The projection onto the probability simplex.

Return type:

np.ndarray
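
For intuition, a sketch of the classic sort-based simplex projection that [Con16] accelerates; this is an illustration only (in particular, whether proj operates row-wise or column-wise on \(\boldsymbol{Y}\) is not specified here):

    import numpy as np

    def simplex_projection_ref(v):
        """Euclidean projection of a vector v onto {x : x >= 0, sum(x) = 1}."""
        u = np.sort(v)[::-1]                 # sort in descending order
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1)
        return np.maximum(v + theta, 0.0)

    simplex_projection_ref(np.array([0.8, 0.6, -0.2]))  # -> [0.6, 0.4, 0.0]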

soft_thresholding

pyldl.algorithms.utils.soft_thresholding(A: ndarray, tau: float) → ndarray

Soft thresholding operation. It is defined as \(\text{soft}(\boldsymbol{A}, \tau) = \text{sgn}(\boldsymbol{A}) \odot \max\lbrace \lvert \boldsymbol{A} \rvert - \tau, 0 \rbrace\), where \(\odot\) denotes element-wise multiplication.

Parameters:
  • A (np.ndarray) – Matrix \(\boldsymbol{A}\).

  • tau (float) – \(\tau\).

Returns:

The result of soft thresholding operation.

Return type:

np.ndarray
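
The definition above translates directly into NumPy; a reference sketch (not necessarily pyldl's exact implementation):

    import numpy as np

    def soft_thresholding_ref(A, tau):
        """Element-wise shrinkage: sgn(A) * max(|A| - tau, 0)."""
        return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

    soft_thresholding_ref(np.array([[1.5, -0.3], [-2.0, 0.1]]), 0.5)
    # -> [[ 1.0, -0.0], [-1.5,  0.0]]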

solvel21

pyldl.algorithms.utils.solvel21(A: ndarray, tau: float) → ndarray

Solver for an \(\ell_{2,\,1}\)-norm regularized problem. This approach is proposed in [CY14].

The solution to the optimization problem \(\mathop{\arg\min}_{\boldsymbol{X}} \Vert \boldsymbol{X} - \boldsymbol{A} \Vert_\text{F}^2 + \tau \Vert \boldsymbol{X} \Vert_{2,\,1}\) is given by the following formula:

\[\begin{split}\vec{x}_{\bullet j}^{\ast} = \left\{ \begin{aligned} & \frac{\Vert \vec{a}_{\bullet j} \Vert - \tau}{\Vert \vec{a}_{\bullet j} \Vert} \vec{a}_{\bullet j}, & \tau \le \Vert \vec{a}_{\bullet j} \Vert \\ & 0, & \text{otherwise} \end{aligned} \right.\end{split}\]

where \(\vec{x}_{\bullet j}\) is the \(j\)-th column of matrix \(\boldsymbol{X}\), and \(\vec{a}_{\bullet j}\) is the \(j\)-th column of matrix \(\boldsymbol{A}\).

Parameters:
  • A (np.ndarray) – Matrix \(\boldsymbol{A}\).

  • tau (float) – \(\tau\).

Returns:

The solution to the optimization problem.

Return type:

np.ndarray
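
The column-wise rule above can be sketched in NumPy as follows (a reference illustration, not pyldl's exact code):

    import numpy as np

    def solve_l21_ref(A, tau):
        """Scale each column a_j by (||a_j|| - tau) / ||a_j||, or zero it when ||a_j|| < tau."""
        norms = np.linalg.norm(A, axis=0)
        safe = np.where(norms > 0, norms, 1.0)       # guard against division by zero
        scale = np.maximum(norms - tau, 0.0) / safe
        return A * scale                             # broadcasts the scale over columns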

svt

pyldl.algorithms.utils.svt(A: ndarray, tau: float) → ndarray

Singular value thresholding (SVT), as proposed in [CCS10].

The solution to the optimization problem \(\mathop{\arg\min}_{\boldsymbol{X}} \Vert \boldsymbol{X} - \boldsymbol{A} \Vert_\text{F}^2 + \tau \Vert \boldsymbol{X} \Vert_{\ast}\) is given by \(\boldsymbol{U} \max \lbrace \boldsymbol{\Sigma} - \tau, 0 \rbrace \boldsymbol{V}^\text{T}\), where \(\boldsymbol{A} = \boldsymbol{U} \boldsymbol{\Sigma} \boldsymbol{V}^\text{T}\) is the singular value decomposition of matrix \(\boldsymbol{A}\).

Parameters:
  • A (np.ndarray) – Matrix \(\boldsymbol{A}\).

  • tau (float) – \(\tau\).

Returns:

The solution to the optimization problem.

Return type:

np.ndarray
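
A reference sketch of the stated rule using NumPy's SVD (an illustration, not necessarily pyldl's exact implementation):

    import numpy as np

    def svt_ref(A, tau):
        """U * max(Sigma - tau, 0) * V^T, where A = U Sigma V^T."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt   # scale U's columns by the shrunk singular values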

artificial

pyldl.utils.artificial(X, a=1.0, b=0.5, c=0.2, d=1.0, w1=array([[4., 2., 1.]]), w2=array([[1., 2., 4.]]), w3=array([[1., 4., 2.]]), lambda1=0.01, lambda2=0.01)

load_dataset

pyldl.utils.load_dataset(name, dir='dataset')

make_ldl

pyldl.utils.make_ldl(n_samples=200, **kwargs)

plot_artificial

pyldl.utils.plot_artificial(n_samples=50, model=None, file_name=None, **kwargs)

random_missing

pyldl.utils.random_missing(y, missing_rate=0.9)
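
A hedged usage sketch of these helpers. The tuple return of make_ldl (a feature matrix and a label distribution matrix) and the masking behaviour of random_missing are assumptions inferred from the names and defaults above, not confirmed by the signatures:

    from pyldl.utils import make_ldl, random_missing

    # Assumption: make_ldl returns (features, label distributions).
    X, D = make_ldl(n_samples=200)

    # Assumption: random_missing hides 90% of the entries of D (the missing_rate default).
    D_incomplete = random_missing(D, missing_rate=0.9)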

canberra

pyldl.metrics.canberra(*args, reduction=<function average>)

chebyshev

pyldl.metrics.chebyshev(*args, reduction=<function average>)

clark

pyldl.metrics.clark(*args, reduction=<function average>)

cosine

pyldl.metrics.cosine(*args, reduction=<function average>)

dpa

pyldl.metrics.dpa(*args, reduction=<function average>)

error_probability

pyldl.metrics.error_probability(*args, reduction=<function average>)

euclidean

pyldl.metrics.euclidean(*args, reduction=<function average>)

fidelity

pyldl.metrics.fidelity(*args, reduction=<function average>)

intersection

pyldl.metrics.intersection(*args, reduction=<function average>)

kendall

pyldl.metrics.kendall(*args, reduction=<function average>)

mean_absolute_error

pyldl.metrics.mean_absolute_error(*args, reduction=<function average>)

score

pyldl.metrics.score(y: ndarray, y_pred: ndarray, metrics: list[str] | None = None, return_dict: bool = False)

sorensen

pyldl.metrics.sorensen(*args, reduction=<function average>)

spearman

pyldl.metrics.spearman(*args, reduction=<function average>)

squared_chi2

pyldl.metrics.squared_chi2(*args, reduction=<function average>)

zero_one_loss

pyldl.metrics.zero_one_loss(*args, reduction=<function average>)
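
A usage sketch for the metrics above. Per the score signature, the ground truth comes first; the string names passed to metrics are assumed to match the function names listed in this module:

    import numpy as np
    from pyldl.metrics import chebyshev, score

    D_true = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.5, 0.3]])
    D_pred = np.array([[0.5, 0.4, 0.1],
                       [0.3, 0.4, 0.3]])

    chebyshev(D_true, D_pred)  # a single metric, averaged over instances by the default reduction
    score(D_true, D_pred, metrics=['chebyshev', 'clark', 'canberra'], return_dict=True)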

References

[Con16]

Laurent Condat. Fast projection onto the simplex and the l1 ball. Mathematical Programming, 158(1):575–585, 2016. URL: https://doi.org/10.1007/s10107-015-0946-6.

[CY14]

Jinhui Chen and Jian Yang. Robust subspace segmentation via low-rank representation. IEEE Transactions on Cybernetics, 44(8):1432–1445, 2014. URL: https://doi.org/10.1109/TCYB.2013.2286106.

[CCS10]

Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010. URL: https://doi.org/10.1137/080738970.