LDL¶
BaseLDL¶
- class pyldl.algorithms.base.BaseLDL(random_state: int | None = None)¶
Let \(\mathcal{X} = \mathbb{R}^{q}\) denote the input space and \(\mathcal{Y} = \lbrace y_i \rbrace_{i=1}^{l}\) denote the label space. The description degree of \(y \in \mathcal{Y}\) to \(\boldsymbol{x} \in \mathcal{X}\) is denoted by \(d_{\boldsymbol{x}}^{y}\). Then the label distribution of \(\boldsymbol{x}\) is defined as \(\boldsymbol{d} = \lbrace d_{\boldsymbol{x}}^{y} \rbrace_{y \in \mathcal{Y}}\). Note that \(\boldsymbol{d}\) is under the constraints of probability simplex, i.e., \(\boldsymbol{d} \in \Delta^{l-1}\), where \(\Delta^{l-1} = \lbrace \boldsymbol{d} \in \mathbb{R}^{l} \,|\, \boldsymbol{d} \geq 0,\, \boldsymbol{d}^{\text{T}} \boldsymbol{1} = 1 \rbrace\). Given a training set of \(n\) samples \(\mathcal{S} = \lbrace (\boldsymbol{x}_i,\, \boldsymbol{d}_i) \rbrace_{i=1}^{n}\), the goal of LDL is to learn a conditional probability mass function \(p(\boldsymbol{d} \,|\, \boldsymbol{x})\).
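As a quick sanity check, the simplex constraint \(\boldsymbol{d} \in \Delta^{l-1}\) can be verified in a few lines of NumPy (toy data; `on_simplex` is an illustrative helper, not part of the pyldl API):

```python
import numpy as np

# Toy label distribution matrix: n = 3 samples, l = 4 labels.
# Each row d must lie on the probability simplex (non-negative, sums to 1).
D = np.array([
    [0.1, 0.2, 0.3, 0.4],
    [0.25, 0.25, 0.25, 0.25],
    [0.7, 0.1, 0.1, 0.1],
])

def on_simplex(D, tol=1e-9):
    """Check the constraints d >= 0 and d^T 1 = 1 for every row."""
    return bool(np.all(D >= 0) and np.allclose(D.sum(axis=1), 1.0, atol=tol))

print(on_simplex(D))  # True
```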
LRLDL¶
- class pyldl.algorithms._LRLDL(mode='threshold', param=None, random_state=None)¶
Base class for pyldl.algorithms.TLRLDL and pyldl.algorithms.TKLRLDL. ADMM is used as the optimization algorithm.
- _update_V()¶
Please note that Eq. (11) in paper [LDL-KWT+24] should be corrected to:
\[\boldsymbol{\Gamma}_1 \leftarrow \boldsymbol{\Gamma}_1 + \mu \left(\boldsymbol{W}\boldsymbol{X}^{\text{T}}\boldsymbol{O} - \boldsymbol{G}\right)\text{.}\]
- _update_W()¶
Please note that Eq. (8) in paper [LDL-KWT+24] should be corrected to:
\[\begin{split}\begin{aligned} \boldsymbol{W} \leftarrow & \left(\left(\mu \boldsymbol{G} + \boldsymbol{\Gamma}_1 + \boldsymbol{L}\right) \boldsymbol{O}^{\text{T}} \boldsymbol{X} + \boldsymbol{D}\boldsymbol{X} \right) \\ & \left( \boldsymbol{X}^{\text{T}}\boldsymbol{X} + 2 \lambda \boldsymbol{I} + (1+\mu) \boldsymbol{X}^{\text{T}}\boldsymbol{O}\boldsymbol{O}^{\text{T}}\boldsymbol{X} \right)^{-1}\text{,} \end{aligned}\end{split}\]where \(\boldsymbol{I}\) is the identity matrix.
And Eq. (10) should be corrected to:
\[\begin{split}\begin{aligned} \boldsymbol{O} \leftarrow & \left( (1+\mu) \boldsymbol{X}\boldsymbol{W}^{\text{T}} \left( \boldsymbol{X}\boldsymbol{W}^{\text{T}} \right)^{\text{T}} + 2 \lambda \boldsymbol{I} \right)^{-1} \\ & \boldsymbol{X}\boldsymbol{W}^{\text{T}} \left(\boldsymbol{L} + \mu \boldsymbol{G} - \boldsymbol{\Gamma}_1\right)\text{.} \end{aligned}\end{split}\]
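The three corrected updates transcribe directly into NumPy. The matrix shapes below are assumptions inferred from the equations (not the pyldl internals): \(\boldsymbol{X} \in \mathbb{R}^{n \times q}\), \(\boldsymbol{W} \in \mathbb{R}^{l \times q}\), \(\boldsymbol{O} \in \mathbb{R}^{n \times k}\), and \(\boldsymbol{G}\), \(\boldsymbol{L}\), \(\boldsymbol{\Gamma}_1 \in \mathbb{R}^{l \times k}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, l, k = 8, 5, 4, 3            # samples, features, labels, latent dim
X = rng.normal(size=(n, q))
D = rng.normal(size=(l, n))        # label matrix (labels x samples)
O = rng.normal(size=(n, k))
G = rng.normal(size=(l, k))
L = rng.normal(size=(l, k))
Gamma1 = np.zeros((l, k))
mu, lam = 1.0, 0.1

def update_W(X, D, O, G, L, Gamma1, mu, lam):
    """Corrected Eq. (8): closed-form ridge-style update for W."""
    lhs = (mu * G + Gamma1 + L) @ O.T @ X + D @ X
    rhs = X.T @ X + 2 * lam * np.eye(X.shape[1]) + (1 + mu) * X.T @ O @ O.T @ X
    return lhs @ np.linalg.inv(rhs)

def update_O(X, W, G, L, Gamma1, mu, lam):
    """Corrected Eq. (10): closed-form update for O."""
    XWt = X @ W.T
    lhs = (1 + mu) * XWt @ XWt.T + 2 * lam * np.eye(X.shape[0])
    return np.linalg.inv(lhs) @ XWt @ (L + mu * G - Gamma1)

def update_Gamma1(Gamma1, W, X, O, G, mu):
    """Corrected Eq. (11): dual ascent on the multiplier."""
    return Gamma1 + mu * (W @ X.T @ O - G)

W = update_W(X, D, O, G, L, Gamma1, mu, lam)
O = update_O(X, W, G, L, Gamma1, mu, lam)
Gamma1 = update_Gamma1(Gamma1, W, X, O, G, mu)
print(W.shape, O.shape, Gamma1.shape)  # (4, 5) (8, 3) (4, 3)
```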
TKLRLDL¶
- class pyldl.algorithms.TKLRLDL(param=None, random_state=None)¶
TKLRLDL is proposed in paper [LDL-KWT+24]. A top-\(k\) binarization method is used to generate the logical label matrix.
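A top-\(k\) binarization can be sketched in NumPy as follows (`topk_binarize` is an illustrative helper, not the pyldl implementation; ties are broken arbitrarily):

```python
import numpy as np

def topk_binarize(D, k=2):
    """Mark the k labels with the highest description degree per sample as
    relevant (1) and the rest as irrelevant (0) -- a top-k logical label matrix."""
    L = np.zeros_like(D, dtype=int)
    idx = np.argsort(D, axis=1)[:, -k:]   # indices of the k largest degrees
    np.put_along_axis(L, idx, 1, axis=1)
    return L

D = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.1, 0.2, 0.6, 0.1]])
print(topk_binarize(D, k=2))
# [[1 1 0 0]
#  [0 1 1 0]]
```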
TLRLDL¶
- class pyldl.algorithms.TLRLDL(param=None, random_state=None)¶
TLRLDL is proposed in paper [LDL-KWT+24]. A threshold-based binarization method is used to generate the logical label matrix.
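A threshold-based binarization can be sketched as follows (both the helper and the threshold value are illustrative, not pyldl's defaults):

```python
import numpy as np

def threshold_binarize(D, t=0.2):
    """Mark a label as relevant (1) when its description degree exceeds t."""
    return (D > t).astype(int)

D = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.1, 0.2, 0.6, 0.1]])
print(threshold_binarize(D, t=0.2))
# [[1 1 0 0]
#  [0 0 1 0]]
```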
LDL-LRR¶
- class pyldl.algorithms.LDL_LRR(*args, **kwargs)¶
LDL-LRR is proposed in paper [LDL-JSL+23]. BFGS is used as the optimization algorithm.
LDL-DPA¶
- class pyldl.algorithms.LDL_DPA(*args, **kwargs)¶
LDL-DPA is proposed in paper [LDL-JQLL24]. BFGS is used as the optimization algorithm.
CAD¶
- class pyldl.algorithms.CAD(*args, **kwargs)¶
CAD is proposed in paper [LDL-WZYY23].
QFD2¶
- class pyldl.algorithms.QFD2(*args, **kwargs)¶
QFD2 is proposed in paper [LDL-WZYY23].
CJS¶
- class pyldl.algorithms.CJS(*args, **kwargs)¶
CJS is proposed in paper [LDL-WZYY23].
DF-LDL¶
- class pyldl.algorithms.DF_LDL(estimator=None, random_state=None)¶
DF-LDL is proposed in paper [LDL-GGAT+21].
LDL-SCL¶
- class pyldl.algorithms.LDL_SCL(*args, **kwargs)¶
LDL-SCL is proposed in paper [LDL-ZJL18]. Adam is used as the optimizer.
See also:
[LDL-SCL-JLZ+21]Xiuyi Jia, Zechao Li, Xiang Zheng, Weiwei Li, and Sheng-Jun Huang. Label distribution learning with label correlations on local samples. IEEE Transactions on Knowledge and Data Engineering, 33(4):1619–1631, 2021. URL: https://doi.org/10.1109/TKDE.2019.2943337.
LDL-LCLR¶
- class pyldl.algorithms.LDL_LCLR(random_state: int | None = None)¶
LDL-LCLR is proposed in paper [LDL-RJLZ19]. ADMM is used as the optimization algorithm.
- _update_W()¶
Please note that Eq. (9) in paper [LDL-RJLZ19] should be corrected to:
\[\begin{split}\begin{aligned} \nabla_\boldsymbol{W} = & \boldsymbol{X}^{\text{T}} \left(\hat{\boldsymbol{D}} - \boldsymbol{D}\right) + 2 \lambda_1 \boldsymbol{W} - \boldsymbol{X}^{\text{T}} \left(\left(\hat{\boldsymbol{D}} - \hat{\boldsymbol{D}}^2\right) \odot \boldsymbol{\Gamma}_1\right) \boldsymbol{S}^{\text{T}} \\ - & \rho \boldsymbol{X}^{\text{T}} \left(\left(\hat{\boldsymbol{D}} - \hat{\boldsymbol{D}}^2\right) \odot \left(\boldsymbol{D} - \hat{\boldsymbol{D}}\boldsymbol{S} - \boldsymbol{E}\right)\right) \boldsymbol{S}^{\text{T}}\text{,} \end{aligned}\end{split}\]where \(\odot\) denotes element-wise multiplication.
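The corrected gradient can be transcribed into NumPy as below. The softmax parameterization of \(\hat{\boldsymbol{D}}\) and the matrix shapes are assumptions made for illustration; only the algebra follows the corrected equation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, l = 6, 4, 3
X = rng.normal(size=(n, q))
W = rng.normal(size=(q, l))
D = rng.dirichlet(np.ones(l), size=n)   # ground-truth label distributions
S = rng.normal(size=(l, l))             # label correlation matrix
E = np.zeros((n, l))                    # noise/error term
Gamma1 = np.zeros((n, l))               # Lagrange multiplier
lam1, rho = 0.01, 1.0

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

def grad_W(X, W, D, S, E, Gamma1, lam1, rho):
    """Corrected Eq. (9): gradient of the augmented Lagrangian w.r.t. W."""
    Dh = softmax(X @ W)                 # predicted distributions D-hat
    J = Dh - Dh ** 2                    # element-wise factor (D-hat - D-hat^2)
    g = X.T @ (Dh - D) + 2 * lam1 * W
    g -= X.T @ ((J * Gamma1) @ S.T)
    g -= rho * X.T @ ((J * (D - Dh @ S - E)) @ S.T)
    return g

print(grad_W(X, W, D, S, E, Gamma1, lam1, rho).shape)  # (4, 3)
```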
LDLSF¶
- class pyldl.algorithms.LDLSF(random_state: int | None = None)¶
LDLSF is proposed in paper [LDL-RJL+19]. ADMM is used as the optimization algorithm.
LDLLC¶
- class pyldl.algorithms.LDLLC(*args, **kwargs)¶
LDLLC is proposed in paper [LDL-JLLZ18]. BFGS is used as the optimization algorithm.
BCPNN¶
- class pyldl.algorithms.BCPNN(*args, **kwargs)¶
BCPNN is proposed in paper [LDL-YSS17]. BCPNN is based on CPNN.
See also:
[BCPNN-GYZ13] Xin Geng, Chao Yin, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2401–2412, 2013. URL: https://doi.org/10.1109/TPAMI.2013.51.
ACPNN¶
- class pyldl.algorithms.ACPNN(*args, **kwargs)¶
ACPNN is proposed in paper [LDL-YSS17]. ACPNN is based on CPNN.
See also:
[ACPNN-GYZ13] Xin Geng, Chao Yin, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2401–2412, 2013. URL: https://doi.org/10.1109/TPAMI.2013.51.
LDLF¶
- class pyldl.algorithms.LDLF(*args, **kwargs)¶
LDLF is proposed in paper [LDL-SZGY17]. The algorithm employs deep neural decision forests. Adam is used as the optimizer.
See also:
[LDLF-KFCB15] Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, and Samuel Rota Bulo. Deep neural decision forests. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1467–1475. 2015. URL: https://doi.org/10.1109/ICCV.2015.172.
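The core of a deep neural decision forest, soft routing at internal nodes plus a label distribution held at each leaf, can be sketched as follows (an illustrative single-tree version; in LDLF the split scores come from a neural network and several trees form a forest):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_path_probs(d):
    """d: (n, n_inner) left-routing probabilities of a complete binary tree
    in breadth-first order. Returns (n, n_inner + 1) leaf path probabilities."""
    n, n_inner = d.shape
    queue = [np.ones(n)]
    for i in range(n_inner):               # each internal node splits its mass
        p = queue.pop(0)
        queue.append(p * d[:, i])          # mass routed to the left child
        queue.append(p * (1.0 - d[:, i]))  # mass routed to the right child
    return np.stack(queue, axis=1)

rng = np.random.default_rng(0)
n, n_inner, l = 5, 3, 4                    # 3 internal nodes -> 4 leaves
scores = rng.normal(size=(n, n_inner))     # split scores (a neural net in LDLF)
pi = rng.dirichlet(np.ones(l), size=n_inner + 1)  # leaf label distributions

# Prediction: mix the leaf distributions by each sample's path probabilities.
pred = leaf_path_probs(sigmoid(scores)) @ pi
print(np.allclose(pred.sum(axis=1), 1.0))  # True: predictions stay on the simplex
```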
SA¶
- class pyldl.algorithms._SA(random_state=None)¶
Base class for pyldl.algorithms.SA_IIS and pyldl.algorithms.SA_BFGS. SA refers to specialized algorithms, where MaxEnt is employed as the model.
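The MaxEnt model predicts \(p(y \,|\, \boldsymbol{x}) \propto \exp(\boldsymbol{\theta}_y^{\text{T}} \boldsymbol{x})\), i.e., a softmax over linear scores. A minimal sketch (names are illustrative, not the pyldl API):

```python
import numpy as np

def maxent_predict(X, Theta):
    """MaxEnt model of the SA family: p(y | x) = exp(theta_y . x) / Z(x)."""
    Z = X @ Theta                        # (n, l) linear scores
    Z -= Z.max(axis=1, keepdims=True)    # shift for numerical stability
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))              # n = 5 samples, q = 3 features
Theta = rng.normal(size=(3, 4))          # l = 4 labels
P = maxent_predict(X, Theta)
print(P.shape, np.allclose(P.sum(axis=1), 1.0))  # (5, 4) True
```

SA-IIS and SA-BFGS share this model and differ only in how \(\boldsymbol{\Theta}\) is optimized.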
SA-BFGS¶
SA-IIS¶
- class pyldl.algorithms.SA_IIS(random_state=None)¶
SA-IIS is proposed in paper [LDL-Gen16]. IIS is used as the optimization algorithm. IIS-LLD is the early version of SA-IIS.
See also:
[SA-IIS-GYZ13] Xin Geng, Chao Yin, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2401–2412, 2013. URL: https://doi.org/10.1109/TPAMI.2013.51.
[SA-IIS-GSMZ10]Xin Geng, Kate Smith-Miles, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. In Proceedings of the AAAI Conference on Artificial Intelligence, 451–456. 2010. URL: https://doi.org/10.1609/aaai.v24i1.7657.
[SA-IIS-ZWG15]Zhaoxiang Zhang, Mo Wang, and Xin Geng. Crowd counting in public video surveillance by label distribution learning. Neurocomputing, 166:151–163, 2015. URL: https://doi.org/10.1016/j.neucom.2015.03.083.
AA-\(k\)NN¶
AA-BP¶
PT¶
- class pyldl.algorithms._PT(random_state: int | None = None)¶
Base class for pyldl.algorithms.PT_Bayes and pyldl.algorithms.PT_SVM. PT refers to problem transformation.
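Problem transformation turns each label distribution into single-label training data so that an off-the-shelf classifier (e.g., Bayes or SVM) can be trained; one common scheme resamples label indices with probability equal to their description degrees. A minimal sketch (illustrative, not the pyldl implementation):

```python
import numpy as np

def pt_resample(X, D, rng):
    """Transform (x, d) pairs into single-label examples by sampling one
    label per sample with probability given by its description degrees."""
    n, l = D.shape
    ys = np.array([rng.choice(l, p=D[i]) for i in range(n)])
    return X, ys

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
Xs, ys = pt_resample(X, D, rng)
print(ys[:3])  # [0 1 2] -- degenerate distributions map to their single label
```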
PT-Bayes¶
PT-SVM¶
LDSVR¶
CPNN¶
References¶
[LDL-KWT+24] Zhiqiang Kou, Jing Wang, Jiawei Tang, Yuheng Jia, Boyu Shi, and Xin Geng. Exploiting multi-label correlation in label distribution learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 4326–4334. 2024. URL: https://doi.org/10.24963/ijcai.2024/478.
[LDL-JSL+23] Xiuyi Jia, Xiaoxia Shen, Weiwei Li, Yunan Lu, and Jihua Zhu. Label distribution learning by maintaining label ranking relation. IEEE Transactions on Knowledge and Data Engineering, 35(2):1695–1707, 2023. URL: https://doi.org/10.1109/TKDE.2021.3099294.
[LDL-JQLL24] Xiuyi Jia, Tian Qin, Yunan Lu, and Weiwei Li. Adaptive weighted ranking-oriented label distribution learning. IEEE Transactions on Neural Networks and Learning Systems, 35(8):11302–11316, 2024. URL: https://doi.org/10.1109/TNNLS.2023.3258976.
[LDL-WZYY23] Changsong Wen, Xin Zhang, Xingxu Yao, and Jufeng Yang. Ordinal label distribution learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 23481–23491. 2023. URL: https://doi.org/10.1109/ICCV51070.2023.02146.
[LDL-GGAT+21] Manuel González, Germán González-Almagro, Isaac Triguero, José-Ramón Cano, and Salvador García. Decomposition-fusion for label distribution learning. Information Fusion, 66:64–75, 2021. URL: https://doi.org/10.1016/j.inffus.2020.08.024.
[LDL-ZJL18] Xiang Zheng, Xiuyi Jia, and Weiwei Li. Label distribution learning by exploiting sample correlations locally. In Proceedings of the AAAI Conference on Artificial Intelligence, 4556–4563. 2018. URL: https://doi.org/10.1609/aaai.v32i1.11693.
[LDL-RJLZ19] Tingting Ren, Xiuyi Jia, Weiwei Li, and Shu Zhao. Label distribution learning with label correlations via low-rank approximation. In Proceedings of the International Joint Conference on Artificial Intelligence, 3325–3331. 2019. URL: https://doi.org/10.24963/ijcai.2019/461.
[LDL-RJL+19] Tingting Ren, Xiuyi Jia, Weiwei Li, Lei Chen, and Zechao Li. Label distribution learning with label-specific features. In Proceedings of the International Joint Conference on Artificial Intelligence, 3318–3324. 2019. URL: https://doi.org/10.24963/ijcai.2019/460.
[LDL-JLLZ18] Xiuyi Jia, Weiwei Li, Junyu Liu, and Yu Zhang. Label distribution learning by exploiting label correlations. In Proceedings of the AAAI Conference on Artificial Intelligence, 3310–3317. 2018. URL: https://doi.org/10.1609/aaai.v32i1.11664.
[LDL-YSS17] Jufeng Yang, Ming Sun, and Xiaoxiao Sun. Learning visual sentiment distributions via augmented conditional probability neural network. In Proceedings of the AAAI Conference on Artificial Intelligence, 224–230. 2017. URL: https://doi.org/10.1609/aaai.v31i1.10485.
[LDL-SZGY17] Wei Shen, Kai Zhao, Yilu Guo, and Alan Yuille. Label distribution learning forests. In Advances in Neural Information Processing Systems, 834–843. 2017.
[LDL-Gen16] Xin Geng. Label distribution learning. IEEE Transactions on Knowledge and Data Engineering, 28(7):1734–1748, 2016. URL: https://doi.org/10.1109/TKDE.2016.2545658.
[LDL-GH15] Xin Geng and Peng Hou. Pre-release prediction of crowd opinion on movies by label distribution learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 3511–3517. 2015.
[LDL-GYZ13] Xin Geng, Chao Yin, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2401–2412, 2013. URL: https://doi.org/10.1109/TPAMI.2013.51.
Further Reading¶
Peiqiu Yu and Xiuyi Jia. Exploiting indirect linear correlation for label distribution learning. Neurocomputing, 128022, 2024. URL: https://doi.org/10.1016/j.neucom.2024.128022.
Chao Tan, Sheng Chen, Xin Geng, and Genlin Ji. A label distribution manifold learning algorithm. Pattern Recognition, 135:109112, 2023. URL: https://doi.org/10.1016/j.patcog.2022.109112.
Jing Wang and Xin Geng. Label distribution learning by exploiting label distribution manifold. IEEE Transactions on Neural Networks and Learning Systems, 34(2):839–852, 2023. URL: https://doi.org/10.1109/TNNLS.2021.3103178.
Qiang Li, Jingjing Wang, Zhaoliang Yao, Yachun Li, Pengju Yang, Jingwei Yan, Chunmao Wang, and Shiliang Pu. Unimodal-concentrated loss: fully adaptive label distribution learning for ordinal regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20513–20522. 2022. URL: https://doi.org/10.1109/CVPR52688.2022.01986.
Tianyue Zhang, Yingke Mao, Furao Shen, and Jian Zhao. Label distribution learning through exploring nonnegative components. Neurocomputing, 501:212–221, 2022. URL: https://doi.org/10.1016/j.neucom.2022.06.017.
Suping Xu, Hengrong Ju, Lin Shang, Witold Pedrycz, Xibei Yang, and Chun Li. Label distribution learning: a local collaborative mechanism. International Journal of Approximate Reasoning, 121:59–84, 2020. URL: https://doi.org/10.1016/j.ijar.2020.02.003.
Xiuyi Jia, Xiang Zheng, Weiwei Li, Changqing Zhang, and Zechao Li. Facial emotion distribution learning by exploiting low-rank label correlations locally. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9841–9850. 2019. URL: https://doi.org/10.1109/CVPR.2019.01007.
Jing Wang and Xin Geng. Theoretical analysis of label distribution learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 5256–5263. 2019. URL: https://doi.org/10.1609/aaai.v33i01.33015256.
Ke Wang and Xin Geng. Discrete binary coding based label distribution learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 3733–3739. 2019. URL: https://doi.org/10.24963/ijcai.2019/518.
Suping Xu, Lin Shang, and Furao Shen. Latent semantics encoding for label distribution learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 3982–3988. 2019. URL: https://doi.org/10.24963/ijcai.2019/553.
Mengting Chen, Xinggang Wang, Bin Feng, and Wenyu Liu. Structured random forest for label distribution learning. Neurocomputing, 320:171–182, 2018. URL: https://doi.org/10.1016/j.neucom.2018.09.002.
Peng Zhao and Zhi-Hua Zhou. Label distribution learning by optimal transport. In Proceedings of the AAAI Conference on Artificial Intelligence, 4506–4513. 2018. URL: https://doi.org/10.1609/aaai.v32i1.11609.
Peng Hou, Xin Geng, Zeng-Wei Huo, and Jia-Qi Lv. Semi-supervised adaptive label distribution learning for facial age estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015–2021. 2017. URL: https://doi.org/10.1609/aaai.v31i1.10822.
Bin-Bin Gao, Chao Xing, Chen-Wei Xie, Jianxin Wu, and Xin Geng. Deep label distribution learning with label ambiguity. IEEE Transactions on Image Processing, 26(6):2825–2838, 2017. URL: https://doi.org/10.1109/TIP.2017.2689998.
Chao Xing, Xin Geng, and Hui Xue. Logistic boosting regression for label distribution learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4489–4497. 2016. URL: https://doi.org/10.1109/CVPR.2016.486.
Xu Yang, Xin Geng, and Deyu Zhou. Sparsity conditional energy label distribution learning for age estimation. In Proceedings of the International Joint Conference on Artificial Intelligence, 2259–2265. 2016. URL: https://www.ijcai.org/Abstract/16/322.