Others

Oversampling

SSG-LDL

class pyldl.algorithms.SSG_LDL(n=300, k=5, fx=0.5, fy=0.5, random_state=None)

SSG-LDL is proposed in the paper [O-GLCG21].
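SSG-LDL follows the SMOTE recipe extended to label distributions: each synthetic sample interpolates between a real sample and one of its k nearest neighbors in both feature and label space, and the interpolated label distribution is renormalized. The NumPy sketch below illustrates this core step only; it is illustrative rather than pyldl's implementation, it omits the fx/fy weighting entirely, and it treats n as the number of synthetic samples to generate (an assumption).

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def ssg_ldl_sketch(X, D, n=300, k=5, random_state=None):
        # Generate n synthetic (feature, label-distribution) pairs.
        rng = np.random.default_rng(random_state)
        # Neighbor search on features only; fx/fy weighting omitted.
        idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[1]
        Xs, Ds = [], []
        for _ in range(n):
            i = rng.integers(len(X))        # pick a seed sample
            j = rng.choice(idx[i, 1:])      # one of its k nearest neighbors
            lam = rng.random()              # interpolation factor in [0, 1)
            Xs.append(X[i] + lam * (X[j] - X[i]))
            d = D[i] + lam * (D[j] - D[i])  # interpolate label distributions
            Ds.append(d / d.sum())          # renormalize to sum to 1
        return np.vstack(Xs), np.vstack(Ds)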

References

[O-GLCG21]

Manuel González, Julián Luengo, José-Ramón Cano, and Salvador García. Synthetic sample generation for label distribution learning. Information Sciences, 544:197–213, 2021. URL: https://doi.org/10.1016/j.ins.2020.07.071.

Transfer Learning

LDL-DA

class pyldl.algorithms.LDL_DA(*args, **kwargs)

LDL-DA is proposed in the paper [O-WLJ24].

static augment(src: ndarray, tgt: ndarray) → tuple[ndarray, ndarray]

Feature augmentation.

Parameters:
  • src (np.ndarray) – Source data (shape: \([m_s,\, n_s]\)).

  • tgt (np.ndarray) – Target data (shape: \([m_t,\, n_t]\)).

Returns:

Augmented source and target data (shape: \([m_s,\, n_s + n_t]\) and \([m_t,\, n_s + n_t]\), respectively).

Return type:

tuple[np.ndarray, np.ndarray]
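Given the stated shapes, one natural construction is zero-padding each domain into a shared \((n_s + n_t)\)-dimensional space. The NumPy sketch below reproduces that behavior; the padding layout is an assumption about, not a copy of, pyldl's implementation.

    import numpy as np

    def augment_sketch(src: np.ndarray, tgt: np.ndarray):
        # Pad source rows with n_t zero columns and target rows with
        # n_s zero columns, so both end up with n_s + n_t features.
        (m_s, n_s), (m_t, n_t) = src.shape, tgt.shape
        src_aug = np.hstack([src, np.zeros((m_s, n_t))])
        tgt_aug = np.hstack([np.zeros((m_t, n_s)), tgt])
        return src_aug, tgt_aug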

fit(sX: ndarray, sy: ndarray, tX: ndarray, ty: ndarray, *, callbacks=None, X_val=None, y_val=None, ft_epochs: int = 1000, ft_optimizer: Optimizer | None = None, alpha: float = 0.01, beta: float = 0.01, r: int = 2, margin: float | None = None, fine_tune: bool = True, **kwargs)

Fit the model.

Parameters:
  • sX (np.ndarray) – Source features.

  • sy (np.ndarray) – Source label distributions.

  • tX (np.ndarray) – Target features.

  • ty (np.ndarray) – Target label distributions.

  • ft_epochs (int, optional) – Fine-tuning epochs, defaults to 1000.

  • ft_optimizer (keras.optimizers.Optimizer, optional) – Fine-tuning optimizer; if None, the default optimizer is used. Defaults to None.

  • alpha (float) – Hyperparameter to control the contrastive alignment loss, defaults to 1e-2.

  • beta (float) – Hyperparameter to control the prototype alignment loss, defaults to 1e-2.

  • r (int) – Number of prototypes, defaults to 2.

  • margin (float, optional) – Margin for the similarity measure; if None, cosine similarity is used, otherwise max-margin Euclidean distance is used. Defaults to None.

  • fine_tune (bool, optional) – Whether to fine-tune the model, defaults to True.

Returns:

Fitted model.

Return type:

LDL_DA
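A minimal end-to-end call based on the signature above, with random toy data; the no-argument constructor and the predict call are assumptions about the usual pyldl interface rather than documented behavior here.

    import numpy as np
    from pyldl.algorithms import LDL_DA

    rng = np.random.default_rng(0)
    sX, tX = rng.normal(size=(100, 10)), rng.normal(size=(50, 10))
    sy = rng.dirichlet(np.ones(5), size=100)   # source label distributions
    ty = rng.dirichlet(np.ones(5), size=50)    # target label distributions

    model = LDL_DA()
    model.fit(sX, sy, tX, ty, alpha=1e-2, beta=1e-2, r=2, fine_tune=True)
    preds = model.predict(tX)                  # predicted distributions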

static pairwise_cosine(X: Tensor, Y: Tensor) → Tensor

Pairwise cosine similarity.

Parameters:
  • X (tf.Tensor) – Matrix \(\boldsymbol{X}\) (shape: \([m_X,\, n_X]\)).

  • Y (tf.Tensor) – Matrix \(\boldsymbol{Y}\) (shape: \([m_Y,\, n_Y]\)).

Returns:

Pairwise cosine similarity (shape: \([m_X,\, m_Y]\)).

Return type:

tf.Tensor
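Pairwise cosine similarity reduces to an inner product of row-normalized matrices, \(\boldsymbol{S} = \hat{\boldsymbol{X}} \hat{\boldsymbol{Y}}^\top\). A TensorFlow sketch of this computation (equivalent in spirit, not necessarily pyldl's exact code):

    import tensorflow as tf

    def pairwise_cosine_sketch(X: tf.Tensor, Y: tf.Tensor) -> tf.Tensor:
        # L2-normalize each row, then matmul yields all m_X * m_Y cosines.
        Xn = tf.math.l2_normalize(X, axis=1)
        Yn = tf.math.l2_normalize(Y, axis=1)
        return tf.matmul(Xn, Yn, transpose_b=True)   # shape [m_X, m_Y]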

static pairwise_jsd(X: Tensor, Y: Tensor) → Tensor

Pairwise Jensen-Shannon divergence.

Parameters:
  • X (tf.Tensor) – Matrix \(\boldsymbol{X}\) (shape: \([m_X,\, n_X]\)).

  • Y (tf.Tensor) – Matrix \(\boldsymbol{Y}\) (shape: \([m_Y,\, n_Y]\)).

Returns:

Pairwise Jensen-Shannon divergence (shape: \([m_X,\, m_Y]\)).

Return type:

tf.Tensor
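Recall \(\mathrm{JSD}(\boldsymbol{p} \,\|\, \boldsymbol{q}) = \frac{1}{2}\mathrm{KL}(\boldsymbol{p} \,\|\, \boldsymbol{m}) + \frac{1}{2}\mathrm{KL}(\boldsymbol{q} \,\|\, \boldsymbol{m})\) with \(\boldsymbol{m} = (\boldsymbol{p} + \boldsymbol{q})/2\). The sketch below broadcasts the rows of the two matrices against each other (assuming rows are probability distributions and \(n_X = n_Y\)); it is an illustration, not pyldl's exact implementation.

    import tensorflow as tf

    def pairwise_jsd_sketch(X: tf.Tensor, Y: tf.Tensor) -> tf.Tensor:
        P = tf.expand_dims(X, 1)            # [m_X, 1, n]
        Q = tf.expand_dims(Y, 0)            # [1, m_Y, n]
        M = (P + Q) / 2.                    # broadcast midpoint

        def kl(a, b):
            # KL divergence along the last axis; epsilon avoids log(0).
            eps = 1e-12
            return tf.reduce_sum(
                a * (tf.math.log(a + eps) - tf.math.log(b + eps)), axis=-1)

        return (kl(P, M) + kl(Q, M)) / 2.   # shape [m_X, m_Y]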

static pairwise_label(X, Y)

Pairwise label comparison: an entry is True if the corresponding pair of labels is the same, and False otherwise.

Parameters:
  • X (tf.Tensor) – Matrix \(\boldsymbol{X}\) (shape: \([m_X,\, n_X]\)).

  • Y (tf.Tensor) – Matrix \(\boldsymbol{Y}\) (shape: \([m_Y,\, n_Y]\)).

Returns:

Pairwise label comparison (shape: \([m_X,\, m_Y]\)).

Return type:

tf.Tensor
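One plausible reading, when the rows are label distributions, is to compare each row's dominant (argmax) label; the sketch below implements that interpretation, which is an assumption rather than documented behavior.

    import tensorflow as tf

    def pairwise_label_sketch(X: tf.Tensor, Y: tf.Tensor) -> tf.Tensor:
        # Dominant label of every row, then an outer equality comparison.
        lx = tf.argmax(X, axis=1)            # shape [m_X]
        ly = tf.argmax(Y, axis=1)            # shape [m_Y]
        return lx[:, None] == ly[None, :]    # boolean, shape [m_X, m_Y]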

static reorder_y(y: ndarray, order: tuple[int]) → ndarray

Reorder label distributions for consistent label semantics.

Parameters:
  • y (np.ndarray) – Label distributions.

  • order (tuple[int]) – New label (column) order.

Returns:

Reordered label distributions.

Return type:

np.ndarray
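With NumPy, column reordering is fancy indexing; the snippet below assumes order indexes the new column layout directly, i.e. the equivalent of y[:, order] (an assumption about the semantics).

    import numpy as np

    y = np.array([[0.5, 0.3, 0.2]])
    order = (2, 0, 1)
    print(y[:, order])    # [[0.2 0.5 0.3]]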

References

[O-WLJ24]

Haitao Wu, Weiwei Li, and Xiuyi Jia. Domain adaptation for label distribution learning. IEEE Transactions on Big Data, 2024. Early Access. URL: https://doi.org/10.1109/TBDATA.2024.3442562.

Further Reading

[O-ZQAG23]

Xingyu Zhao, Lei Qi, Yuexuan An, and Xin Geng. Generalizable label distribution learning. In Proceedings of the ACM International Conference on Multimedia, 8932–8941. 2023. URL: https://doi.org/10.1145/3581783.3611693.

[O-QSL+22]

Lei Qi, Jiaying Shen, Jiaqi Liu, Yinghuan Shi, and Xin Geng. Label distribution learning for generalizable multi-source person re-identification. IEEE Transactions on Information Forensics and Security, 17:3139–3150, 2022. URL: https://doi.org/10.1109/TIFS.2022.3204219.

Predicting from Ranking

Further Reading

[O-LLLJ23]

Yunan Lu, Weiwei Li, Huaxiong Li, and Xiuyi Jia. Predicting label distribution from tie-allowed multi-label ranking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15364–15379, 2023. URL: https://doi.org/10.1109/TPAMI.2023.3300310.

[O-LJ22]

Yunan Lu and Xiuyi Jia. Predicting label distribution from multi-label ranking. In Advances in Neural Information Processing Systems, 36931–36943. 2022.

Inaccurate/Noisy Labels

Further Reading

[O-HLLJ24]

Liang He, Yunan Lu, Weiwei Li, and Xiuyi Jia. Generative calibration of inaccurate annotation for label distribution learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 12394–12401. 2024. URL: https://doi.org/10.1609/aaai.v38i11.29131.

[O-WL22]

Huan Wang and Yan-Fu Li. Robust mechanical fault diagnosis with noisy label based on multistage true label distribution learning. IEEE Transactions on Reliability, 72(3):975–988, 2022. URL: https://doi.org/10.1109/TR.2022.3190942.

[O-LLCJ22]

Weiwei Li, Yuqing Lu, Lei Chen, and Xiuyi Jia. Label distribution learning with noisy labels via three-way decisions. International Journal of Approximate Reasoning, 150:19–34, 2022. URL: https://doi.org/10.1016/j.ijar.2022.08.009.

[O-SLW+21]

Zeren Sun, Huafeng Liu, Qiong Wang, Tianfei Zhou, Qi Wu, and Zhenmin Tang. Co-LDL: a co-training-based label distribution learning method for tackling label noise. IEEE Transactions on Multimedia, 24:1093–1104, 2021. URL: https://doi.org/10.1109/TMM.2021.3116430.

[O-LXZG20]

Yun-Peng Liu, Ning Xu, Yu Zhang, and Xin Geng. Label distribution for learning with noisy labels. In Proceedings of the International Conference on International Joint Conferences on Artificial Intelligence, 2568–2574. 2020. URL: https://doi.org/10.24963/ijcai.2020/356.

Weakly/Semi-Supervised Learning

Further Reading

[O-LZZ+22]

Xinyuan Liu, Jihua Zhu, Qinghai Zheng, Zhiqiang Tian, and Zhongyu Li. Semi-supervised label distribution learning with co-regularization. Neurocomputing, 491:353–364, 2022. URL: https://doi.org/10.1016/j.neucom.2022.03.041.

[O-JWD+21]

Xiuyi Jia, Tao Wen, Weiping Ding, Huaxiong Li, and Weiwei Li. Semi-supervised label distribution learning via projection graph embedding. Information Sciences, 581:840–855, 2021. URL: https://doi.org/10.1016/j.ins.2021.10.009.

[O-JRC+19]

Xiuyi Jia, Tingting Ren, Lei Chen, Jun Wang, Jihua Zhu, and Xianzhong Long. Weakly supervised label distribution learning based on transductive matrix completion with sample correlations. Pattern Recognition Letters, 125:453–462, 2019. URL: https://doi.org/10.1016/j.patrec.2019.06.012.

Population-Level LDL

Further Reading

[O-WLP+23]

Tharindu Cyril Weerasooriya, Sarah Luger, Saloni Poddar, Ashiqur Khudabukhsh, and Christopher Homan. Subjective crowd disagreements for subjective data: uncovering meaningful CrowdOpinion with population-level learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 950–966. 2023. URL: https://doi.org/10.18653/v1/2023.acl-long.54.

[O-WLH20]

Tharindu Cyril Weerasooriya, Tong Liu, and Christopher M Homan. Neighborhood-based pooling for population-level label distribution learning. In Proceedings of the European Conference on Artificial Intelligence, 490–497. 2020. URL: https://doi.org/10.3233/FAIA200130.

[O-LVSBH19]

Tong Liu, Akash Venkatachalam, Pratik Sanjay Bongale, and Christopher Homan. Learning to predict population-level label distributions. In Proceedings of the World Wide Web Conference, 1111–1120. 2019. URL: https://doi.org/10.1145/3308560.3317082.

More Derivative Tasks

[O-JGHZ24]

Yufei Jin, Richard Gao, Yi He, and Xingquan Zhu. GLDL: graph label distribution learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 12965–12974. 2024. URL: https://doi.org/10.1609/aaai.v38i11.29194.

[O-DLF+23]

Xinyue Dong, Tingjin Luo, Ruidong Fan, Wenzhang Zhuge, and Chenping Hou. Active label distribution learning via kernel maximum mean discrepancy. Frontiers of Computer Science, 17(4):174327, 2023. URL: https://doi.org/10.1007/s11704-022-1624-5.

[O-HVW+23]

Jintao Huang, Chi-Man Vong, Guangtai Wang, Wenbin Qian, Yimin Zhou, and CL Philip Chen. Joint label enhancement and label distribution learning via stacked graph regularization-based polynomial fuzzy broad learning system. IEEE Transactions on Fuzzy Systems, 31(9):3290–3304, 2023. URL: https://doi.org/10.1109/TFUZZ.2023.3249192.

[O-XTZ+23]

Chao Xu, Hong Tao, Jing Zhang, Dewen Hu, and Chenping Hou. Label distribution changing learning with sample space expanding. Journal of Machine Learning Research, 24(36):1–48, 2023.

[O-XLZG23]

Ning Xu, Yun-Peng Liu, Yan Zhang, and Xin Geng. Progressive enhancement of label distributions for partial multilabel learning. IEEE Transactions on Neural Networks and Learning Systems, 34(8):4856–4867, 2023. URL: https://doi.org/10.1109/TNNLS.2021.3125366.

[O-ZAXG23]

Xingyu Zhao, Yuexuan An, Ning Xu, and Xin Geng. Continuous label distribution learning. Pattern Recognition, 133:109056, 2023. URL: https://doi.org/10.1016/j.patcog.2022.109056.

[O-ZAX+23]

Xingyu Zhao, Yuexuan An, Ning Xu, Jing Wang, and Xin Geng. Imbalanced label distribution learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 11336–11344. 2023. URL: https://doi.org/10.1609/aaai.v37i9.26341.

[O-DLD+22]

Zhixuan Deng, Tianrui Li, Dayong Deng, Keyu Liu, Pengfei Zhang, Shiming Zhang, and Zhipeng Luo. Feature selection for label distribution learning using dual-similarity based neighborhood fuzzy entropy. Information Sciences, 615:385–404, 2022. URL: https://doi.org/10.1016/j.ins.2022.10.054.

[O-HVQ+22]

Jintao Huang, Chi-Man Vong, Wenbin Qian, Qin Huang, and Yimin Zhou. Online label distribution learning using random vector functional-link network. IEEE Transactions on Emerging Topics in Computational Intelligence, 7(4):1177–1190, 2022. URL: https://doi.org/10.1109/TETCI.2022.3230400.

[O-LLZ+22]

Yaojin Lin, Haoyang Liu, Hong Zhao, Qinghua Hu, Xingquan Zhu, and Xindong Wu. Hierarchical feature selection based on label distribution learning. IEEE Transactions on Knowledge and Data Engineering, 35(6):5964–5976, 2022. URL: https://doi.org/10.1109/TKDE.2022.3177246.

[O-QXYS22]

Wenbin Qian, Yinsong Xiong, Jun Yang, and Wenhao Shu. Feature selection for label distribution learning via feature similarity and label correlation. Information Sciences, 582:38–59, 2022. URL: https://doi.org/10.1016/j.ins.2021.08.076.

[O-RXLG22]

Yi Ren, Ning Xu, Miaogen Ling, and Xin Geng. Label distribution for multimodal machine learning. Frontiers of Computer Science, 16:1–11, 2022. URL: https://doi.org/10.1007/s11704-021-0611-6.

[O-DGZ+21]

Xinyue Dong, Shilin Gu, Wenzhang Zhuge, Tingjin Luo, and Chenping Hou. Active label distribution learning. Neurocomputing, 436:12–21, 2021. URL: https://doi.org/10.1016/j.neucom.2020.12.128.

[O-LZL+21]

Xinyuan Liu, Jihua Zhu, Zhongyu Li, Zhiqiang Tian, Xiuyi Jia, and Lei Chen. Unified framework for learning with label distribution. Information Fusion, 75:116–130, 2021. URL: https://doi.org/10.1016/j.inffus.2021.04.014.

[O-XLG20]

Ning Xu, Yun-Peng Liu, and Xin Geng. Partial multi-label learning with label distribution. In Proceedings of the AAAI Conference on Artificial Intelligence, 6510–6517. 2020. URL: https://doi.org/10.1609/aaai.v34i04.6124.

[O-XG19]

Changdong Xu and Xin Geng. Hierarchical classification based on label distribution learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 5533–5540. 2019. URL: https://doi.org/10.1609/aaai.v33i01.33015533.