grgera committed
Commit ba1c740 · verified · 1 parent: cb5c4b8

Update README.md

Files changed (1): README.md (+11 −11)
README.md CHANGED
```diff
@@ -1,20 +1,20 @@
----
--tags:
--- model_hub_mixin
--- pytorch_model_hub_mixin
--- pytorch
--- statistical_inference
--- mutual_information
--- set_transformer
--license: mit
----
+---
+tags:
+- model_hub_mixin
+- pytorch_model_hub_mixin
+- pytorch
+- statistical_inference
+- mutual_information
+- set_transformer
+license: mit
+---
 #
 
 # 🧬 MIST: Mutual Information estimation via Supervised Training
 
 [![Paper](https://img.shields.io/badge/arXiv-Paper-b31b1b?logo=arxiv&logoColor=b31b1b)](https://arxiv.org/pdf/2511.18945)
 [![GitHub Repo](https://img.shields.io/badge/GitHub-Code-FFD700?logo=github)](https://github.com/grgera/mist)
-[![Data](https://img.shields.io/badge/Data-Zenodo-%23009788)](zenodo.org/records/17599669)
+[![Data](https://img.shields.io/badge/Data-Zenodo-%23009788)](https://zenodo.org/records/17599669)
 
 ## Overview
 MIST is a framework for fully data-driven mutual information (MI) estimation. It leverages neural networks trained on large meta-datasets of distributions to learn flexible, differentiable MI estimators that generalize across sample sizes, dimensions, and modalities. The framework supports uncertainty quantification via quantile regression and provides fast, well-calibrated inference suitable for integration into modern ML pipelines.
```
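The supervised setup described in the overview requires meta-datasets in which each set of samples is paired with a known ground-truth MI label. A minimal sketch of that idea (not the MIST pipeline itself; the distribution family, function names, and sample counts here are illustrative assumptions) uses bivariate Gaussians, whose MI has a closed form:

```python
import math
import random

def gaussian_mi(rho: float) -> float:
    # Closed-form MI (in nats) of a bivariate Gaussian with correlation rho:
    # I(X; Y) = -0.5 * log(1 - rho^2)
    return -0.5 * math.log(1.0 - rho ** 2)

def make_training_pair(rho: float, n_samples: int, seed: int = 0):
    # Draw correlated Gaussian samples and attach the exact MI as the
    # supervision label, as a trained estimator's regression target.
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        samples.append((x, y))
    return samples, gaussian_mi(rho)

samples, mi_label = make_training_pair(rho=0.8, n_samples=256)
```

A model trained on many such (sample set, MI label) pairs, drawn across correlations, dimensions, and sample sizes, can then regress MI directly from raw samples, which is the premise the framework builds on.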