SLDC Code Release Inquiry (2511.09926)
Hi @raoxuan98-hash 🤗
I'm Niels, and I'm part of the open-source team at Hugging Face. I came across your work on arXiv and was particularly interested in your paper, "Compensating Distribution Drifts in Class-incremental Learning of Pre-trained Vision Transformers" (https://huggingface.co/papers/2511.09926).
Your abstract mentions that the code is available at https://github.com/raoxuan98-hash/sldc.git. However, when I tried to access this link, it resulted in a 404 error.
Could you let me know whether the repository URL has changed, or whether the code will be released at a later date?
The Hugging Face paper page makes it easy for people to find and discuss your work and associated artifacts. You can claim the paper, add GitHub and project page URLs, and link any released models or datasets. This helps to increase the visibility of your research and encourages collaboration within the AI community.
Once the code and any associated models/datasets are publicly available, we would be delighted to help you showcase them on the Hugging Face Hub (huggingface.co/models and huggingface.co/datasets). Hosting artifacts on the Hub increases their visibility and discoverability: we can link them directly to your paper page, add metadata tags for easy searching, and even help you set up a demo on Hugging Face Spaces with a potential ZeroGPU grant.
Uploading Models
Making your models accessible on the Hugging Face Hub makes it easier for others to reproduce your results and build upon your work. See here for a guide: https://huggingface.co/docs/hub/models-uploading.
In this case, you could leverage the PyTorchModelHubMixin class, which adds from_pretrained and push_to_hub methods to any custom nn.Module. Alternatively, the hf_hub_download one-liner lets anyone fetch a single checkpoint file from the Hub.
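For illustration, here's a minimal sketch of the mixin in action. It assumes `torch` and `huggingface_hub` are installed; the class name `SLDCModel`, its layer sizes, and the repo id are placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Mixing PyTorchModelHubMixin into any nn.Module adds
# save_pretrained / from_pretrained / push_to_hub for free.
# __init__ arguments must be JSON-serializable so the config
# can be stored alongside the weights.
class SLDCModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 768, num_classes: int = 100):
        super().__init__()
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.head(x)

model = SLDCModel()

# After `huggingface-cli login`, pushing a checkpoint is one line
# (repo id below is a placeholder):
# model.push_to_hub("your-username/sldc-checkpoint")

# Anyone can then reload it, or grab a single file with hf_hub_download:
# reloaded = SLDCModel.from_pretrained("your-username/sldc-checkpoint")
# from huggingface_hub import hf_hub_download
# path = hf_hub_download("your-username/sldc-checkpoint", "model.safetensors")
```

The commented lines are the only parts that touch the network; everything else runs locally.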
We encourage researchers to push each model checkpoint to a separate model repository, so that per-checkpoint download stats work and each checkpoint can be linked to the paper page. This also makes it easy for users to find the exact model version they need, which improves both reproducibility and tracking of your work's impact.
Uploading Datasets
Sharing your datasets is equally important for fostering collaboration. The Hugging Face Hub provides a dedicated space for hosting datasets, so others can reproduce your results, validate your findings, and build upon your work.
Would be awesome to make any relevant datasets (if you introduce new ones or processed versions of standard benchmarks) available on 🤗, so that people can do:
```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/your-dataset")
```
See here for a guide: https://huggingface.co/docs/datasets/loading.
Besides that, there's the dataset viewer, which lets people explore the first few rows of the data directly in the browser. This makes it easy to assess at a glance whether a dataset's structure and content fit a project's needs before downloading it.
Making datasets readily available also encourages the development of new benchmarks and evaluation metrics, and makes it easier for the community to identify biases and limitations in existing data.
Please let me know if you have an updated link or if you're interested in discussing this further!
Kind regards,
Niels
ML Engineer @ HF 🤗
P.S. Sharing your code, models, and datasets on platforms like the Hugging Face Hub maximizes the impact and reproducibility of your research. Linking your code and datasets to your published papers, for example via the Papers With Code website, further enhances their visibility.