Columns: model_id (string, length 8–65) · model_card (string, length 0–15.7k) · model_labels (list)
Salesforce/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d1...
null
Salesforce/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12...
null
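A minimal usage sketch for the two BLIP captioning checkpoints above, assuming a recent `transformers` release that ships `BlipProcessor`/`BlipForConditionalGeneration`; the COCO image URL is only an illustrative input:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load processor and model (swap in the -base checkpoint if preferred).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Unconditional captioning; a text prefix could be passed for conditional captioning.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```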
Salesforce/blip2-opt-2.7b
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
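A minimal sketch for the BLIP-2 OPT-2.7b checkpoint above, assuming a `transformers` version with the `Blip2*` classes; half precision on GPU is one reasonable choice, not a requirement:

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# No text prompt -> plain captioning; a question string can be passed for VQA-style use.
inputs = processor(images=image, return_tensors="pt").to(device, model.dtype)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```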
microsoft/git-base
# GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/micr...
null
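A captioning sketch for the GIT checkpoints (this one and the fine-tuned variants further down), assuming the standard `AutoProcessor`/`AutoModelForCausalLM` loading path:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# GIT generates the caption autoregressively from the image features alone.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```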
Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune
# LLaVA-OneVision-Qwen2-0.5b Fine-tuned on DataSeeds.AI Dataset This model is a LoRA (Low-Rank Adaptation) fine-tuned version of [lmms-lab/llava-onevision-qwen2-0.5b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-ov) specialized for photography scene analysis and description generation. The model was ...
null
Dataseeds/BLIP2-opt-2.7b-DSD-FineTune
# BLIP2-OPT-2.7B Fine-tuned on DataSeeds.AI Dataset Code: https://github.com/DataSeeds-ai/DSD-finetune-blip-llava This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) specialized for photography scene analysis and technical description generation. The mo...
null
nlpconnect/vit-gpt2-image-captioning
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); this is the PyTorch version of [this checkpoint](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
null
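In the spirit of the snippet on the original card, a minimal sketch for the ViT-GPT2 encoder-decoder checkpoint above, using the `VisionEncoderDecoderModel` API; beam-search settings here are illustrative:

```python
import requests
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Encode the image with ViT, then let GPT-2 decode a caption.
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```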
Salesforce/instructblip-vicuna-7b
# InstructBLIP model InstructBLIP model using [Vicuna-7b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: Th...
null
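A minimal sketch for the InstructBLIP checkpoint above, assuming a `transformers` version with the `InstructBlip*` classes; the instruction string is an arbitrary example:

```python
import requests
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# InstructBLIP conditions generation on a natural-language instruction.
prompt = "Describe the image in detail."
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, model.dtype)
out = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```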
ogulcanakca/blip-itu-turkish-captions-finetuned
# Turkish Image Captioning: A Starting Point with BLIP ## Project Overview and Contribution In this project, the `Salesforce/blip-image-captioning-base` model was fine-tuned to **generate Turkish image captions** on a subset drawn from the "long_captions" portion of the `ituperceptron/image-captioning-turkish` dataset...
null
adalbertojunior/image_captioning_portuguese
Image Captioning in Portuguese trained with ViT and GPT2 [DEMO](https://huggingface.co/spaces/adalbertojunior/image_captioning_portuguese) Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
null
deepklarity/poster2plot
# Poster2Plot An image captioning model that generates a movie/TV-show plot from its poster. It generates decent plots but is by no means perfect. We are still working on improving the model. ## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot # Model Details The base model uses a Vision ...
null
gagan3012/ViTGPT2I2A
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTGPT2I2A This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patc...
null
bipin/image-caption-generator
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Image-caption-generator This model is trained on [Flickr8k](https://www.kaggle.com/datasets/nunenuh/flickr8k) dataset to generat...
null
yuewu/toc_titler
A model that takes chemistry journal article table-of-contents (ToC) images as input and generates appropriate titles. Trained on all JACS ToCs and titles.
null
dhansmair/flamingo-tiny
Flamingo model (tiny version) pretrained for image captioning on the Conceptual Captions (3M) dataset. Source Code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap Flamingo-mini: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap
null
dhansmair/flamingo-mini
Flamingo model pretrained for image captioning on the Conceptual Captions (3M) dataset. Source Code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap Flamingo-tiny: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap
null
Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically
This is an image captioning model trained by Zayn ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") feature_extractor = ViTFeatureExtractor.from_pr...
null
microsoft/git-base-coco
# GIT (GenerativeImage2Text), base-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [...
null
microsoft/git-base-textcaps
# GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first relea...
null
microsoft/git-large
# GIT (GenerativeImage2Text), large-sized GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/mi...
null
microsoft/git-large-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in...
null
microsoft/git-large-textcaps
# GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first rel...
null
ybelkada/blip-image-captioning-base-football-finetuned
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on [football dataset](https://huggingface.co/datasets/ybelkada/football-dataset). Google ...
null
tuman/vit-rugpt2-image-captioning
# First image captioning model for the Russian language: vit-rugpt2-image-captioning This is an image captioning model trained on a translated (en-ru) version of the COCO2014 dataset. # Model Details The model was initialized with `google/vit-base-patch16-224-in21k` for the encoder and `sberbank-ai/rugpt3large_based_on_gpt2` for the decoder. ...
null
microsoft/git-large-r
# GIT (GenerativeImage2Text), large-sized, R* *R means "re-trained by removing some offensive captions in cc12m dataset". GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14...
null
microsoft/git-large-r-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Languag...
null
microsoft/git-large-r-textcaps
# GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and...
null
tifa-benchmark/promptcap-coco-vqa
This is the repo for the paper [PromptCap: Prompt-Guided Task-Aware Image Captioning](https://arxiv.org/abs/2211.09699). This paper is accepted to ICCV 2023 as [PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3](https://openaccess.thecvf.com/content/ICCV2023/html/Hu_PromptCap_Prompt-Guided_Image_Captioning_f...
null
Salesforce/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by ...
null
Salesforce/blip2-opt-6.7b
# BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
Salesforce/blip2-opt-2.7b-coco
# BLIP-2, OPT-2.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
null
Salesforce/blip2-opt-6.7b-coco
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
null
Salesforce/blip2-flan-t5-xl-coco
# BLIP-2, Flan T5-xl, fine-tuned on COCO BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) b...
null
Salesforce/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
null
jaimin/image_caption
# Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("jaimin/image_caption") feature_extractor = ViTFeatureExtractor.from_pretrained("jaimin/image_caption") tokenize...
null
Tomatolovve/DemoTest
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); this is the PyTorch version of [this checkpoint](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
null
Maciel/Muge-Image-Caption
### Features This model mainly generates text descriptions for images. It uses an Encoder-Decoder architecture, with a BEiT model on the encoder side and a GPT model as the decoder. It was trained on the Chinese Muge dataset for 5k steps; the final validation loss is 0.3737, with rouge1 = 20.419, rouge2 = 7.3553, rougeL = 17.3753, and rougeLsum = 17.376. [GitHub project page](https://github.com/Macielyoung/Chinese-Image-Caption) ### How to use ```python from transformers import VisionEncoderDe...
null
baseplate/vit-gpt2-image-captioning
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); this is the PyTorch version of [this checkpoint](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
null
memegpt/blip2_endpoint
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
kpyu/video-blip-opt-2.7b-ego4d
# VideoBLIP, OPT-2.7b, fine-tuned on Ego4D VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters) as its LLM backbone. ## Model description VideoBLIP is an augmented BLIP-2 that can handle ...
null
kpyu/video-blip-flan-t5-xl-ego4d
# VideoBLIP, Flan T5-xl, fine-tuned on Ego4D VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model) as its LLM backbone. ## Model description VideoBLIP is an augmented BLIP-2 that can han...
null
wangjin2000/git-base-finetune
# GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/micr...
null
Salesforce/instructblip-flan-t5-xl
# InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team ...
null
noamrot/FuseCap_Image_Captioning
# FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions A framework designed to generate semantically rich image captions. ## Resources - 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/). - 📝 **Read the Paper**: You can find the...
null
Salesforce/instructblip-flan-t5-xxl
# InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The tea...
null
Salesforce/instructblip-vicuna-13b
# InstructBLIP model InstructBLIP model using [Vicuna-13b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: T...
null
muhualing/vit
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); this is the PyTorch version of [this checkpoint](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
null
paragon-AI/blip2-image-to-text
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
captioner/caption-gen
null
movementso/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c....
null
trojblue/blip2-opt-6.7b-coco-fp16
# BLIP-2, OPT-6.7b, Fine-tuned on COCO - Unofficial FP16 Version This repository contains an unofficial version of the BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b), which has been fine-tuned on COCO and converted to FP16 for reduced model size and memory footprint. The original model...
null
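A sketch of how an FP16 export like the one above might be loaded, assuming the repository keeps the standard BLIP-2 layout; `device_map="auto"` is optional and requires `accelerate`:

```python
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# torch_dtype=torch.float16 keeps the weights in half precision after loading,
# roughly halving memory relative to an FP32 checkpoint.
processor = Blip2Processor.from_pretrained("trojblue/blip2-opt-6.7b-coco-fp16")
model = Blip2ForConditionalGeneration.from_pretrained(
    "trojblue/blip2-opt-6.7b-coco-fp16",
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`; spreads weights across available devices
)
```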
LanguageMachines/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
null
LanguageMachines/blip2-opt-2.7b
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
LanguageMachines/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by ...
null
Mediocreatmybest/blip2-opt-2.7b-fp16-sharded
__Quality of Life duplication:__ _duplicated_from: ybelkada/blip2-opt-2.7b-fp16-sharded_ _Tokenizer duplicated from: Salesforce/blip2-opt-2.7b_ _Safetensors Added_ 🥱- _Mediocre_ # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large l...
null
sheraz179/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by ...
null
jeff-RQ/new-test-model
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
PD0AUTOMATIONAL/blip2-endpoint
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
null
PD0AUTOMATIONAL/blip-large-endpoint
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c....
null
Mediocreatmybest/blip2-opt-2.7b_8bit
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / fp4 / float16 / Safetensors_ -🥱 _Mediocre_ # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was ...
null
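A minimal sketch of loading such an 8-bit bitsandbytes quantization, assuming the `bitsandbytes` and `accelerate` packages and a CUDA GPU; the exact saved quantization settings may differ from this plain 8-bit config:

```python
from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig

# 8-bit weight quantization via bitsandbytes.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

processor = Blip2Processor.from_pretrained("Mediocreatmybest/blip2-opt-2.7b_8bit")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Mediocreatmybest/blip2-opt-2.7b_8bit",
    quantization_config=quant_config,
    device_map="auto",  # places the quantized modules on the GPU
)
```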
Mediocreatmybest/blip2-opt-6.7b_8bit
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / fp4 / float16 / Safetensors_ -🥱 _Mediocre_ # BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was int...
null
jeff-RQ/blip2-opt-6.7b-coco
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
null
sheraz179/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
null
Gregor/mblip-mt0-xl
# mBLIP mT0-XL This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930). ## Model description mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (...
null
Fireworks12/git-base-pokemon
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the im...
null
michelecafagna26/blip-base-captioning-ft-hl-actions
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Actions [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **action generation of images** ## Model fine-tuning 🏋️‍ - Trained for 6 epochs - lr: 5e−5, - Adam op...
null
michelecafagna26/blip-base-captioning-ft-hl-scenes
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Scenes [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **scene generation of images** ## Model fine-tuning 🏋️‍ - Trained for 10 epochs - lr: 5e−5 - Adam opti...
null
michelecafagna26/blip-base-captioning-ft-hl-rationales
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Rationales [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **rationale generation of images** ## Model fine-tuning 🏋️ - Trained for 6 epochs - lr: 5e−5 -...
null
michelecafagna26/git-base-captioning-ft-hl-actions
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Actions [GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **action generation of images** ## Model fine-tuning 🏋️‍ - Trained for 10 epochs - lr: 5e−5 - Adam opti...
null
michelecafagna26/git-base-captioning-ft-hl-scenes
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Scenes [GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **scene generation of images** ## Model fine-tuning 🏋️‍ - Trained for 10 epochs - lr: 5e−5 - Adam optimi...
null
michelecafagna26/git-base-captioning-ft-hl-rationales
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Rationales [GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **rationale generation of images** ## Model fine-tuning 🏋️ - Trained for 10 epochs - lr: 5e−5 - Adam o...
null
michelecafagna26/blip-base-captioning-ft-hl-narratives
## BLIP-base fine-tuned for Narrative Image Captioning [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️ - Trained for 3 epochs - lr: 5e−5 - Ada...
null
michelecafagna26/git-base-captioning-ft-hl-narratives
## GIT-base fine-tuned for Narrative Image Captioning [GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️ - Trained for 3 epochs - lr: 5e−5 - Adam ...
null
michelecafagna26/clipcap-base-captioning-ft-hl-narratives
## ClipCap fine-tuned for Narrative Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network startin...
null
michelecafagna26/clipcap-base-captioning-ft-hl-actions
## ClipCap fine-tuned for Action Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level action descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pre...
null
michelecafagna26/clipcap-base-captioning-ft-hl-scenes
## ClipCap fine-tuned for Scene Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level scene descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretr...
null
michelecafagna26/clipcap-base-captioning-ft-hl-rationales
## ClipCap fine-tuned for Rationale Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level rationale descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the mod...
null
getZuma/image-captioning
# Update to the existing Salesforce model card: **Added a handler to run the model on the Hugging Face inference pipeline.** Input: ``` { "inputs": "<Base64 Image>", "prompts": "<Prompt Text here>" } ``` Output: ``` { "captions": "<Generated Image caption>" } ``` # BLIP-2, OPT-2.7b, pre-trained only...
null
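A hypothetical sketch of what such a base64 handler could look like; the `EndpointHandler` class and its methods follow the generic Inference Endpoints custom-handler convention and are not taken from this repository's actual handler code:

```python
import base64
import io

from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration


class EndpointHandler:
    """Hypothetical handler sketch: decode a base64 image, caption it, return JSON."""

    def __init__(self, path: str = ""):
        self.processor = Blip2Processor.from_pretrained(path)
        self.model = Blip2ForConditionalGeneration.from_pretrained(path)

    def __call__(self, data: dict) -> dict:
        # Expected request shape: {"inputs": "<Base64 Image>", "prompts": "<Prompt Text here>"}
        image = Image.open(io.BytesIO(base64.b64decode(data["inputs"]))).convert("RGB")
        prompt = data.get("prompts") or None
        inputs = self.processor(images=image, text=prompt, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=40)
        caption = self.processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
        return {"captions": caption}
```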
daniyal214/finetuned-blip-chest-xrays
null
daniyal214/finetuned-git-large-chest-xrays
null
Mediocreatmybest/instructblip-flan-t5-xl_8bit
# InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team ...
null
Mediocreatmybest/instructblip-flan-t5-xxl_8bit
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
null
stabilityai/japanese-instructblip-alpha
# Japanese InstructBLIP Alpha ![japanese-instructblip-icon](./japanese-instructblip-parrot.png) ## Model Details Japanese InstructBLIP Alpha is a vision-language instruction-following model that can generate Japanese descriptions for input images and, optionally, input texts such as questions. ## Usage Firs...
null
AIris-Channel/vit-gpt2-verifycode-caption
A CAPTCHA recognition model for 世萌 verification codes, trained on 60,000 images and fine-tuned from vit-gpt2. ## Use in Transformers ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption") feature_extractor = ViTImageProces...
null
Mediocreatmybest/instructblip-flan-t5-xxl_8bit_nf4
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / nf4 / Safetensors_ -_Mediocre_ 🥱 # InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards Gene...
null
Mediocreatmybest/instructblip-flan-t5-xl_8bit_nf4
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / nf4 / Safetensors_ -_Mediocre_ 🥱 # InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General...
null
alexgk/git-large-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in...
null
turing-motors/heron-chat-git-ELYZA-fast-7b-v0
# Heron GIT Japanese ELYZA Llama 2 Fast 7B ![heron](./heron_image.png) ## Model Details Heron GIT Japanese ELYZA Llama 2 Fast 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for de...
null
turing-motors/heron-chat-git-ja-stablelm-base-7b-v0
# Heron GIT Japanese StableLM Base 7B ![heron](./heron_image.png) ## Model Details Heron GIT Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. #...
null
turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0
# Heron BLIP Japanese StableLM Base 7B ![heron](./heron_image.png) ## DEMO You can play the demo of this model [here](https://huggingface.co/spaces/turing-motors/heron_chat_blip). ## Model Details Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model wa...
null
turing-motors/heron-preliminary-git-Llama-2-70b-v0
# Heron GIT Llama 2 70B Preliminary ![heron](./heron_image.png) ## Model Details Heron GIT Llama 2 70B Preliminary is a vision-language model that was pretrained with image-text pairs.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. <...
null
turing-motors/heron-chat-git-Llama-2-7b-v0
# Heron GIT Llama 2 Fast 7B ![heron](./heron_image.png) ## Model Details Heron GIT Llama 2 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the inst...
null
Haimath/BLIP-Math
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12...
null
uf-aice-lab/BLIP-Math
# BLIP - Math Our model is fine-tuned on a mathematical multi-modal dataset, and it comprises two output heads: text generation and scoring. We provide the weight file for the text generation part of the model, 'pytorch_model.bin'. You will need 4 input sources, including two text inputs and two image inputs: 'problem_...
null
advaitadasein/blip2_test
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
null
upro/blip
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d1...
null
Sof22/image-caption-large-copy
This is the Salesforce BLIP large image captioning model with small adjustments to the back-end parameters for testing - note in particular that the reply length is increased. # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captio...
null
Gregor/mblip-bloomz-7b
# mBLIP BLOOMZ-7B This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930). ## Model description mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transforme...
null
ayoubkirouane/git-base-One-Piece
# Model Details + **Model Name**: Git-base-One-Piece + **Base Model**: Microsoft's "git-base" model + **Model Type**: Generative Image-to-Text (GIT) + **Fine-Tuned On**: 'One-Piece-anime-captions' dataset + **Fine-Tuning Purpose**: To generate text captions for images related to the anime series "One Piece." ## Mo...
null
s3-tresio/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12...
null