Models matching the active filter `llama-2` (task, parameter size, downloads, likes; "—" where the listing showed no value):

| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| gsaivinay/Llama-2-7b-Chat-GPTQ | Text Generation | — | 11 | 2 |
| anonymous4chan/llama-2-13b | Text Generation | 13B | 10 | — |
| 4bit/Llama-2-7b-Chat-GPTQ | Text Generation | — | 10 | 11 |
| TheBloke/Llama-2-13B-fp16 | Text Generation | — | 1.11k | 63 |
| anonymous4chan/llama-2-70b | Text Generation | 69B | 13 | 4 |
| NousResearch/Llama-2-7b-chat-hf | Text Generation | 7B | 35.4k | 194 |
| michaelfeil/ct2fast-Llama-2-7b-hf | Text Generation | — | 10 | 3 |
| michaelfeil/ct2fast-Llama-2-7b-chat-hf | Text Generation | — | 10 | 4 |
| NousResearch/Llama-2-70b-hf | Text Generation | 69B | 2.63k | 22 |
| michaelfeil/ct2fast-Llama-2-13b-chat-hf | Text Generation | — | 10 | 4 |
| michaelfeil/ct2fast-Llama-2-13b-hf | Text Generation | — | 14 | 1 |
| TheBloke/Llama-2-70B-Chat-GPTQ | Text Generation | 69B | 2.52k | 259 |
| TheBloke/Llama-2-70B-GPTQ | Text Generation | 69B | 903 | 83 |
| quantumaikr/llama-2-7b-chat-guanaco-hf-4bit | Text Generation | — | 3 | — |
| quantumaikr/llama-2-7b-chat-vicuna-hf-4bit | Text Generation | — | 6 | — |
| NousResearch/Llama-2-13b-chat-hf | Text Generation | 13B | 6.6k | 31 |
| TheBloke/Llama-2-70B-fp16 | Text Generation | 69B | 1.19k | 47 |
| TheBloke/Llama-2-70B-Chat-fp16 | Text Generation | 69B | 663 | 48 |
| abhinavkulkarni/meta-llama-Llama-2-7b-chat-hf-w4-g128-awq | Text Generation | — | 9 | 6 |
| NousResearch/Llama-2-70b-chat-hf | Text Generation | 69B | 999 | 19 |
| abhinavkulkarni/meta-llama-Llama-2-13b-chat-hf-w4-g128-awq | Text Generation | — | 9 | 1 |
| seonglae/llama-2-7b-chat-hf-gptq | Text Generation | — | 57 | — |
| seonglae/llama-2-13b-chat-hf-gptq | Text Generation | — | 6 | — |
| Mikael110/llama-2-7b-guanaco-fp16 | Text Classification | — | 438 | 10 |
| TheBloke/Redmond-Puffin-13B-GGML | — | — | 20 | 23 |
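The counts in this listing use the hub's abbreviated notation ("35.4k" meaning thousands), which doesn't sort correctly as plain text. A minimal sketch of normalizing those counts and ranking entries by downloads, using a few rows transcribed from the listing above (the helper `parse_count` is a hypothetical name, not a hub API):

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated hub count such as '35.4k' or '903' to an int."""
    text = text.strip().lower()
    if text.endswith("k"):
        # round() avoids float truncation artifacts, e.g. 2.63 * 1000 -> 2629.999...
        return int(round(float(text[:-1]) * 1_000))
    return int(text)

# (model, downloads, likes) rows transcribed from the listing above.
MODELS = [
    ("NousResearch/Llama-2-7b-chat-hf", "35.4k", 194),
    ("NousResearch/Llama-2-13b-chat-hf", "6.6k", 31),
    ("NousResearch/Llama-2-70b-hf", "2.63k", 22),
    ("TheBloke/Llama-2-70B-Chat-GPTQ", "2.52k", 259),
    ("TheBloke/Llama-2-13B-fp16", "1.11k", 63),
    ("TheBloke/Llama-2-70B-GPTQ", "903", 83),
]

# Rank by numeric download count, descending.
by_downloads = sorted(MODELS, key=lambda m: parse_count(m[1]), reverse=True)
print(by_downloads[0][0])  # most-downloaded entry in this subset
```

The same normalization applies to the like counts; ranking by likes instead would put TheBloke/Llama-2-70B-Chat-GPTQ (259 likes) first in this subset.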