meg posted an update about 1 month ago
🤖 Did you know your voice might be cloned without your consent from just *one sentence* of audio?
That's not great. So with @frimelle, we brainstormed a new idea for developers who want to curb malicious use: ✨The Voice Consent Gate.✨
Details, code, here: https://huggingface.co/blog/voice-consent-gate
hassenhamdi posted an update 2 months ago
New release of HyDRA v0.2 is here!

🐍 HyDRA: Hybrid Dynamic RAG Agent.

HyDRA addresses the limitations of simple, static RAG. It's an advanced, unified framework for agentic RAG, inspired by the latest research to create something truly powerful.

🧠 Moving beyond single-shot retrieval. HyDRA introduces a multi-turn, reflection-based system with coordinated agents: a Planner, Coordinator, and Executors (currently local & deep web search).

🔬 At its core is an advanced 3-stage local retrieval pipeline that leaves basic RAG in the dust:
🥇 1. Hybrid Search: Combines dense (semantic) and sparse (textual) embeddings in one go using the bge-m3 model. This alone is a massive upgrade.
🥈 2. RRF (Reciprocal Rank Fusion): Intelligently merges and re-ranks the results from the dense and sparse retrievers into a single list for greater precision.
🥉 3. Advanced Reranking: Uses the bge-m3-reranker model to score and surface the absolute most relevant documents for any query.
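The fusion step in the pipeline above can be sketched in a few lines. This is a generic illustration of Reciprocal Rank Fusion, not HyDRA's actual code; the document IDs and the conventional k=60 constant are assumptions:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document IDs (best first) into one.

    Each document scores 1 / (k + rank) per list it appears in; summing
    across lists rewards documents that rank well in multiple retrievers.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

dense_results = ["d3", "d1", "d2"]   # e.g. semantic search ranking
sparse_results = ["d1", "d4", "d3"]  # e.g. lexical search ranking
print(reciprocal_rank_fusion([dense_results, sparse_results]))
# -> ['d1', 'd3', 'd4', 'd2']
```

Note how d1 wins: it places highly in both lists, even though d3 tops the dense ranking alone. That cross-retriever agreement is exactly what RRF rewards.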

⚡️ This isn't just powerful; it's blazing fast. We're using SOTA ANN search (HNSW) with vector and index quantization (down to 1 bit!) for near-instant retrieval with minimal quality loss.
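As a toy illustration of what 1-bit quantization means here (a sketch of the general idea, not HyDRA's implementation; the threshold-at-zero rule is an assumption):

```python
def binarize(vec):
    # 1-bit quantization: keep only the sign of each dimension,
    # collapsing a float vector to one bit per dimension.
    return [1 if x > 0 else 0 for x in vec]

def hamming(a, b):
    # Distance between binary codes: the number of differing bits.
    return sum(x != y for x, y in zip(a, b))

query = binarize([0.3, -1.2, 0.8, -0.1])  # [1, 0, 1, 0]
doc = binarize([0.5, -0.9, -0.2, -0.4])   # [1, 0, 0, 0]
print(hamming(query, doc))                # 1 (one bit differs)
```

In practice the bits are packed into machine words and compared with XOR + popcount, which is why binary-quantized retrieval can be near-instant while still roughly preserving neighborhood structure.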

🤖 HyDRA is more than just retrieval. It incorporates memory from experience and reflection, creating a guiding policy for smarter future interactions and strategic planning.

The result? A local retrieval system that significantly outperforms standard vector search RAG.

🌐 For deep web searches, HyDRA leverages the asynDDGS library and mcp (Model Context Protocol) for free, unrestricted web access. The entire reasoning engine is powered by the incredibly fast and efficient Google Gemini 2.5 Flash!

👨‍💻 Explore the project, dive into the code, and see it in action:
🔗 GitHub: https://github.com/hassenhamdi/HyDRA (leave a star if you like the project)

🤝 Looking to implement cutting-edge AI solutions or collaborate? Let's connect!
LinkedIn: linkedin.com/in/hassenhamdi
Email: [email protected]
Discord: hassenhamdi
meg posted an update 3 months ago
🤖 As AI-generated content is shared in movies/TV/across the web, there's one simple low-hanging fruit 🍇 to help know what's real: Visible watermarks. With the Gradio team, I've made sure it's trivially easy to add this disclosure to images, video, chatbot text. See how: https://huggingface.co/blog/watermarking-with-gradio
Thanks to the code collab in particular from @abidlabs and Yuvraj Sharma.
meg posted an update 4 months ago
clem posted an update 4 months ago
meg posted an update 4 months ago
🤖 ICYMI: Yesterday, Hugging Face and OpenAI partnered to bring open source GPT to the public. This is a Big Deal in "AI world".

0. Common ground setting: OpenAI is the ChatGPT people. An “open source” model is one whose weights are available — that means the model can be “yours”.
1. You don’t have to interact with the company directly, nor give them your interactions, to use the system. The company can't "surveil" you.
2. You can evaluate the unique contributions of their SOTA model much more rigorously than you can when there are collections of models+code behind a closed API. You can find out specifically what the model can and can't do.
3. And you can directly customize it for whatever you'd like. Fine-tuning, wherein you give the model data that's tailored to your use cases and train it some more on that data, is trivial* when you have the model weights.
*Provided you have the compute.
4. You can directly benchmark whatever you'd like. Biases? Energy usage? Strengths/weaknesses? Go for it. You wants it, you gots it. This transparency helps people understand SOTA *in general*, not just for this model; it points to, e.g., what's going on with closed Google models as well.
5. One of the most powerful things about "openness" that I've learned is that it cultivates ecosystems of collaborators building on top of one another's brilliance to make systems that are significantly better than they would be if created in isolation.
But, caveat wrt my own philosophy...
6. I do not take it as a given that advancing LLMs is good, and have a lot more to say wrt where I think innovation should focus more. For example, a focus on *data* -- curation, measurement, consent, credit, compensation, safety -- would deeply improve technology for everyone.
7. The transparency this release provides is massive for people who want to *learn* about LLMs. For the next generation of technologists to advance over the current, they MUST be able to learn about what's happening now. (cont...)
meg posted an update 4 months ago
🤖 👾 Thanks so much to BBC News and the stellar Suranjana Tewari for having me on to talk about the US <—> China relationship in AI, and what it means for AI ethics.
clem posted an update 6 months ago
Today, we're unveiling two new open-source AI robots! HopeJR for $3,000 & Reachy Mini for $300 🤖🤖🤖

Let's go open-source AI robotics!