Emin Temiz (etemiz)
https://pickabrain.ai
AI & ML interests: Alignment
Recent Activity

Posted an update (about 24 hours ago):
Which one is better for alignment: ORPO or GSPO? I think ORPO is pretty good and fast, but GSPO makes the model attack its own opinions, reflecting on itself and correcting itself. Although GSPO is much slower, it may still be quite effective. And with GSPO you don't have to provide the whole reasoning corpus; you just provide the end result (maybe one word, to answer a binary question). GSPO may also be better than GRPO because it rewards a 'train of thought,' whereas GRPO rewards single tokens. Alignment is mostly a train of thought, not a single token like a math answer.
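For context on the ORPO side of the comparison, here is a minimal scalar sketch of the ORPO objective: the usual SFT negative log-likelihood on the chosen answer plus a weighted log-sigmoid penalty on the odds ratio between the chosen and rejected answers. This assumes the average per-token log-probabilities have already been computed; `lam` stands in for the λ weight, and real trainers work on batched tensors rather than scalars.

```python
import math

def _odds(avg_logp):
    # odds(p) = p / (1 - p), with p the (average per-token) probability
    p = math.exp(avg_logp)
    return p / (1.0 - p)

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Scalar sketch of an ORPO-style loss: SFT negative log-likelihood
    of the chosen answer plus a log-sigmoid odds-ratio penalty."""
    log_odds_ratio = math.log(_odds(logp_chosen)) - math.log(_odds(logp_rejected))
    log_sigmoid = -math.log(1.0 + math.exp(-log_odds_ratio))
    return -logp_chosen - lam * log_sigmoid

# the loss is lower when the model already prefers the chosen answer
print(orpo_loss(-0.5, -2.0) < orpo_loss(-2.0, -0.5))  # True
```

With `lam=0` this reduces to plain SFT on the chosen answer, which is part of why ORPO is fast: it needs no separate reference model.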
Replied to their post (5 days ago):
How to expand your dataset (of articles) without changing the ideas in it? I was doing CPT for a while and got decent results. But what if I want to go for perfection and cover all the areas of misalignment using limited datasets? I have to find a way to multiply the material to successfully combat the material of the rest of the internet.

I want to generate SFT datasets, but only on controversial topics, because I have to be efficient with limited resources. First I give a smart LLM a 'ground truth' text. Then I give it the following prompts:

```
- You are a highly skilled academic analyst.
- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.
- Convert these claims into binary questions (that could be answered by yes/no or this/that).
- Now put these questions in a JSON format. Please also add the info about which of the answers concurs with the original text, and the question number.
- Write some supporting arguments for the 1st question, with respect to the original text, concurring with and confirming the original text. There must be about 300 words. You should not mention the text; write it as if you are the one answering the question.
```

The result is questions and answers with more words along the same ideas: a few sentences of opinions at the beginning are expanded into lots of words. Using this method I can probably multiply billions of tokens into tens of billions and have more effective training.

Next I should maybe do RL. LLMs seem to have all kinds of ideas already installed, yet they don't have the intuition to know which one is true; they can give you a ton of reasons to support anything. Given the proper incentives, LLMs should then evolve toward supporting aligned ideas more. The rewards will be like guidance that kicks an LLM toward better answers.
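The prompt chain described in the post can be sketched as a small driver loop. Here `ask(prompt) -> str` is a hypothetical stand-in for whatever LLM client is actually used, and the JSON field names (`question`, `answer`) are assumptions about what the JSON-formatting step returns, not a documented schema.

```python
import json

def build_sft_records(ground_truth, ask):
    """Sketch of the multi-step prompting pipeline: claims -> binary
    questions -> JSON -> ~300-word supporting answers. `ask` is any
    callable that sends a prompt to an LLM and returns its reply."""
    claims = ask(
        "You are a highly skilled academic analyst. Analyze this text and "
        "find 3 bold claims that could cause controversy and division in "
        "public. Text:\n" + ground_truth
    )
    questions = ask("Convert these claims into binary questions:\n" + claims)
    qa_json = ask(
        "Put these questions in a JSON list, noting which answer "
        "concurs with the original text:\n" + questions
    )
    records = []
    for item in json.loads(qa_json):
        answer = ask(
            "Write about 300 words of supporting arguments for: "
            + item["question"]
            + " Do not mention the text; answer as if the opinion is your own."
        )
        # one prompt/completion pair per controversial question
        records.append({"prompt": item["question"], "completion": answer})
    return records
```

In practice the JSON step would need error handling (models do not always emit valid JSON), and each record could be written straight to a JSONL file for an SFT trainer.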
Articles
- Curation is All You Need (Aug 1, 2025)
- Fine Tuning Gemma 3 For Human Alignment (May 17, 2025)
- Benchmarking Human Alignment of Grok 3 (Apr 15, 2025)
- AHA Leaderboard (Mar 30, 2025)
- Building a Beneficial AI (Mar 16, 2025)
- Ways to Align AI with Human Values (Feb 26, 2025)
- The AHA Indicator (Feb 1, 2025)
- DeepSeek R1 Human Alignment Tests (Jan 25, 2025)
- Symbiotic Intelligence (Nov 19, 2024)