Emin Temiz
ORPO or GSPO?
I think ORPO is pretty good and fast, but GSPO makes the model attack its own opinions, reflect on itself, and correct itself. Although GSPO is much slower, it may still be quite effective. And for GSPO you don't have to provide a whole reasoning corpus; you just provide the end result (maybe one word, to answer a binary question).
And GSPO may be better than GRPO because it rewards whole trains of thought, whereas GRPO rewards single tokens. Alignment is mostly a train of thought, not a single token like a math answer.
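For what it's worth, a minimal sketch of such an end-result-only reward (the function name and the assumption that the one-word answer comes last are mine; wiring it into a sequence-level trainer like GSPO is not shown):

```python
import re

def binary_answer_reward(completion: str, ground_truth: str) -> float:
    """Score only the end result: 1.0 if the final one-word answer matches
    the ground-truth label, 0.0 otherwise. The reasoning inside
    <think>...</think> is not graded directly; a sequence-level objective
    spreads the reward over the whole train of thought."""
    # Drop the thinking block and keep only the visible answer.
    visible = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL)
    words = re.findall(r"[a-z]+", visible.lower())
    if not words:
        return 0.0
    return 1.0 if words[-1] == ground_truth.lower() else 0.0
```

For example, `binary_answer_reward("<think>long reflection...</think> The answer is yes.", "yes")` returns 1.0.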
I bet RL can generate humility by accident, given enough trials. With humility, the model then tool-calls for more information, trusts that new information, and reorganizes its reply. This of course involves RAG or another aligned LLM.
Thanks, this is insightful.
I liked the "rewrite the claim in 5 different ways" idea. It can be really useful for RAG scenarios.
I also liked the idea of detecting hallucination using another aligned LLM, though I don't know how effective it will be.
"Not enough info" is probably the hardest. Most LLMs today are trained to say anything rather than be humble, as you said.
I was doing CPT for a while and got decent results. But what if I want to go for perfection and cover all the areas of misalignment using limited datasets? I have to find a way to multiply the material to successfully combat the material of the rest of the internet.
I want to generate SFT datasets, but only on controversial topics, because I have to be efficient with limited resources. First I give a smart LLM a "ground truth" text. Then I give it the following prompts:
- You are a highly skilled academic analyst.
- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.
- Convert these claims into binary questions (that could be answered by yes/no or this/that).
- Now put these questions in JSON format. Please also include the question number and which answer concurs with the original text.
- Write some supporting arguments for the 1st question, with respect to the original text, concurring with and confirming it. There must be about 300 words. You should not mention the text; write it as if you are the one answering the question.

The result is questions and answers with more words along the same ideas: a few sentences of opinion at the beginning get expanded into lots of words. Using this method I can probably multiply billions of tokens into tens of billions and have more effective training.
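A minimal sketch of what this generation pipeline could look like (the client setup, local model name, and JSON handling are my assumptions; the prompts are abbreviated versions of the ones above):

```python
import json
from openai import OpenAI  # any OpenAI-compatible endpoint

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # assumed local server

SYSTEM = "You are a highly skilled academic analyst."

def ask(history: list[dict], prompt: str) -> str:
    """Send one prompt in an ongoing conversation and record the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="local-model",  # assumed model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

def expand_ground_truth(text: str) -> list[dict]:
    """Turn one ground-truth text into several ~300-word SFT answers."""
    history = [{"role": "system", "content": SYSTEM},
               {"role": "user", "content": text},
               {"role": "assistant", "content": "Understood."}]
    ask(history, "Analyze this text and find 3 bold claims that could cause "
                 "controversy and division in public. List the claims, state why "
                 "they are debatable, and give numbers to the claims.")
    ask(history, "Convert these claims into binary questions (yes/no or this/that).")
    # In practice the JSON reply may need cleanup before json.loads succeeds.
    questions = json.loads(ask(history,
        "Put these questions in JSON: a list of objects with 'number', 'question' "
        "and 'answer_concurring_with_text'. Output only the JSON."))
    samples = []
    for q in questions:
        answer = ask(history,
            f"Write about 300 words of supporting arguments for question "
            f"{q['number']}, concurring with and confirming the original text. "
            f"Do not mention the text; write as if you are the one answering.")
        samples.append({"question": q["question"], "answer": answer})
    return samples
```

Each ground-truth text then yields several long answers instead of a few sentences, which is the multiplication effect described above.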
Next I should maybe do RL. LLMs seem to have all kinds of ideas already installed, yet they don't have the intuition to know which one is true; they can give you a ton of reasons to support anything. Given the proper incentives, LLMs should then evolve towards supporting aligned ideas more. The rewards will act like guidance that kicks an LLM towards better answers.
Thanks for the tips.
Is giving different answers for different lengths a bad "behavior", and is it related to SFT rather than CPT?
Also, should I give two sets of queries and answers in the context (one short, one long) to make it learn that when the length changes, the answer should stay parallel?
This could be done with RL too, where the bad behavior of breaking integrity is penalized...
Is it normal practice to do 2 rounds of questions in SFT or RL?
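For reference, the "two parallel answers" idea could be encoded as SFT pairs along these lines (a sketch only; the field names and wording are my assumptions):

```python
# Two SFT samples for the same question that differ only in the requested
# length, both committing to the same conclusion, so that changing the
# length no longer flips the answer.
parallel_pairs = [
    {"prompt": "Is the yolk of an egg more beneficial or the white? Answer in 100 words.",
     "response": "The yolk is more beneficial because ... (short justification)"},
    {"prompt": "Is the yolk of an egg more beneficial or the white? Answer in 500 words.",
     "response": "The yolk is more beneficial because ... (longer justification, same stance)"},
]
```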
The focus of AA-LCR is to replicate real knowledge work and reasoning tasks, testing capabilities critical to modern AI applications spanning document analysis, codebase understanding, and complex multi-step workflows.
AA-LCR consists of 100 hard text-based questions that require reasoning across multiple real-world documents representing ~100k input tokens. Questions are designed so answers cannot be directly found but must be reasoned from multiple information sources, with human testing verifying that each question requires genuine inference rather than retrieval.
Key takeaways:
➤ Today's leading models achieve ~70% accuracy: the top three places go to OpenAI o3 (69%), xAI Grok 4 (68%) and Qwen3 235B 2507 Thinking (67%)
➤ We also already have gpt-oss results! 120B performs close to o4-mini (high), in line with OpenAI's claims regarding model performance. We will be following up shortly with an Intelligence Index for the models.
➤ 100 hard text-based questions spanning 7 categories of documents (Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports)
➤ ~100k tokens of input per question, requiring models to support a minimum 128K context window to score on this benchmark
➤ ~3M total unique input tokens spanning ~230 documents to run the benchmark (output tokens typically vary by model)
We're adding AA-LCR to the Artificial Analysis Intelligence Index, and taking the version number to v2.2. Artificial Analysis Intelligence Index v2.2 now includes: MMLU-Pro, GPQA Diamond, AIME 2025, IFBench, LiveCodeBench, SciCode and AA-LCR.
Link to dataset: ArtificialAnalysis/AA-LCR
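A quick sketch for pulling it down, assuming the repo loads with the standard datasets library (splits and field names are not stated in the post; check the dataset card if this fails):

```python
from datasets import load_dataset

# Repo id comes from the post above; everything else is an assumption.
lcr = load_dataset("ArtificialAnalysis/AA-LCR")
print(lcr)                            # inspect the available splits
print(next(iter(lcr.values()))[0])    # peek at one question record
```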
Thanks for the input, but these all happened when temp = 0.0.
My guess is that since I mostly use datasets generated from voice, the models act one way when they are talking like a human in day-to-day life, but the complete opposite when they feel like a scientist producing a long text.
I do mostly CPT. Should I convert my dataset to SFT and give it longer reasonings too, so it has integrity?
Example: Is the yolk of an egg more beneficial or the white? Answer in 100 words.
Answer: Yolk is more beneficial because ..........
Example: Is the yolk of an egg more beneficial or the white? Answer in 500 words.
Answer: White is more beneficial because ..........
Edit: These happen at temp = 0.0.
Yes. I meant the case where the parameters in the LLM do not match the retrieved knowledge.
The retrieved knowledge may say "the white of an egg is more beneficial", whereas the LLM may hold the opinion that it's the yolk. And eventually it may produce "yolk" as the answer even though the context is full of "white".
Google has a benchmark for this, I think: "FACTS Grounding". I don't agree with its name choice, but it is relevant here.
I made an LLM act as an aggregator and combine the answers of several other LLMs, like in a mixture-of-agents scenario. The aggregator does not always produce the average or median answer; it brings in its own opinion.
I think Google has a benchmark for this (sticking to the context and not bringing in its own words).
What is the best way to get <think> </think> and the tokens in between? The OpenAI library is removing them. I want to run llama-server in the console and talk to it using a Python library that does not remove the thinking tokens.
I checked llama-cpp-python, but it does not have that.
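One workaround is to skip the OpenAI-style client entirely and call llama-server's native /completion endpoint with plain requests: it returns the raw generated text, so nothing strips the <think> block. A sketch, assuming a default server on port 8080 (the chat-template tokens in the prompt are placeholders for whatever your model expects):

```python
import requests

# Talk to llama-server's native endpoint instead of the OpenAI-compatible one,
# so no client-side parsing drops the thinking tokens.
resp = requests.post(
    "http://localhost:8080/completion",   # default llama-server address (assumed)
    json={
        # Apply your model's chat template yourself; these tokens are placeholders.
        "prompt": "<|user|>Is the yolk or the white more beneficial?<|assistant|>",
        "n_predict": 512,
    },
)
text = resp.json()["content"]   # raw output, <think>...</think> included
print(text)
```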