The review quality in TMLR is better because:
1. The authors suggest the Action Editor (AE), so the AE is more likely to be a good fit for the paper.
2. The AE selects the most suitable reviewers for the paper, who may or may not already be in the reviewer pool.
...
There is a nuanced but important difference between chain of thought before and after o1.
Before o1 (i.e., in the chain-of-thought-prompting paradigm), there was a mismatch between what chain of thought actually was and what we wanted it to be. We wanted chain of thought to reflect the…
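For concreteness, here is a minimal sketch of what pre-o1 chain-of-thought prompting looks like: the step-by-step text is elicited by the prompt alone, and nothing ties it to the computation that actually produced the answer. The example questions are invented for illustration.

```python
# Few-shot chain-of-thought prompting (pre-o1 style): the demonstration
# shows worked reasoning, and the model is expected to imitate the format.
# Example questions are invented for illustration.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: A cafe sells coffee for $3 and tea for $2. I buy 2 coffees "
    "and 3 teas. How much do I spend?\n"
    "A:"  # the model continues with step-by-step text, then an answer
)
```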
DPO is widely used to align SFT'd LLMs with human preferences. However, its effectiveness is often limited by the implicit KL-divergence constraint that ties the learned policy to the SFT (reference) model. In our new study, we closely examine DPO's behavior, focusing on the significance of the reference policy through…
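For context, a minimal sketch of the standard DPO objective the post refers to; the implicit KL regularization toward the SFT reference model enters through the log-probability ratios. The function name and the assumption that per-sequence log-probs are precomputed are my own framing, not from the thread.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over per-sequence log-probs (sketch).

    The log-ratios against the frozen reference (SFT) policy are what
    implicitly constrain the learned policy's KL divergence from it.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry preference margin, scaled by beta (KL strength).
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()
```

A larger beta keeps the policy closer to the reference; a smaller beta loosens the constraint, which is exactly the trade-off the study probes.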
The Alpaca moment for Large Multimodal Models! Can we build native LMMs for simple multimodal generation, just like Llama for text?
Introducing Anole: the first open-source, autoregressive native LMM for multimodal generation. Building on Chameleon by @AIatMeta: github.com/GAIR-NLP/anole