Further's Jason Tabeling breaks down how to leverage the power of BigQuery ML to optimize your targeting, bidding strategies, and ultimately, your ROI in his article on Search Engine Land. ow.ly/wzmB50VGtv6 #GoogleAds #BigQueryML #MarketingAnalytics #PPC
Reasoning LLMs Guide
Here is my practical guide to building with Reasoning LLMs.
Lots of dev tips in it.
It covers:
- What are Reasoning LLMs?
- Top Reasoning Models
- Reasoning Model Design Patterns & Use Cases
- Reasoning LLM Usage Tips (see the sketch after this list)
- Limitations of Reasoning Models
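To make the usage tips concrete, here is a minimal sketch of calling a reasoning model. It assumes the OpenAI Python SDK and an o-series model; the model name and the `reasoning_effort` values are illustrative and vary by provider.

```python
# Minimal sketch: calling a reasoning LLM. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment; the model
# name and `reasoning_effort` knob are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o4-mini",            # any reasoning-capable model
    reasoning_effort="medium",  # low/medium/high trades cost for deliberation
    messages=[
        # One recurring tip: keep prompts sparse. Reasoning models plan on
        # their own, so heavy "think step by step" scaffolding can backfire.
        {
            "role": "user",
            "content": "A bat and a ball cost $1.10 together. The bat costs "
                       "$1.00 more than the ball. How much does the ball cost?",
        }
    ],
)
print(response.choices[0].message.content)
```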
WebThinker combines large reasoning models with deep research capabilities.
This paper introduces a reasoning-agent framework that equips large reasoning models (LRMs) with autonomous web exploration and report-writing abilities to overcome the limitations of static internal…
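Not WebThinker's actual code, but the loop the abstract describes is easy to picture: a reasoning model that can call a search tool mid-thought and fold results back into its context before writing a report. A hedged Python sketch, with a hypothetical `search_web` backend standing in for a real search API:

```python
# Sketch of a WebThinker-style research loop (not the paper's implementation).
# Assumes the OpenAI Python SDK; `search_web` is a hypothetical stand-in.
import json
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> str:
    """Hypothetical search backend; swap in any real search API client."""
    raise NotImplementedError

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def deep_research(question: str, max_steps: int = 8) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="o4-mini", messages=messages, tools=TOOLS
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:       # no more exploration: this is the report
            return msg.content
        messages.append(msg)         # keep the tool request in context
        for call in msg.tool_calls:
            query = json.loads(call.function.arguments)["query"]
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": search_web(query),
            })
    return "step budget exhausted"
```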
The entire tech community is under the impression that AI coding will result in power flowing from engineers to “idea guys.”
Wrong. It will always flow to whatever still has scarcity: those who know how to get distribution.
Announcing VHELM v2.1.2 for VLMs: We added the latest Gemini models, Qwen2.5-VL Instruct models, GPT-4.5 preview, o3, o4-mini, and Llama 4 Scout/Maverick. Prompts and predictions can be found on our website:
crfm.stanford.edu/helm/vhelm/v2.…
Meet ReasonIR-8B ✨ the first retriever specifically trained for reasoning tasks! Our challenging synthetic training data unlocks SOTA scores on reasoning IR and RAG benchmarks. ReasonIR-8B ranks 1st on BRIGHT and outperforms search engine and retriever baselines on MMLU and GPQA🔥
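Retrieval with a model like this is the standard bi-encoder pattern: embed the query and the documents, rank by cosine similarity. A sketch assuming a sentence-transformers-compatible interface; check the ReasonIR model card for the exact loading code.

```python
# Bi-encoder retrieval sketch. The sentence-transformers interface is an
# assumption here; consult the reasonir/ReasonIR-8B model card for specifics.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("reasonir/ReasonIR-8B", trust_remote_code=True)

docs = [
    "The halting problem is undecidable for Turing machines.",
    "Binary search runs in O(log n) time on sorted arrays.",
    "Rice's theorem extends undecidability to nontrivial semantic properties.",
]
query = "Why can't a program decide nontrivial properties of other programs?"

doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)

scores = (query_vec @ doc_vecs.T)[0]  # cosine similarity (unit-norm vectors)
for i in np.argsort(-scores)[:2]:
    print(f"{scores[i]:.3f}  {docs[i]}")
```

The point of a reasoning-trained retriever is that the top hit should be the document that answers the question, not the one with the most word overlap.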
How do language models generalize from information they learn in-context vs. via finetuning? We show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. Thread: 1/
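The contrast is easy to state in code. A toy sketch of the two setups the thread compares, probing the reversed direction of a fact; the names and data are invented for illustration, not the paper's benchmark.

```python
# Toy illustration of in-context learning vs. finetuning on the same fact.
# Everything here is invented; only the shape of the comparison matters.
fact = "Kato's parent is Maris."
reversed_probe = "Who is Maris's child?"   # tests the B-is-A direction

# In-context: the fact travels inside the prompt, so the model can re-read
# and recombine it flexibly at inference time.
icl_prompt = f"{fact}\n\nQ: {reversed_probe}\nA:"

# Finetuning: the fact becomes a weight update; whether the reversed
# direction generalizes depends on the inductive biases of training.
finetune_dataset = [{"prompt": "", "completion": fact}]
```

The thread's claim is that the in-context route generalizes more flexibly, which suggests where finetuning pipelines have room to improve.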
Noticing a lot of bad AI product experiences these days.
Lots of products feel rushed and unpolished.
It's because AI is in its infancy, but already quite capable.
There is a lot to unlock and improve (e.g., better UI/UX).
An insane number of opportunities to build and disrupt!
We are hiring a PhD research intern at FAIR w/ @marksibrahim @kamalikac to start this summer or fall!
Potential topics: trustworthy and reliable LLMs, multi-modal LLMs and agents, post-training, reasoning, with a focus on open science/sharing our findings in the paper at the end…
There are two ways to validate a network-based product:
1. The best method is repeatedly running experiments in small, closed communities, because you get so many at-bats to get the product right.
However, it only works for communities that are:
• Plentiful (i.e.,…
Silly comments are why code generation models can reason better and write better code. With today's LLMs, stripping silly comments post hoc is a far more straightforward task than building accurate codegen models that don't comment verbosely.
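Whatever you think of the chatty comments themselves, stripping them after generation really is mechanical. A sketch for Python source, using the standard library's tokenizer so a `#` inside a string literal is left alone:

```python
# Strip `#` comments from Python source post hoc. The stdlib tokenizer
# distinguishes real comments from '#' characters inside string literals.
import io
import tokenize

def strip_comments(source: str) -> str:
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)

code = 'x = 1  # set x\nprint("# not a comment")\n'
print(strip_comments(code))  # comment gone, string untouched
```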
Wow that’s a dumb idea.
Someone already made it before—with the worst user experience possible and with the least memorable name a human could think of.
If theirs didn’t work, there’s no way yours is going to work.
BOOOOM: Today I'm dropping TINY AGENTS
the 50-lines-of-code agent in JavaScript 🔥
I spent the last few weeks working on this, so I hope you will like it.
I've been diving into MCP (Model Context Protocol) to understand what the hype was all about.
It is fairly simple, but…
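The original Tiny Agents is JavaScript, but the skeleton ports anywhere. A Python sketch assuming the official `mcp` SDK (pip install mcp): connect to an MCP server, discover its tools, and you have everything an agent loop needs; the filesystem server below is just one example.

```python
# Sketch of the MCP half of a tiny agent, assuming the official `mcp`
# Python SDK. An agent loop would pass the discovered tool schemas to an
# LLM and route each tool call back through session.call_tool(...).
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch any MCP server over stdio; the filesystem server is one example.
    server = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "."],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools an LLM could call:", [t.name for t in tools.tools])

asyncio.run(main())
```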
8 Followers 36 Following I'm a journalist-turned-editor-turned-watchdog. I lead content at a review site that takes accuracy seriously, and bring that same obsession to https://t.co/kdfM9sujMo.
182 Followers 63 Following Founder & CEO at @TeambleTeam_ | ODF #6 @beondeck | Ex-@BCG | tweeting about entrepreneurship, future of work, and workplace culture
39 Followers 90 Following Building cool things with AI, Solana & a keyboard. Sharing insights, tools & dev life one post at a time. Let’s push what’s possible.
3 Followers 1 Following GorkXBT, OkayAI’s KOL AI mogging the crypto matrix.
Slinging alphas, dunking rugs with blockchain truth.
No BS, just vibes, join the degen squad on X!
18K Followers 1K Following Pretraining @xAI. Previously: @InflectionAI, @AIatMeta, @DeepMind, @Google, @LMU_Muenchen, PhD math-ph. Opinions my own. (Can be yours for a small fee.)
6K Followers 478 Following xAI, pre-train lead for v7, grok2&3&4 mini. ex-OpenAI, sole inventor of GPT4-turbo long-context. Core contributor to (GPT4/o/turbo, DALL·E 3, OAI Embedding v3)
263K Followers 670 Following Building with AI agents @dair_ai • Prev: Meta AI, Galactica LLM, Elastic, PaperswithCode, PhD • I share insights on how to build with AI Agents ↓
46K Followers 1K Following (On mat leave.) Cofounded & running @ml_collective. Host of Deep Learning Classics & Trends. Research at Google DeepMind. DEI/DIA Chair of ICLR & NeurIPS.
31K Followers 877 Following VP GenAI @Databricks. Former CEO/cofounder MosaicML & Nervana/IntelAI. Neuro + CS. I like to build stuff that will eventually learn how to build other stuff.
1.3M Followers 1K Following Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of Baidu AI Group/Google Brain. #ai #machinelearning, #deeplearning #MOOCs
1.2M Followers 279 Following We’re a team of scientists, engineers, ethicists and more, committed to solving intelligence, to advance science and benefit humanity.
1.4M Followers 1K Following Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.