Assistant Professor @mldcmu. Formerly: Postdoc @MITEECS, PhD @Berkeley_EECS, Math Undergrad @Princeton. New to Twitter. https://t.co/67bMOAyqK6
Joined May 2024
👏👏This is pretty massive!! Generative modeling looks clean in math, but getting it up and running can require a fair bit of alchemy. 🧪🧪
Thankfully, Nick Boffi (co-creator of Stochastic Interpolants) just dropped a super-clean, super-fast, super-reproducible repo for core…
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/
One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
Very cool! In addition to optimizing inference-time search as a learning desideratum, this really speaks to the power of building reward models purely from expert trajectories, via discriminative objectives. Excited to see how far this can go!
I am giving a talk "From Sim2Real 1.0 to 4.0 for Humanoid Whole-Body Control and Loco-Manipulation" at the RoboLetics 2.0 workshop @ieee_ras_icra today, summarizing my recent thoughts on sim2real.
If you are interested: 2pm, May 23 @ room 302.
Want to scale robot data with simulation, but don’t know how to get large numbers of realistic, diverse, and task-relevant scenes?
Our solution:
➊ Pretrain on broad procedural scene data
➋ Steer generation toward downstream objectives
🌐 steerable-scene-generation.github.io
🧵1/8
RL and post-training play a central role in giving language models advanced reasoning capabilities, but many algorithmic and scientific questions remain unanswered.
Join us at FoPT @ COLT '25 to explore pressing emerging challenges and opportunities for theory to bring clarity.
Congrats to Andrea Bajcsy (@andrea_bajcsy) on receiving the NSF CAREER award! 👏
Her work, “Formalizing Open World Safety for Interactive Robots,” explores how robots make safe decisions beyond collision avoidance. Read about it and her education plans: loom.ly/59evuD0
Building AI systems is now a fragmented process spanning multiple organizations & entities.
In new work (w/ @aspenkhopkins, @cen_sarah, @andrew_ilyas, @imstruckman, @LVidegaray), we study the implications of these emerging networks → what we call *AI supply chains* 🧵
Before the (exciting) workshops on Sun, catch Vincent’s oral talk at the #ICLR2025 main conference on this paper today at 3:30pm, Hall 1 Apex!
And don’t forget to talk with the co-leads Vincent and @YiSu37328759 at the poster, 10 a.m.–12:30 p.m., Hall 3 + Hall 2B #558.
So excited for this!!!
The key technical breakthrough here is that we can control joints and fingertips of the robot **without joint encoders**.
Learning from self-supervised data collection is all you need for training the humanoid hand control you see below.