I’m thrilled to announce Conformal Risk Control: a way to bound quantities other than coverage with conformal prediction.
arxiv.org/abs/2208.02814
Check out the worked examples in CV and NLP!
The best part is: it’s exactly the same algorithm as split conformal prediction🤯🧵1/5
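Since the thread says the procedure is exactly split conformal prediction, here is a minimal sketch of that baseline algorithm for regression (function and variable names are mine for illustration, not from the paper's released code):

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction for regression: calibrate on held-out
    residuals, then return an interval with >= (1 - alpha) marginal
    coverage under exchangeability. Illustrative sketch only."""
    n = len(cal_labels)
    # Nonconformity score: absolute residual on each calibration point.
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    # Finite-sample conformal quantile: the ceil((n+1)(1-alpha))-th
    # smallest score (clipped to the largest score for small n).
    k = int(np.ceil((n + 1) * (1 - alpha)))
    qhat = np.sort(scores)[min(k, n) - 1]
    return test_pred - qhat, test_pred + qhat
```

Conformal risk control keeps this calibrate-then-threshold recipe but swaps the coverage guarantee for a bound on other monotone risks.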
New preprint with @adamjfisch, T. Jaakkola, and @BarzilayRegina. We present Consistent Accelerated Inference via 𝐂onfident 𝐀daptive 𝐓ransformers (CATs).
CATs can speed up inference 😺 while guaranteeing consistency 😼. The code is available 🙀
🔗 people.csail.mit.edu/tals/static/Co… #NLProc
Large pre-trained Transformers are great, but expensive to run. But making them more efficient (e.g., early exits) can give undesirable performance hits.
In our new work, we speed up inference while guaranteeing consistency with the original model up to a specifiable tolerance.
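The early-exit idea behind this can be sketched in a few lines: run the model layer by layer and stop as soon as an intermediate classifier is confident enough, where the threshold is calibrated so early predictions stay consistent with the full model. A toy illustration under my own naming, not the paper's API:

```python
import numpy as np

def early_exit_predict(layer_logits, threshold):
    """Early-exit inference sketch: return the first exit head whose
    softmax confidence clears `threshold`, plus the depth used.
    In CAT-style methods the threshold would be calibrated so that
    early exits agree with the full model up to a chosen tolerance."""
    for depth, logits in enumerate(layer_logits, start=1):
        probs = np.exp(logits - logits.max())   # stable softmax
        probs /= probs.sum()
        if probs.max() >= threshold:
            return int(np.argmax(probs)), depth  # exit early
    # No head was confident: fall back to the final (full-model) layer.
    return int(np.argmax(layer_logits[-1])), len(layer_logits)
```

The compute saving comes from skipping all layers past the chosen exit; the guarantee comes entirely from how the threshold is calibrated.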
New #NAACL2021 paper out on robust fact verification. Sources like Wikipedia are continuously edited with the latest information. In order to keep up, our models need to be sensitive to these changes in evidence when verifying claims.
Work with @TalSchuster and @BarzilayRegina!
#NeurIPS2019 Our work with MIT improves the interpretability of NLP models with an adversarial class-wise rationalization technique, which can find explanations towards any given class. Poster: Tue @ East Exhibition Hall B + C #1. @MITIBMLab @neurobongo @MIT_CSAIL @Bishop_Gorov
If you're at @emnlp2019, don't miss our talks:
Towards Debiasing Fact Verification Models
* Wednesday 15:42 (2B) *
@TalSchuster @darshj_shah
Working Hard or Hardly Working: Challenges of Integrating Typology into Neural Dependency Parsers
* Thursday 15:30 (201A) *
@adamjfisch
Check out our new paper: arxiv.org/pdf/1909.13838…
Automatic Fact-guided Sentence Modification: a method to automatically update the factual information in a sentence.
Joint work with @str_t5 and Prof. Regina Barzilay.
Few-shot Text Classification with Distributional Signatures. What happens if you take meta-learning for vision and apply it to NLP? Prototypical Networks with lexical features perform worse than nearest neighbors on new classes. How can we do better? ;)
arxiv.org/abs/1908.06039
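The nearest-neighbor baseline mentioned above is simple enough to show in full: label each query with the class of its most similar support example. A minimal sketch (my own names, not the paper's released code):

```python
import numpy as np

def nearest_neighbor_classify(support_x, support_y, query_x):
    """1-nearest-neighbor few-shot baseline: each query example gets
    the label of its closest support example by cosine similarity.
    This is the baseline that lexical-feature Prototypical Networks
    underperform on new classes."""
    # Normalize rows so dot products are cosine similarities.
    s = support_x / np.linalg.norm(support_x, axis=1, keepdims=True)
    q = query_x / np.linalg.norm(query_x, axis=1, keepdims=True)
    sims = q @ s.T                      # shape: (n_query, n_support)
    return support_y[np.argmax(sims, axis=1)]
```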
Our #emnlp2019 paper is now on arXiv: arxiv.org/abs/1908.05267
* Extending #FEVER (fact-checking) eval dataset to eliminate bias.
* Regularizing the training to alleviate the bias.
Coauthors: Darsh Shah, @yeodontsay, Daniel Filizzola, @ESantus, Regina Barzilay
@emnlp2019 #nlproc
Development datasets released! 6 in-domain and 6 out-of-domain, including BioASQ, DROP, DuoRC, RACE, RelationExtraction, and TextbookQA! Also released BERT baseline results. All the information at github.com/mrqa/MRQA-Shar…. Check it out and let us know if you have questions! #mrqa2019
Our paper "GraphIE: A Graph-Based Framework for Information Extraction" has been accepted to #NAACL2019. We study how to model the graph structure of the data in various IE tasks. Joint work with @ESantus, @jiangfeng1124, @ZhijingJin, and Regina Barzilay. (arxiv.org/abs/1810.13083)
Our paper: "Gromov-Wasserstein Alignment of Word Embedding Spaces" is now available (arxiv.org/abs/1809.00013). TL;DR: The Gromov-Wasserstein distance provides a simple, principled objective to align (w/o supervision) word embedding spaces, even of different dimensionality!
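To make the objective concrete: Gromov-Wasserstein compares *intra-space* distance matrices rather than the points themselves, which is why the two embedding spaces can have different dimensionality. A brute-force evaluation of the squared-loss GW objective for a given coupling T (a sketch for intuition only; the paper optimizes over T efficiently rather than just scoring it):

```python
import numpy as np

def gromov_wasserstein_cost(C1, C2, T):
    """Evaluate the squared-loss Gromov-Wasserstein objective
        sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]
    where C1, C2 are intra-space distance matrices and T is a
    coupling (soft matching) between the two point sets. Note that
    only C1's and C2's shapes must match T's sides -- the original
    embedding dimensions never appear."""
    n, m = T.shape
    cost = 0.0
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    cost += (C1[i, k] - C2[j, l]) ** 2 * T[i, j] * T[k, l]
    return cost
```

If the two spaces are isometric and T matches corresponding points, the cost is zero, which is what makes this a sensible unsupervised alignment objective.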
7K Followers · 2K Following. Cofounder/CTO @SpiffyAI and Prof at @UCIrvine, working on reliable LLMs, explanations for AI+ML, adversaries for NLP, and debugging/evaluation.
15K Followers · 6K Following. I build tough benchmarks for LMs and then I get the LMs to solve them. SWE-bench & SWE-agent. Postdoc @Princeton. PhD @nlpnoah @UW.
143 Followers · 2K Following. O you who consider yourself my guardian: your family has no say in whom I follow, or who follows me.
I belong to no party of neutrals or independents.
Idealism? It's on silent mode.
20 Followers · 74 Following. The Natural Language Processing Group at the National University of Science and Technology Politehnica Bucharest @upb1818
#NLProc
254K Followers · 14K Following. The Innovation Medicine is a new journal on medical science from The Innovation Group, a sister journal of The Innovation @The_InnovationJ. #Medicine #Science
233 Followers · 393 Following. PhD student @TU_Muenchen working on #nlproc | Previously: 3-year PhD at @CisLMU, intern at @Bosch_AI, visiting @EdinburghNLP
451K Followers · 77 Following. Tensors and neural networks in Python with strong hardware acceleration. PyTorch is an open source project at the Linux Foundation. #PyTorchFoundation
1.4M Followers · 570 Following. The Massachusetts Institute of Technology is a world leader in research and education. Related accounts: @MITevents @MITstudents @MIT_alumni
13K Followers · 225 Following. NLP/ML research group at @UCLCS, PIs: S. Riedel (@riedelcastro), P. Stenetorp, T. Rocktäschel (@_rockt), E. Grefenstette (@egrefen), P. Minervini (@pminervini)