NTT scientists co-authored 11 papers selected for NeurIPS 2021 – QNT Press Release


Papers cover machine learning, deep learning, optimization, generative modeling and other topics

NTT Research, Inc. and the R&D division of NTT Corporation (TYO:9432) today announced that 11 papers co-authored by researchers from several of their laboratories have been selected for NeurIPS 2021, the 35th annual conference of the Neural Information Processing Systems Foundation, to be held December 6–14. Scientists from NTT Research's Physics & Informatics (PHI) Lab and Cryptography & Information Security (CIS) Lab are presenting four papers. Scientists from NTT Corporation's Computer and Data Science (CD), Human Informatics (HI), Social Informatics (SI) and Communication Science (CS) laboratories are presenting seven papers.

The NTT Research papers are co-authored by Drs. Sanjam Garg, Jess Riedel and Hidenori Tanaka. The NTT R&D papers are co-authored by Drs. Akagi Yasunori, Marumo Naoki, Kim Hideaki, Kurashima Takeshi, Toda Hiroyuki, Senjiwa Taiki, Yamaguchi Miya, Ida Yasuji, Ma Yue Kenji, Inoue Tomohiro, Sakakami Shinsaku, Nakamura Kengo, Futami Futami, Uda Tomokazu, Iwaeda Naya, Masao Fujiwara Yasuhiro, Kimura Akira, Yamada Takeshi, and Kumagai Atsushi. These papers address problems in deep learning, generative modeling, graph learning, kernel methods, machine learning, meta-learning, and optimization. One paper appears in the Datasets and Benchmarks track (“RAFT: A Real-World Few-Shot Text Classification Benchmark”), and two papers were selected as spotlights (“Pruning Randomly Initialized Neural Networks with Iterative Randomization” and “Fast Bayesian Inference for Gaussian Cox Processes via Path Integral Formulation”). Titles, NTT-affiliated co-authors, abstracts, and presentation times are listed below:

  • “A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks,” Samuel Deng, Sanjam Garg (CIS Lab), Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody and Abhradeep Guha Thakurta. Most poisoning attacks assume full knowledge of the training data, leaving open whether an attacker without full knowledge of the clean training set could achieve the same results. This theoretical study shows that the data-aware and data-oblivious settings are fundamentally different: the same attack and defense outcomes cannot always be achieved in both. December 7, 8:30 a.m. (Pacific Time)
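The gap between the two threat models can be illustrated with a small, hypothetical toy (this is not the paper's construction): a 1-D threshold learner is attacked with the same poisoning budget by an adversary who either sees the clean training set or does not.

```python
def train_threshold(data):
    """Fit a 1-D threshold classifier (predict 1 iff x >= t) by
    picking the candidate threshold with the fewest training errors."""
    xs = sorted({x for x, _ in data})
    candidates = [xs[0] - 1.0] + xs
    return min(candidates,
               key=lambda t: sum((x >= t) != (y == 1) for x, y in data))

# Clean training set: label is 1 iff x >= 0 (true boundary at 0).
clean = [(x / 10.0, 1 if x >= 0 else 0) for x in range(-10, 11)]

# Both attackers may inject 5 mislabeled points.
# Data-aware attacker: inspects the clean set and packs poison just
# above the boundary, so shifting the threshold "fixes" more poison
# points than it breaks clean ones.
aware = [(0.05, 0), (0.15, 0), (0.25, 0), (0.35, 0), (0.36, 0)]

# Data-oblivious attacker: knows only the rough data range and
# scatters mislabeled points without seeing the clean set.
oblivious = [(0.1, 0), (0.3, 0), (0.5, 0), (0.7, 0), (0.9, 0)]

t_clean = train_threshold(clean)
t_aware = train_threshold(clean + aware)
t_oblivious = train_threshold(clean + oblivious)
print(t_clean, t_aware, t_oblivious)  # → 0.0 0.4 0.0
```

In this toy, the data-aware attacker shifts the learned boundary from 0.0 to 0.4 while the oblivious attacker, with the identical budget, leaves it unchanged; the paper proves rigorous separations of this flavor.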

  • “RAFT: A Real-World Few-Shot Text Classification Benchmark,” Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel (PHI Lab), Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier, Michael Noetel, and Andreas Stuhlmüller – Datasets and Benchmarks track. Large pretrained language models have shown promise in learning from small numbers of examples, but existing benchmarks are not designed to measure progress in applied settings. The Real-world Annotated Few-shot Tasks (RAFT) benchmark focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. Baseline evaluations on RAFT show that current techniques struggle in several areas, and a human baseline shows that some classification tasks are difficult for non-expert humans. However, even non-expert humans…
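As a rough illustration of the few-shot setup RAFT measures (a small set of labeled examples, predictions on unlabeled test text), here is a deliberately simple word-overlap classifier. The task, example texts, and label names below are hypothetical stand-ins; RAFT's actual baselines use large pretrained language models, not this heuristic.

```python
def few_shot_classify(labeled, text):
    """Toy few-shot classifier: predict the label whose labeled
    examples share the most words with the input text."""
    words = set(text.lower().split())
    scores = {}
    for example, label in labeled:
        overlap = len(words & set(example.lower().split()))
        scores[label] = scores.get(label, 0) + overlap
    return max(scores, key=scores.get)

# A handful of labeled examples, standing in for RAFT's small
# labeled training sets (hypothetical texts and labels).
labeled = [
    ("the drug caused severe nausea and headache", "adverse_effect"),
    ("patient reported dizziness after the dose", "adverse_effect"),
    ("the tablet is round and white", "not_adverse_effect"),
    ("packaging lists the ingredients", "not_adverse_effect"),
]
print(few_shot_classify(labeled, "she experienced nausea after taking the drug"))
```

The point of the benchmark is precisely that naturally occurring tasks defeat such shallow heuristics, which is why RAFT evaluates strong pretrained models against human baselines.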

The full story can be found on Benzinga.com



