Unnamed: 0.1 int64 0 41k | Unnamed: 0 int64 0 41k | author stringlengths 9 1.39k | id stringlengths 11 18 | summary stringlengths 25 3.66k | title stringlengths 4 258 | year int64 1.99k 2.02k | arxiv_url stringlengths 32 39 | info stringlengths 523 3.18k | embeddings stringlengths 16.9k 17.1k |
|---|---|---|---|---|---|---|---|---|---|
0 | 0 | ['Ahmed Osman', 'Wojciech Samek'] | 1802.00209v1 | We propose an architecture for VQA which utilizes recurrent layers to generate visual and textual attention. The memory characteristic of the proposed recurrent attention units offers a rich joint embedding of visual and textual features and enables the model to reason relations between several parts of the image and q... | Dual Recurrent Attention Units for Visual Question Answering | 2018 | http://arxiv.org/pdf/1802.00209v1 | Title Dual Recurrent Attention Units Visual Question Answering Summary propose architecture VQA utilizes recurrent layer generate visual textual attention memory characteristic proposed recurrent attention unit offer rich joint embedding visual textual feature enables model reason relation several part image question s... | [0.030238820239901543, 0.016444332897663116, -0.016811851412057877, 0.06810687482357025, 0.0009400892304256558, 0.01178388949483633, -0.0018757805228233337, -0.009158705361187458, 0.002846865216270089, -0.038819942623376846, 0.0050158933736383915, -0.03350161761045456, -0.021443059667944908, 0.050835151225328445, 0.053... |
1 | 1 | ['Ji Young Lee', 'Franck Dernoncourt'] | 1603.03827v1 | Recent approaches based on artificial neural networks (ANNs) have shown promising results for short-text classification. However, many short texts occur in sequences (e.g., sentences in a document or utterances in a dialog), and most existing ANN-based systems do not leverage the preceding short texts when classifying ... | Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks | 2016 | http://arxiv.org/pdf/1603.03827v1 | Title Sequential ShortText Classification Recurrent Convolutional Neural Networks Summary Recent approach based artificial neural network ANNs shown promising result shorttext classification However many short text occur sequence eg sentence document utterance dialog existing ANNbased system leverage preceding short te... | [0.040623173117637634, 0.010163335129618645, 0.0038399931509047747, 0.06766032427549362, -0.01700439304113388, -0.003362833522260189, 0.025043299421668053, 0.020645055919885635, 0.03404254466295242, -0.081425741314888, -0.029075222089886665, -0.04023962467908859, 0.01441923063248396, 0.06081303581595421, 0.004792141262... |
2 | 2 | ['Iulian Vlad Serban', 'Tim Klinger', 'Gerald Tesauro', 'Kartik Talamadupula', 'Bowen Zhou', 'Yoshua Bengio', 'Aaron Courville'] | 1606.00776v2 | We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the ... | Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation | 2016 | http://arxiv.org/pdf/1606.00776v2 | Title Multiresolution Recurrent Neural Networks Application Dialogue Response Generation Summary introduce multiresolution recurrent neural network extends sequencetosequence framework model natural language generation two parallel discrete stochastic process sequence highlevel coarse token sequence natural language to... | [0.07278219610452652, 0.03251509740948677, -0.0008744823280721903, 0.02868502028286457, -0.0180113073438406, -0.007289289031177759, -0.0035347219090908766, -0.01468235719949007, -0.006232827436178923, -0.07925242930650711, 0.02225460298359394, -0.025085624307394028, 0.0075123305432498455, 0.07406775653362274, -0.001531... |
3 | 3 | ['Sebastian Ruder', 'Joachim Bingel', 'Isabelle Augenstein', 'Anders Søgaard'] | 1705.08142v2 | Multi-task learning is motivated by the observation that humans bring to bear what they know about related problems when solving new ones. Similarly, deep neural networks can profit from related tasks by sharing parameters with other networks. However, humans do not consciously decide to transfer knowledge between task... | Learning what to share between loosely related tasks | 2017 | http://arxiv.org/pdf/1705.08142v2 | Title Learning share loosely related task Summary Multitask learning motivated observation human bring bear know related problem solving new one Similarly deep neural network profit related task sharing parameter network However human consciously decide transfer knowledge task Natural Language Processing NLP hard predi... | [0.022487860172986984, 0.03934193029999733, -0.032645177096128464, 0.00894354097545147, -0.02416212111711502, -0.02859516069293022, 0.05892830342054367, -0.02223842963576317, -0.02291261963546276, -0.0076596797443926334, -0.08599571883678436, 0.01687566377222538, -0.04040209576487541, 0.07899221777915955, 0.02695311792... |
4 | 4 | ['Iulian V. Serban', 'Chinnadhurai Sankar', 'Mathieu Germain', 'Saizheng Zhang', 'Zhouhan Lin', 'Sandeep Subramanian', 'Taesup Kim', 'Michael Pieper', 'Sarath Chandar', 'Nan Rosemary Ke', 'Sai Rajeshwar', 'Alexandre de Brebisson', 'Jose M. R. Sotelo', 'Dendi Suhubdy', 'Vincent Michalski', 'Alexandre Nguyen', 'Joelle Pi... | 1709.02349v2 | We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural langu... | A Deep Reinforcement Learning Chatbot | 2017 | http://arxiv.org/pdf/1709.02349v2 | Title Deep Reinforcement Learning Chatbot Summary present MILABOT deep reinforcement learning chatbot developed Montreal Institute Learning Algorithms MILA Amazon Alexa Prize competition MILABOT capable conversing human popular small talk topic speech text system consists ensemble natural language generation retrieval ... | [0.08369601517915726, 0.020538926124572754, -0.006166993174701929, -0.02317694202065468, -0.016374515369534492, 0.007429464254528284, -0.004607527516782284, 0.004775070585310459, 0.014056166633963585, -0.020789040252566338, -0.023264795541763306, -0.004501406103372574, -0.025124873965978622, 0.09215951710939407, -0.004... |
5 | 5 | ['Kelvin Guu', 'Tatsunori B. Hashimoto', 'Yonatan Oren', 'Percy Liang'] | 1709.08878v1 | We propose a new generative model of sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perp... | Generating Sentences by Editing Prototypes | 2017 | http://arxiv.org/pdf/1709.08878v1 | Title Generating Sentences Editing Prototypes Summary propose new generative model sentence first sample prototype sentence training corpus edits new sentence Compared traditional model generate scratch either lefttoright first sampling latent sentence vector prototypethenedit model improves perplexity language modelin... | [0.08618932217359543, 0.04899665713310242, -0.02246158942580223, 0.019272757694125175, -0.03703729435801506, 0.01352265290915966, 0.037966545671224594, -0.024638528004288673, -0.027826817706227303, -0.03432176634669304, 0.004921475891023874, -0.011469664983451366, -0.023666782304644585, 0.02391095645725727, 0.040919352... |
6 | 6 | ['Iulian V. Serban', 'Chinnadhurai Sankar', 'Mathieu Germain', 'Saizheng Zhang', 'Zhouhan Lin', 'Sandeep Subramanian', 'Taesup Kim', 'Michael Pieper', 'Sarath Chandar', 'Nan Rosemary Ke', 'Sai Rajeswar', 'Alexandre de Brebisson', 'Jose M. R. Sotelo', 'Dendi Suhubdy', 'Vincent Michalski', 'Alexandre Nguyen', 'Joelle Pin... | 1801.06700v1 | We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural langu... | A Deep Reinforcement Learning Chatbot (Short Version) | 2018 | http://arxiv.org/pdf/1801.06700v1 | Title Deep Reinforcement Learning Chatbot Short Version Summary present MILABOT deep reinforcement learning chatbot developed Montreal Institute Learning Algorithms MILA Amazon Alexa Prize competition MILABOT capable conversing human popular small talk topic speech text system consists ensemble natural language generat... | [0.07682596892118454, 0.024237127974629402, -0.0078873410820961, -0.021940061822533607, -0.00957291666418314, 0.009843532927334309, 0.0012712820898741484, -0.0006560627953149378, 0.006107945926487446, -0.02412767894566059, -0.021866416558623314, -0.002529066288843751, -0.02169090323150158, 0.08880618214607239, -0.00294... |
7 | 7 | ['Darko Brodic', 'Alessia Amelio', 'Zoran N. Milivojevic', 'Milena Jevtic'] | 1609.06492v1 | The paper introduces a new method for discrimination of documents given in different scripts. The document is mapped into a uniformly coded text of numerical values. It is derived from the position of the letters in the text line, based on their typographical characteristics. Each code is considered as a gray level. Ac... | Document Image Coding and Clustering for Script Discrimination | 2016 | http://arxiv.org/pdf/1609.06492v1 | Title Document Image Coding Clustering Script Discrimination Summary paper introduces new method discrimination document given different script document mapped uniformly coded text numerical value derived position letter text line based typographical characteristic code considered gray level Accordingly coded text dete... | [0.015126252546906471, 0.0003478110593277961, -0.015845399349927902, 0.04707137867808342, -0.02666432224214077, 0.02335183322429657, 0.03511674329638481, 0.1077132597565651, 0.03802177309989929, -0.0408138744533062, 0.0360984206199646, 0.020254211500287056, 0.042206279933452606, 0.03312941640615463, -0.0029164124280214... |
8 | 8 | ['Mateusz Malinowski', 'Mario Fritz'] | 1610.01076v1 | Together with the development of more accurate methods in Computer Vision and Natural Language Understanding, holistic architectures that answer on questions about the content of real-world images have emerged. In this tutorial, we build a neural-based approach to answer questions about images. We base our tutorial on ... | Tutorial on Answering Questions about Images with Deep Learning | 2016 | http://arxiv.org/pdf/1610.01076v1 | Title Tutorial Answering Questions Images Deep Learning Summary Together development accurate method Computer Vision Natural Language Understanding holistic architecture answer question content realworld image emerged tutorial build neuralbased approach answer question image base tutorial two datasets mostly DAQUAR bit... | [0.05145927891135216, 0.03849175572395325, -0.02072332054376602, 0.07118497788906097, -0.004714971873909235, -0.005271739326417446, 0.017641786485910416, -0.0026514835190027952, -0.03388672694563866, -0.031107222661376, -0.0179398525506258, -0.010696107521653175, 0.008540982380509377, 0.07507918775081635, 0.00251430040... |
9 | 9 | ['Tony Beltramelli'] | 1705.07962v2 | Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically... | pix2code: Generating Code from a Graphical User Interface Screenshot | 2017 | http://arxiv.org/pdf/1705.07962v2 | Title pix2code Generating Code Graphical User Interface Screenshot Summary Transforming graphical user interface screenshot created designer computer code typical task conducted developer order build customized software website mobile application paper show deep learning method leveraged train model endtoend automatica... | [0.020227601751685143, 0.03282645717263222, -0.03515835851430893, 0.026191987097263336, -0.02822529338300228, -0.009803039021790028, 0.056261561810970306, 0.0431673526763916, -0.04715876281261444, -0.03450101986527443, -0.0003499208833090961, 0.03791843354701996, 0.019206956028938293, 0.16242928802967072, 0.01970106549... |
10 | 10 | ['Fred Richardson', 'Douglas Reynolds', 'Najim Dehak'] | 1504.00923v1 | Learned feature representations and sub-phoneme posteriors from Deep Neural Networks (DNNs) have been used separately to produce significant performance gains for speaker and language recognition tasks. In this work we show how these gains are possible using a single DNN for both speaker and language recognition. The u... | A Unified Deep Neural Network for Speaker and Language Recognition | 2015 | http://arxiv.org/pdf/1504.00923v1 | Title Unified Deep Neural Network Speaker Language Recognition Summary Learned feature representation subphoneme posterior Deep Neural Networks DNNs used separately produce significant performance gain speaker language recognition task work show gain possible using single DNN speaker language recognition unified DNN ap... | [0.0012832034844905138, 0.06519152224063873, 0.016510702669620514, 0.05018047243356705, 0.003570161061361432, 0.020978888496756554, 0.06571382284164429, -0.016524504870176315, -0.008365627378225327, 0.006050108931958675, -0.06710567325353622, -0.05633586272597313, 0.04013851284980774, 0.002399613382294774, -0.019586972... |
11 | 11 | ['Hieu Pham', 'Melody Y. Guan', 'Barret Zoph', 'Quoc V. Le', 'Jeff Dean'] | 1802.03268v2 | We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a su... | Efficient Neural Architecture Search via Parameter Sharing | 2018 | http://arxiv.org/pdf/1802.03268v2 | Title Efficient Neural Architecture Search via Parameter Sharing Summary propose Efficient Neural Architecture Search ENAS fast inexpensive approach automatic model design ENAS controller learns discover neural network architecture searching optimal subgraph within large computational graph controller trained policy gr... | [0.002730378182604909, 0.04860375449061394, -0.025265756994485855, 0.05351081117987633, 0.013431431725621223, -0.020422853529453278, 0.009471342898905277, 0.004952882416546345, 0.010119565762579441, 0.01924092136323452, -0.05238935723900795, 0.031350985169410706, -0.023592302575707436, 0.04062184318900108, 0.0388815924... |
12 | 12 | ['Brenden M. Lake', 'Tomer D. Ullman', 'Joshua B. Tenenbaum', 'Samuel J. Gershman'] | 1604.00289v3 | Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans ... | Building Machines That Learn and Think Like People | 2016 | http://arxiv.org/pdf/1604.00289v3 | Title Building Machines Learn Think Like People Summary Recent progress artificial intelligence AI renewed interest building system learn think like people Many advance come using deep neural network trained endtoend task object recognition video game board game achieving performance equal even beat human respect Despi... | [0.031925417482852936, 0.050153762102127075, -0.04506157338619232, 0.025901375338435173, 0.010624541901051998, 0.0023722907062619925, 0.02059454284608364, -0.024825099855661392, 0.002919214777648449, -0.0038751689717173576, -0.004059332888573408, 0.0339323990046978, -0.016624143347144127, 0.06939277797937393, 0.0384554... |
13 | 13 | ['Hao Wang', 'Dit-Yan Yeung'] | 1604.01662v2 | While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep ... | Towards Bayesian Deep Learning: A Survey | 2016 | http://arxiv.org/pdf/1604.01662v2 | Title Towards Bayesian Deep Learning Survey Summary perception task visual object recognition text understanding play important role human intelligence subsequent task involve inference reasoning planning require even higher level intelligence past year seen major advance many perception task using deep learning model ... | [0.009172124788165092, 0.03827746957540512, 0.012193050235509872, 0.04512273892760277, -0.045876313000917435, 0.03563075140118599, 0.04301531985402107, 0.028921186923980713, -0.024594003334641457, -0.025386976078152657, -0.0037335555534809828, -0.013101317919790745, 0.035171762108802795, 0.07787491381168365, -0.0178403... |
14 | 14 | ['Tejas D. Kulkarni', 'Karthik R. Narasimhan', 'Ardavan Saeedi', 'Joshua B. Tenenbaum'] | 1604.06057v2 | Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for ... | Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation | 2016 | http://arxiv.org/pdf/1604.06057v2 | Title Hierarchical Deep Reinforcement Learning Integrating Temporal Abstraction Intrinsic Motivation Summary Learning goaldirected behavior environment sparse feedback major challenge reinforcement learning algorithm primary difficulty arises due insufficient exploration resulting agent unable learn robust value functi... | [0.007970927283167839, 0.09408263862133026, -0.010288913734257221, -0.03931441530585289, 0.028506653383374214, 0.0056037078611552715, -0.012039706110954285, -0.01628732867538929, -0.05267838016152382, 0.01272954884916544, -0.0309577826410532, 0.028655484318733215, -0.041666388511657715, 0.09871751815080643, 0.001510770... |
15 | 15 | ['Deepak Pathak', 'Ross Girshick', 'Piotr Dollár', 'Trevor Darrell', 'Bharath Hariharan'] | 1612.06370v2 | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segme... | Learning Features by Watching Objects Move | 2016 | http://arxiv.org/pdf/1612.06370v2 | Title Learning Features Watching Objects Move Summary paper present novel yet intuitive approach unsupervised feature learning Inspired human visual system explore whether lowlevel motionbased grouping cue used learn effective visual representation Specifically use unsupervised motionbased segmentation video obtain seg... | [0.0038877902552485466, 0.009858185425400734, 0.002748560393229127, 0.047045670449733734, 0.016096679493784904, 0.03286033868789673, 0.030609730631113052, 0.0402018241584301, -0.04668239876627922, -0.03307424858212471, 0.007265650667250156, 0.01992909610271454, -0.019759509712457657, 0.03309622406959534, 0.018379300832... |
16 | 16 | ['Muhammad Ghifary', 'W. Bastiaan Kleijn', 'Mengjie Zhang'] | 1409.6041v1 | We propose a simple neural network model to deal with the domain adaptation problem in object recognition. Our model incorporates the Maximum Mean Discrepancy (MMD) measure as a regularization in the supervised learning to reduce the distribution mismatch between the source and target domains in the latent space. From ... | Domain Adaptive Neural Networks for Object Recognition | 2014 | http://arxiv.org/pdf/1409.6041v1 | Title Domain Adaptive Neural Networks Object Recognition Summary propose simple neural network model deal domain adaptation problem object recognition model incorporates Maximum Mean Discrepancy MMD measure regularization supervised learning reduce distribution mismatch source target domain latent space experiment demo... | [-0.014236598275601864, 0.008411363698542118, -0.018657177686691284, 0.0477081760764122, 0.019902747124433517, 0.014120462350547314, 0.05373150855302811, -0.015081859193742275, -0.037916991859674454, -0.01987382024526596, -0.015697956085205078, 0.010453196242451668, 0.007528562564402819, 0.047139428555965424, -0.000596... |
17 | 17 | ['Lionel Pigou', 'Aäron van den Oord', 'Sander Dieleman', 'Mieke Van Herreweghe', 'Joni Dambre'] | 1506.01911v3 | Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling... | Beyond Temporal Pooling: Recurrence and Temporal Convolutions for Gesture Recognition in Video | 2015 | http://arxiv.org/pdf/1506.01911v3 | Title Beyond Temporal Pooling Recurrence Temporal Convolutions Gesture Recognition Video Summary Recent study demonstrated power recurrent neural network machine translation image captioning speech recognition task capturing temporal structure video however still remain numerous open research question Current research ... | [0.01242941152304411, 0.01989608444273472, 0.003044548910111189, 0.07898290455341339, 0.01224425807595253, 0.02652081660926342, 0.02579663135111332, 0.009492067620158195, -0.034116871654987335, -0.05787022411823273, -0.0010184143902733922, -0.06511043757200241, 0.05763678625226021, 0.03060113824903965, 0.01116096042096... |
18 | 18 | ['Rakesh Achanta', 'Trevor Hastie'] | 1509.05962v2 | In this paper, we address the task of Optical Character Recognition(OCR) for the Telugu script. We present an end-to-end framework that segments the text image, classifies the characters and extracts lines using a language model. The segmentation is based on mathematical morphology. The classification module, which is ... | Telugu OCR Framework using Deep Learning | 2015 | http://arxiv.org/pdf/1509.05962v2 | Title Telugu OCR Framework using Deep Learning Summary paper address task Optical Character RecognitionOCR Telugu script present endtoend framework segment text image classifies character extract line using language model segmentation based mathematical morphology classification module challenging task three deep convo... | [0.01340897660702467, 0.05425284430384636, 0.037259504199028015, 0.090110182762146, -0.0590578094124794, -0.0020389538258314133, 0.0018720458028838038, 0.04540619999170303, -0.017834406346082687, -0.003423208836466074, -0.000219842026126571, -0.03536640480160713, 0.048597030341625214, 0.06942123174667358, -0.0128559963... |
19 | 19 | ['Jeff Donahue', 'Philipp Krähenbühl', 'Trevor Darrell'] | 1605.09782v7 | The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the... | Adversarial Feature Learning | 2016 | http://arxiv.org/pdf/1605.09782v7 | Title Adversarial Feature Learning Summary ability Generative Adversarial Networks GANs framework learn generative model mapping simple latent distribution arbitrarily complex data distribution demonstrated empirically compelling result showing latent space generator capture semantic variation data distribution Intuiti... | [0.01773410104215145, 0.0888248085975647, -0.044434696435928345, 0.013442041352391243, -0.005503339692950249, 0.0025855363346636295, 0.004073440097272396, -0.015375635586678982, -0.03027379885315895, 0.00891395378857851, -0.0141568873077631, -0.0012658112682402134, -0.029098844155669212, 0.059320781379938126, 0.0601195... |
20 | 20 | ['Zachary C. Lipton'] | 1606.03490v3 | Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and s... | The Mythos of Model Interpretability | 2016 | http://arxiv.org/pdf/1606.03490v3 | Title Mythos Model Interpretability Summary Supervised machine learning model boast remarkable predictive capability trust model work deployment else tell world want model good interpretable yet task interpretation appears underspecified Papers provide diverse sometimes nonoverlapping motivation interpretability offer ... | [0.01580430194735527, 0.03038216382265091, -0.02977801486849785, 0.018013877794146538, -0.001068954006768763, 0.02330859564244747, 0.024266917258501053, -0.000612442207057029, -0.034967511892318726, 0.012094943784177303, 0.011408815160393715, 0.03546974062919617, -0.006087920628488064, 0.10775260627269745, -0.003884392... |
21 | 21 | ['Sahil Garg', 'Irina Rish', 'Guillermo Cecchi', 'Aurelie Lozano'] | 1701.06106v2 | In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspi... | Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World | 2017 | http://arxiv.org/pdf/1701.06106v2 | Title NeurogenesisInspired Dictionary Learning Online Model Adaption Changing World Summary paper focus online representation learning nonstationary environment may require continuous adaptation model architecture propose novel online dictionarylearning sparsecoding framework incorporates addition deletion hidden unit ... | [-0.010662376880645752, 0.05564999580383301, -0.019077830016613007, -0.01993660070002079, 0.031982842832803726, 0.04325827211141586, 0.0516713447868824, 0.030131543055176735, -0.04375961422920227, 0.027563508599996567, 0.05267802253365517, -0.014716862700879574, -0.021795129403471947, 0.0833757221698761, 0.019378816708... |
22 | 22 | ['Weifeng Ge', 'Yizhou Yu'] | 1702.08690v2 | Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a source-target selective joint fine-tuning scheme for improving the performance of deep learning tasks with insuf... | Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning | 2017 | http://arxiv.org/pdf/1702.08690v2 | Title Borrowing Treasures Wealthy Deep Transfer Learning Selective Joint Finetuning Summary Deep neural network require large amount labeled training data supervised learning However collecting labeling much data might infeasible many case paper introduce sourcetarget selective joint finetuning scheme improving perform... | [0.005758550483733416, 0.056478843092918396, -0.008075646124780178, 0.04061710089445114, 0.0007613872294314206, 0.003362262388691306, 0.0626605749130249, 0.01701948419213295, -0.04397859051823616, -0.00875970721244812, -0.04576868191361427, 0.04579819366335869, -0.011800899170339108, 0.04835931956768036, 0.013810884207... |
23 | 23 | ['Tanmay Gupta', 'Kevin Shih', 'Saurabh Singh', 'Derek Hoiem'] | 1704.00260v2 | An important goal of computer vision is to build systems that learn visual representations over time that can be applied to many tasks. In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multi-task learning. In particular... | Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks | 2017 | http://arxiv.org/pdf/1704.00260v2 | Title Aligned ImageWord Representations Improve Inductive Transfer Across VisionLanguage Tasks Summary important goal computer vision build system learn visual representation time applied many task paper investigate visionlanguage embedding core representation show lead better crosstask transfer standard multitask lear... | [0.014099230989813805, -0.02462138794362545, -0.009523318149149418, 0.08520937711000443, -0.01648499071598053, 0.02004041150212288, 0.00818576104938984, -0.007237681653350592, 0.020850639790296555, -0.055504005402326584, -0.05823567882180214, -0.010540205053985119, 0.011283772066235542, 0.026981983333826065, 0.04166373... |
24 | 24 | ['Jan Hendrik Metzen', 'Mummadi Chaithanya Kumar', 'Thomas Brox', 'Volker Fischer'] | 1704.05712v3 | While deep learning is remarkably successful on perceptual tasks, it was also shown to be vulnerable to adversarial perturbations of the input. These perturbations denote noise added to the input that was generated specifically to fool the system while being quasi-imperceptible for humans. More severely, there even exi... | Universal Adversarial Perturbations Against Semantic Image Segmentation | 2017 | http://arxiv.org/pdf/1704.05712v3 | Title Universal Adversarial Perturbations Semantic Image Segmentation Summary deep learning remarkably successful perceptual task also shown vulnerable adversarial perturbation input perturbation denote noise added input generated specifically fool system quasiimperceptible human severely even exist universal perturbat... | [0.0056486246176064014, 0.03561432287096977, -0.013600168749690056, 0.056583743542432785, 0.0028320043347775936, -0.005416540428996086, 0.0127531373873353, -0.02397274412214756, -0.04770808666944504, 0.009939886629581451, -0.01911712996661663, 0.06414027512073517, -0.027907874435186386, -0.019609153270721436, 0.0494938... |
25 | 25 | ['Quynh Nguyen', 'Matthias Hein'] | 1704.08045v2 | While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that thi... | The loss surface of deep and wide neural networks | 2017 | http://arxiv.org/pdf/1704.08045v2 | Title loss surface deep wide neural network Summary optimization problem behind deep neural network highly nonconvex frequently observed practice training deep network seems possible without getting stuck suboptimal point argued case local minimum close globally optimal show almost true fact almost local minimum global... | [-0.0181210245937109, 0.015704238787293434, -0.006439590826630592, 0.08110242336988449, -0.0018681740621104836, -0.01548056397587061, 0.04042204096913338, -0.020858725532889366, -0.04759621247649193, 0.03233534097671509, -0.010029901750385761, -0.0015024126041680574, -0.008791866712272167, 0.04432067275047302, 0.039892... |
26 | 26 | ['Chris Donahue', 'Zachary C. Lipton', 'Akshay Balsubramani', 'Julian McAuley'] | 1705.07904v3 | We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the ob... | Semantically Decomposing the Latent Spaces of Generative Adversarial Networks | 2017 | http://arxiv.org/pdf/1705.07904v3 | Title Semantically Decomposing Latent Spaces Generative Adversarial Networks Summary propose new algorithm training generative adversarial network jointly learns latent code identity eg individual human observation eg specific photograph fixing identity portion latent code generate diverse image subject fixing observat... | [0.02524767629802227, 0.1283176839351654, 0.0017059007659554482, 0.059948887676000595, -0.02689424343407154, 0.04963088035583496, 0.043878935277462006, -0.008901681751012802, -0.019719336181879044, 0.005113862454891205, -0.012964663095772266, -0.015912093222141266, 0.004586712457239628, 0.01671898551285267, 0.066325530... |
27 | 27 | ['Mahesh Chandra Mukkamala', 'Matthias Hein'] | 1706.05507v2 | Adaptive gradient methods have become recently very popular, in particular as
they have been shown to be useful in the training of deep neural networks. In
this paper we have analyzed RMSProp, originally proposed for the training of
deep neural networks, in the context of online convex optimization and show
$\sqrt{T}$-... | Variants of RMSProp and Adagrad with Logarithmic Regret Bounds | 2,017 | http://arxiv.org/pdf/1706.05507v2 | Title Variants RMSProp Adagrad Logarithmic Regret Bounds Summary Adaptive gradient method become recently popular particular shown useful training deep neural network paper analyzed RMSProp originally proposed training deep neural network context online convex optimization show sqrtTtype regret bound Moreover propose t... | [-0.0235554501414299, 0.01657301001250744, 0.022070230916142464, -0.00982610508799553, 0.021564722061157227, -0.01518192794173956, 0.008455047383904457, -0.03350786492228508, -0.05014415457844734, 0.018914753571152687, -0.045846689492464066, -0.016145600005984306, 0.004574382212013006, 0.003473706543445587, 0.011971698... |
28 | 28 | ['Chunyuan Li', 'Hao Liu', 'Changyou Chen', 'Yunchen Pu', 'Liqun Chen', 'Ricardo Henao', 'Lawrence Carin'] | 1709.01215v2 | We investigate the non-identifiability issues associated with bidirectional
adversarial training for joint distribution matching. Within a framework of
conditional entropy, we propose both adversarial and non-adversarial approaches
to learn desirable matched joint distributions for unsupervised and supervised
tasks. We... | ALICE: Towards Understanding Adversarial Learning for Joint Distribution
Matching | 2,017 | http://arxiv.org/pdf/1709.01215v2 | Title ALICE Towards Understanding Adversarial Learning Joint Distribution Matching Summary investigate nonidentifiability issue associated bidirectional adversarial training joint distribution matching Within framework conditional entropy propose adversarial nonadversarial approach learn desirable matched joint distrib... | [0.006879579741507769, 0.07195214182138443, -0.011232499964535236, 0.04159289970993996, -0.024009624496102333, -0.016379093751311302, 0.016849802806973457, 0.004335680510848761, 0.012014094740152359, -0.032395802438259125, -0.05185306817293167, -0.0074209896847605705, -0.010995801538228989, 0.018014095723628998, 0.0075... |
29 | 29 | ['Mateusz Buda', 'Atsuto Maki', 'Maciej A. Mazurowski'] | 1710.05381v1 | In this study, we systematically investigate the impact of class imbalance on
classification performance of convolutional neural networks (CNNs) and compare
frequently used methods to address the issue. Class imbalance is a common
problem that has been comprehensively studied in classical machine learning,
yet very lim... | A systematic study of the class imbalance problem in convolutional
neural networks | 2,017 | http://arxiv.org/pdf/1710.05381v1 | Title systematic study class imbalance problem convolutional neural network Summary study systematically investigate impact class imbalance classification performance convolutional neural network CNNs compare frequently used method address issue Class imbalance common problem comprehensively studied classical machine l... | [0.002548755845054984, 0.014575646258890629, -0.052617598325014114, 0.008142157457768917, 0.019604474306106567, -0.008515378460288048, 0.06046532839536667, -0.010335146449506283, -0.04602038115262985, -0.041656531393527985, 0.00910205114632845, 0.044474340975284576, 0.00974102783948183, 0.022866446524858475, -0.0017808... |
30 | 30 | ['Jan Kukačka', 'Vladimir Golkov', 'Daniel Cremers'] | 1710.10686v1 | Regularization is one of the crucial ingredients of deep learning, yet the
term regularization has various definitions, and regularization methods are
often studied separately from each other. In our work we present a systematic,
unifying taxonomy to categorize existing methods. We distinguish methods that
affect data,... | Regularization for Deep Learning: A Taxonomy | 2,017 | http://arxiv.org/pdf/1710.10686v1 | Title Regularization Deep Learning Taxonomy Summary Regularization one crucial ingredient deep learning yet term regularization various definition regularization method often studied separately work present systematic unifying taxonomy categorize existing method distinguish method affect data network architecture error... | [0.0039631095714867115, 0.04202922433614731, -0.029790757223963737, 0.013474303297698498, 0.0033911168575286865, -0.035869937390089035, 0.06577468663454056, 0.028167083859443665, -0.05148724839091301, 0.006152178626507521, -0.003993260208517313, -0.030590925365686417, 0.019640978425741196, 0.05663751810789108, 0.006232... |
31 | 31 | ['Elie Aljalbout', 'Vladimir Golkov', 'Yawar Siddiqui', 'Daniel Cremers'] | 1801.07648v1 | Clustering is a fundamental machine learning method. The quality of its
results is dependent on the data distribution. For this reason, deep neural
networks can be used for learning better representations of the data. In this
paper, we propose a systematic taxonomy for clustering with deep learning, in
addition to a re... | Clustering with Deep Learning: Taxonomy and New Methods | 2,018 | http://arxiv.org/pdf/1801.07648v1 | Title Clustering Deep Learning Taxonomy New Methods Summary Clustering fundamental machine learning method quality result dependent data distribution reason deep neural network used learning better representation data paper propose systematic taxonomy clustering deep learning addition review method field Based taxonomy... | [-0.016683556139469147, 0.010083289816975594, -0.03315931186079979, 0.029727904126048088, 0.009991924278438091, -0.005651251878589392, 0.05703303590416908, -0.00812006276100874, -0.0044061243534088135, -0.00665407907217741, -0.03286062553524971, -0.004478133749216795, 0.025629807263612747, 0.059234000742435455, 0.01990... |
32 | 32 | ['Armand Zampieri', 'Guillaume Charpiat', 'Yuliya Tarabalka'] | 1802.09816v1 | We tackle here the problem of multimodal image non-rigid registration, which
is of prime importance in remote sensing and medical imaging. The difficulties
encountered by classical registration approaches include feature design and
slow optimization by gradient descent. By analyzing these methods, we note the
significa... | Coarse to fine non-rigid registration: a chain of scale-specific neural
networks for multimodal image alignment with application to remote sensing | 2,018 | http://arxiv.org/pdf/1802.09816v1 | Title Coarse fine nonrigid registration chain scalespecific neural network multimodal image alignment application remote sensing Summary tackle problem multimodal image nonrigid registration prime importance remote sensing medical imaging difficulty encountered classical registration approach include feature design slo... | [-0.0002296001766808331, 0.027488330379128456, 0.0224667489528656, 0.02791258506476879, -0.05824168026447296, -0.00564225297421217, 0.019202496856451035, -0.06031869351863861, -0.022075379267334938, 0.027093613520264626, 0.028951166197657585, -0.004241004586219788, 0.04826965183019638, 0.03934275731444359, 0.0270466450... |
33 | 33 | ['Li Yao', 'Atousa Torabi', 'Kyunghyun Cho', 'Nicolas Ballas', 'Christopher Pal', 'Hugo Larochelle', 'Aaron Courville'] | 1502.08029v5 | Recent progress in using recurrent neural networks (RNNs) for image
description has motivated the exploration of their application for video
description. However, while images are static, working with videos requires
modeling their dynamic temporal structure and then properly integrating that
information into a natural... | Describing Videos by Exploiting Temporal Structure | 2,015 | http://arxiv.org/pdf/1502.08029v5 | Title Describing Videos Exploiting Temporal Structure Summary Recent progress using recurrent neural network RNNs image description motivated exploration application video description However image static working video requires modeling dynamic temporal structure properly integrating information natural language descri... | [0.04384573921561241, 0.040353644639253616, 0.022152865305542946, 0.06905457377433777, -0.0016936935717239976, 0.004977921023964882, -0.015876099467277527, -0.016332415863871574, -0.07429931312799454, -0.07366855442523956, 0.010147901251912117, -0.06954325735569, 0.04226887226104736, 0.06193795055150986, 0.006885376758... |
34 | 34 | ['Hao Wang', 'Xingjian Shi', 'Dit-Yan Yeung'] | 1611.00454v1 | Hybrid methods that utilize both content and rating information are commonly
used in many recommender systems. However, most of them use either handcrafted
features or the bag-of-words representation as a surrogate for the content
information but they are neither effective nor natural enough. To address this
problem, w... | Collaborative Recurrent Autoencoder: Recommend while Learning to Fill in
the Blanks | 2,016 | http://arxiv.org/pdf/1611.00454v1 | Title Collaborative Recurrent Autoencoder Recommend Learning Fill Blanks Summary Hybrid method utilize content rating information commonly used many recommender system However use either handcrafted feature bagofwords representation surrogate content information neither effective natural enough address problem develop ... | [0.026162119582295418, 0.03550722077488899, 0.0006124278297647834, 0.02112296223640442, -0.0031149424612522125, 0.0052260784432291985, 0.03841988742351532, 0.014339396730065346, -0.011528442613780499, -0.0230964794754982, -0.04469263553619385, -0.018563508987426758, 0.006120844278484583, 0.09863444417715073, -0.0506608... |
35 | 35 | ['Laura Graesser', 'Abhinav Gupta', 'Lakshay Sharma', 'Evelina Bakhturina'] | 1712.00725v1 | In this project we analysed how much semantic information images carry, and
how much value image data can add to sentiment analysis of the text associated
with the images. To better understand the contribution from images, we compared
models which only made use of image data, models which only made use of text
data, an... | Sentiment Classification using Images and Label Embeddings | 2,017 | http://arxiv.org/pdf/1712.00725v1 | Title Sentiment Classification using Images Label Embeddings Summary project analysed much semantic information image carry much value image data add sentiment analysis text associated image better understand contribution image compared model made use image data model made use text data model combined data type also an... | [0.0373409278690815, 0.08658852428197861, 0.0031289902981370687, 0.08435819298028946, -0.022962214425206184, 0.04876367002725601, -0.011424457654356956, 0.010839671827852726, 0.024760400876402855, -0.06766130030155182, -0.022244643419981003, 0.03267093002796173, -0.010490966029465199, 0.03902794048190117, 0.00771614862... |
36 | 36 | ['Hao Wang', 'Xingjian Shi', 'Dit-Yan Yeung'] | 1611.00448v1 | Neural networks (NN) have achieved state-of-the-art performance in various
applications. Unfortunately in applications where training data is
insufficient, they are often prone to overfitting. One effective way to
alleviate this problem is to exploit the Bayesian approach by using Bayesian
neural networks (BNN). Anothe... | Natural-Parameter Networks: A Class of Probabilistic Neural Networks | 2,016 | http://arxiv.org/pdf/1611.00448v1 | Title NaturalParameter Networks Class Probabilistic Neural Networks Summary Neural network NN achieved stateoftheart performance various application Unfortunately application training data insufficient often prone overfitting One effective way alleviate problem exploit Bayesian approach using Bayesian neural network BN... | [-0.019308457151055336, 0.04892473667860031, -0.018579134717583656, -0.011225856840610504, -0.007168007083237171, -0.058175358921289444, 0.012418322265148163, -0.012458806857466698, -0.029085859656333923, 0.007147525902837515, -0.000888597802259028, 0.04558419808745384, 0.017681701108813286, 0.04631771147251129, 0.0518... |
37 | 37 | ['Misha Denil', 'Pulkit Agrawal', 'Tejas D Kulkarni', 'Tom Erez', 'Peter Battaglia', 'Nando de Freitas'] | 1611.01843v3 | When encountering novel objects, humans are able to infer a wide range of
physical properties such as mass, friction and deformability by interacting
with them in a goal driven way. This process of active interaction is in the
same spirit as a scientist performing experiments to discover hidden facts.
Recent advances i... | Learning to Perform Physics Experiments via Deep Reinforcement Learning | 2,016 | http://arxiv.org/pdf/1611.01843v3 | Title Learning Perform Physics Experiments via Deep Reinforcement Learning Summary encountering novel object human able infer wide range physical property mass friction deformability interacting goal driven way process active interaction spirit scientist performing experiment discover hidden fact Recent advance artific... | [0.004888972733169794, 0.022142047062516212, -0.025890188291668892, 0.007126957178115845, -0.003003097604960203, -0.02418522723019123, 0.06944964826107025, 0.01833062805235386, -0.04840927571058273, 0.04805910214781761, 0.003001939970999956, 0.03375035524368286, -0.037421874701976776, 0.11026794463396072, 0.01542024966... |
38 | 38 | ['Tsung-Hsien Wen', 'David Vandyke', 'Nikola Mrksic', 'Milica Gasic', 'Lina M. Rojas-Barahona', 'Pei-Hao Su', 'Stefan Ultes', 'Steve Young'] | 1604.04562v3 | Teaching machines to accomplish tasks by conversing naturally with humans is
challenging. Currently, developing task-oriented dialogue systems requires
creating multiple components and typically this involves either a large amount
of handcrafting, or acquiring costly labelled datasets to solve a statistical
learning pr... | A Network-based End-to-End Trainable Task-oriented Dialogue System | 2,016 | http://arxiv.org/pdf/1604.04562v3 | Title Networkbased EndtoEnd Trainable Taskoriented Dialogue System Summary Teaching machine accomplish task conversing naturally human challenging Currently developing taskoriented dialogue system requires creating multiple component typically involves either large amount handcrafting acquiring costly labelled datasets... | [0.05435043200850487, 0.017975132912397385, -0.004753083921968937, 0.06381011754274368, -0.01053185947239399, 0.012891801074147224, 0.013492371886968613, -0.020046189427375793, 0.006221742369234562, -0.04255888611078262, -0.04956076666712761, -0.00783568900078535, 0.007973398081958294, 0.07707810401916504, 0.0171229392... |
39 | 39 | ['Johannes Welbl', 'Guillaume Bouchard', 'Sebastian Riedel'] | 1604.05878v1 | Embedding-based Knowledge Base Completion models have so far mostly combined
distributed representations of individual entities or relations to compute
truth scores of missing links. Facts can however also be represented using
pairwise embeddings, i.e. embeddings for pairs of entities and relations. In
this paper we ex... | A Factorization Machine Framework for Testing Bigram Embeddings in
Knowledgebase Completion | 2,016 | http://arxiv.org/pdf/1604.05878v1 | Title Factorization Machine Framework Testing Bigram Embeddings Knowledgebase Completion Summary Embeddingbased Knowledge Base Completion model far mostly combined distributed representation individual entity relation compute truth score missing link Facts however also represented using pairwise embeddings ie embedding... | [0.000826517993118614, 0.009025746956467628, -0.005209268070757389, 0.035159654915332794, 0.0032346744555979967, 0.027672480791807175, -0.006383709143847227, -0.011465386487543583, 0.03268832713365555, -0.02079460769891739, 0.02257765643298626, 0.025877708569169044, -0.0053927102126181126, 0.02905987948179245, -0.00385... |
40 | 40 | ['Franck Dernoncourt', 'Ji Young Lee', 'Peter Szolovits'] | 1612.05251v1 | Existing models based on artificial neural networks (ANNs) for sentence
classification often do not incorporate the context in which sentences appear,
and classify sentences individually. However, traditional sentence
classification approaches have been shown to greatly benefit from jointly
classifying subsequent sente... | Neural Networks for Joint Sentence Classification in Medical Paper
Abstracts | 2,016 | http://arxiv.org/pdf/1612.05251v1 | Title Neural Networks Joint Sentence Classification Medical Paper Abstracts Summary Existing model based artificial neural network ANNs sentence classification often incorporate context sentence appear classify sentence individually However traditional sentence classification approach shown greatly benefit jointly clas... | [0.04733778536319733, 0.0364992655813694, -0.0035521562676876783, 0.002981181489303708, -0.042058493942022324, 0.03678615391254425, 0.025012049823999405, 0.014855986461043358, 0.01904723420739174, -0.04209323227405548, -0.02876879833638668, -0.05620887130498886, 0.036804135888814926, -0.0016750863287597895, -0.00407067... |
41 | 41 | ['Franck Dernoncourt', 'Ji Young Lee', 'Ozlem Uzuner', 'Peter Szolovits'] | 1606.03475v1 | Objective: Patient notes in electronic health records (EHRs) may contain
critical information for medical investigations. However, the vast majority of
medical investigators can only access de-identified notes, in order to protect
the confidentiality of patients. In the United States, the Health Insurance
Portability a... | De-identification of Patient Notes with Recurrent Neural Networks | 2,016 | http://arxiv.org/pdf/1606.03475v1 | Title Deidentification Patient Notes Recurrent Neural Networks Summary Objective Patient note electronic health record EHRs may contain critical information medical investigation However vast majority medical investigator access deidentified note order protect confidentiality patient United States Health Insurance Port... | [0.021877270191907883, 0.06426245719194412, -0.01817529834806919, -0.019433828070759773, -0.002087124390527606, 0.019773103296756744, 0.027331892400979996, 0.010316393338143826, 0.004598485771566629, -0.004994913004338741, 0.04128994792699814, -0.01306468341499567, 0.03758978098630905, 0.048275794833898544, -0.01015557... |
42 | 42 | ['Tsendsuren Munkhdalai', 'Hong Yu'] | 1610.06454v2 | Hypothesis testing is an important cognitive process that supports human
reasoning. In this paper, we introduce a computational hypothesis testing
approach based on memory augmented neural networks. Our approach involves a
hypothesis testing loop that reconsiders and progressively refines a previously
formed hypothesis... | Reasoning with Memory Augmented Neural Networks for Language
Comprehension | 2,016 | http://arxiv.org/pdf/1610.06454v2 | Title Reasoning Memory Augmented Neural Networks Language Comprehension Summary Hypothesis testing important cognitive process support human reasoning paper introduce computational hypothesis testing approach based memory augmented neural network approach involves hypothesis testing loop reconsiders progressively refin... | [0.03371580317616463, -0.004798790905624628, -0.01564905047416687, 0.056556738913059235, -0.03314266726374626, 0.025938093662261963, 0.02255520410835743, -0.03362160921096802, -0.0031821217853575945, -0.017369162291288376, 0.031453315168619156, -0.028457975015044212, 0.03600015118718147, 0.0016937246546149254, 0.024398... |
43 | 43 | ['W. James Murdoch', 'Arthur Szlam'] | 1702.02540v2 | Although deep learning models have proven effective at solving problems in
natural language processing, the mechanism by which they come to their
conclusions is often unclear. As a result, these models are generally treated
as black boxes, yielding no insight of the underlying learned patterns. In this
paper we conside... | Automatic Rule Extraction from Long Short Term Memory Networks | 2,017 | http://arxiv.org/pdf/1702.02540v2 | Title Automatic Rule Extraction Long Short Term Memory Networks Summary Although deep learning model proven effective solving problem natural language processing mechanism come conclusion often unclear result model generally treated black box yielding insight underlying learned pattern paper consider Long Short Term Me... | [0.05388208106160164, -0.0057205636985599995, -0.008655648678541183, 0.07049400359392166, -0.06092309206724167, 0.0017530766781419516, -0.010157352313399315, 0.040230054408311844, 0.0032108710147440434, -0.08655161410570145, 0.028754491358995438, -0.014085361734032631, -0.01891648955643177, 0.06222238019108772, -0.0232... |
44 | 44 | ['Sebastian Gehrmann', 'Franck Dernoncourt', 'Yeran Li', 'Eric T. Carlson', 'Joy T. Wu', 'Jonathan Welt', 'John Foote Jr.', 'Edward T. Moseley', 'David W. Grant', 'Patrick D. Tyler', 'Leo Anthony Celi'] | 1703.08705v1 | Objective: We investigate whether deep learning techniques for natural
language processing (NLP) can be used efficiently for patient phenotyping.
Patient phenotyping is a classification task for determining whether a patient
has a medical condition, and is a crucial part of secondary analysis of
healthcare data. We ass... | Comparing Rule-Based and Deep Learning Models for Patient Phenotyping | 2,017 | http://arxiv.org/pdf/1703.08705v1 | Title Comparing RuleBased Deep Learning Models Patient Phenotyping Summary Objective investigate whether deep learning technique natural language processing NLP used efficiently patient phenotyping Patient phenotyping classification task determining whether patient medical condition crucial part secondary analysis heal... | [0.05393210053443909, 0.017177172005176544, 0.003052765503525734, -0.008947459980845451, -0.010735324583947659, 0.01852540113031864, 0.01480336394160986, 0.013665424659848213, -0.02276225946843624, -0.022496294230222702, 0.010864540003240108, -0.05141543224453926, 0.015303588472306728, 0.06716867536306381, 0.0016145890... |
45 | 45 | ['Ji Young Lee', 'Franck Dernoncourt', 'Peter Szolovits'] | 1704.01523v1 | Over 50 million scholarly articles have been published: they constitute a
unique repository of knowledge. In particular, one may infer from them
relations between scientific concepts, such as synonyms and hyponyms.
Artificial neural networks have been recently explored for relation extraction.
In this work, we continue... | MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional
Neural Networks | 2,017 | http://arxiv.org/pdf/1704.01523v1 | Title MIT SemEval2017 Task 10 Relation Extraction Convolutional Neural Networks Summary 50 million scholarly article published constitute unique repository knowledge particular one may infer relation scientific concept synonym hyponym Artificial neural network recently explored relation extraction work continue line wo... | [0.058312851935625076, 0.06265062093734741, 0.005281275138258934, 0.055801503360271454, -0.031507428735494614, -0.00939155276864767, 0.026195526123046875, 0.029269294813275337, -0.028874466195702553, -0.037247151136398315, -0.02918235957622528, 0.05256698280572891, 0.0099489139392972, 0.0036105012986809015, -0.01692311... |
46 | 46 | ['Ji Young Lee', 'Franck Dernoncourt', 'Peter Szolovits'] | 1705.06273v1 | Recent approaches based on artificial neural networks (ANNs) have shown
promising results for named-entity recognition (NER). In order to achieve high
performances, ANNs need to be trained on a large labeled dataset. However,
labels might be difficult to obtain for the dataset on which the user wants to
perform NER: la... | Transfer Learning for Named-Entity Recognition with Neural Networks | 2,017 | http://arxiv.org/pdf/1705.06273v1 | Title Transfer Learning NamedEntity Recognition Neural Networks Summary Recent approach based artificial neural network ANNs shown promising result namedentity recognition NER order achieve high performance ANNs need trained large labeled dataset However label might difficult obtain dataset user want perform NER label ... | [0.03788178414106369, 0.0226596649736166, 0.0063038170337677, 0.0070561072789132595, 0.0062925005331635475, 0.019892117008566856, 0.005912716966122389, 0.012715619057416916, -0.0196891687810421, -0.0015258564380928874, -0.04504218325018883, -0.004810972139239311, 0.02148493379354477, 0.0035197273828089237, -0.013179363... |
47 | 47 | ['Sai Rajeswar', 'Sandeep Subramanian', 'Francis Dutil', 'Christopher Pal', 'Aaron Courville'] | 1705.10929v1 | Generative Adversarial Networks (GANs) have gathered a lot of attention from
the computer vision community, yielding impressive results for image
generation. Advances in the adversarial generation of natural language from
noise however are not commensurate with the progress made in generating images,
and still lag far ... | Adversarial Generation of Natural Language | 2,017 | http://arxiv.org/pdf/1705.10929v1 | Title Adversarial Generation Natural Language Summary Generative Adversarial Networks GANs gathered lot attention computer vision community yielding impressive result image generation Advances adversarial generation natural language noise however commensurate progress made generating image still lag far behind likeliho... | [0.05477595701813698, 0.08053953945636749, -0.013415508903563023, 0.04601915925741196, -0.016717227175831795, -0.011133761145174503, 0.01022478099912405, -0.007239060942083597, 0.0230791587382555, -0.03693745657801628, 0.005420372821390629, -0.01803283952176571, 0.0035774034913629293, 0.03436025604605675, 0.07447671145... |
48 | 48 | ['Leila Arras', 'Grégoire Montavon', 'Klaus-Robert Müller', 'Wojciech Samek'] | 1706.07206v2 | Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown
to deliver insightful explanations in the form of input space relevances for
understanding feed-forward neural network classification decisions. In the
present work, we extend the usage of LRP to recurrent neural networks. We
propose a specif... | Explaining Recurrent Neural Network Predictions in Sentiment Analysis | 2,017 | http://arxiv.org/pdf/1706.07206v2 | Title Explaining Recurrent Neural Network Predictions Sentiment Analysis Summary Recently technique called Layerwise Relevance Propagation LRP shown deliver insightful explanation form input space relevance understanding feedforward neural network classification decision present work extend usage LRP recurrent neural n... | [0.04484899342060089, 0.02378677763044834, -0.00305969943292439, 0.051354214549064636, -0.011041464284062386, -0.0319642499089241, -0.016614774242043495, -0.006575544364750385, -0.019484154880046844, -0.045627813786268234, -0.01730375364422798, -0.010188334621489048, 0.0017385354731231928, 0.025300318375229836, -0.0215... |
49 | 49 | ['Emmanuel Dufourq', 'Bruce A. Bassett'] | 1709.06990v1 | Can textual data be compressed intelligently without losing accuracy in
evaluating sentiment? In this study, we propose a novel evolutionary
compression algorithm, PARSEC (PARts-of-Speech for sEntiment Compression),
which makes use of Parts-of-Speech tags to compress text in a way that
sacrifices minimal classification... | Text Compression for Sentiment Analysis via Evolutionary Algorithms | 2,017 | http://arxiv.org/pdf/1709.06990v1 | Title Text Compression Sentiment Analysis via Evolutionary Algorithms Summary textual data compressed intelligently without losing accuracy evaluating sentiment study propose novel evolutionary compression algorithm PARSEC PARtsofSpeech sEntiment Compression make use PartsofSpeech tag compress text way sacrifice minima... | [0.029672643169760704, 0.0624702051281929, -0.022342588752508163, 0.011170751415193081, -0.05197097361087799, -0.0025711855851113796, -0.06413792073726654, 0.06467556208372116, -0.019226262345910072, -0.0199278611689806, 0.03828226029872894, 0.02604842372238636, 0.011373891495168209, 0.04111306369304657, -0.03516996279... |
50 | 50 | ['Kartik Audhkhasi', 'Brian Kingsbury', 'Bhuvana Ramabhadran', 'George Saon', 'Michael Picheny'] | 1712.03133v1 | Direct acoustics-to-word (A2W) models in the end-to-end paradigm have
received increasing attention compared to conventional sub-word based automatic
speech recognition models using phones, characters, or context-dependent hidden
Markov model states. This is because A2W models recognize words from speech
without any de... | Building competitive direct acoustics-to-word models for English
conversational speech recognition | 2,017 | http://arxiv.org/pdf/1712.03133v1 | Title Building competitive direct acousticstoword model English conversational speech recognition Summary Direct acousticstoword A2W model endtoend paradigm received increasing attention compared conventional subword based automatic speech recognition model using phone character contextdependent hidden Markov model sta... | [0.04574203118681908, 0.025533463805913925, 0.042400166392326355, 0.04855414852499962, -0.05226854234933853, -0.0042722481302917, 0.020178286358714104, 0.024636832997202873, -0.015204379335045815, -0.06854008883237839, -0.0555407777428627, -0.024402471259236336, 0.05278247594833374, 0.013023776933550835, 0.005281993187... |
51 | 51 | ['Huijuan Xu', 'Kate Saenko'] | 1511.05234v2 | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inferen... | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for
Visual Question Answering | 2,015 | http://arxiv.org/pdf/1511.05234v2 | Title Ask Attend Answer Exploring QuestionGuided Spatial Attention Visual Question Answering Summary address problem Visual Question Answering VQA requires joint image language understanding answer question given photograph Recent approach applied deep image captioning method based convolutionalrecurrent network proble... | [0.0524277538061142, 0.029206106439232826, -0.024834005162119865, 0.04498821869492531, -0.006906989496201277, 0.009571080096065998, 0.025222986936569214, -0.0028687838930636644, 0.02206416241824627, -0.03178846836090088, 0.0006940392195247114, 0.005100044887512922, -0.0010749456705525517, 0.04998240992426872, 0.0496853... |
52 | 52 | ['Yuetan Lin', 'Zhangyang Pang', 'Donghui Wang', 'Yueting Zhuang'] | 1702.06700v1 | Visual question answering (VQA) has witnessed great progress since May, 2015
as a classic problem unifying visual and textual data into a system. Many
enlightening VQA works explore deep into the image and question encodings and
fusing methods, of which attention is the most effective and infusive
mechanism. Current at... | Task-driven Visual Saliency and Attention-based Visual Question
Answering | 2,017 | http://arxiv.org/pdf/1702.06700v1 | Title Taskdriven Visual Saliency Attentionbased Visual Question Answering Summary Visual question answering VQA witnessed great progress since May 2015 classic problem unifying visual textual data system Many enlightening VQA work explore deep image question encoding fusing method attention effective infusive mechanism... | [0.03881014138460159, -0.005521686282008886, -0.0014341219794005156, 0.07670561969280243, 0.02064802125096321, 0.014323089271783829, -0.014976751990616322, 0.008460894227027893, -0.003425849135965109, -0.02413828670978546, -0.009696214459836483, -0.00717691658064723, -0.0037670289166271687, 0.053044453263282776, 0.0465... |
53 | 53 | ['Akash Kumar Dhaka', 'Giampiero Salvi'] | 1606.09163v1 | We present a systematic analysis on the performance of a phonetic recogniser
when the window of input features is not symmetric with respect to the current
frame. The recogniser is based on Context Dependent Deep Neural Networks
(CD-DNNs) and Hidden Markov Models (HMMs). The objective is to reduce the
latency of the sy... | Optimising The Input Window Alignment in CD-DNN Based Phoneme
Recognition for Low Latency Processing | 2,016 | http://arxiv.org/pdf/1606.09163v1 | Title Optimising Input Window Alignment CDDNN Based Phoneme Recognition Low Latency Processing Summary present systematic analysis performance phonetic recogniser window input feature symmetric respect current frame recogniser based Context Dependent Deep Neural Networks CDDNNs Hidden Markov Models HMMs objective reduc... | [-0.020944718271493912, 0.029016690328717232, 0.008884217590093613, 0.02872476726770401, 0.004271514713764191, -0.018700644373893738, 0.07468104362487793, 0.004588223993778229, -0.027082746848464012, 0.040426988154649734, -0.06507018953561783, -0.042696014046669006, 0.08818154036998749, 0.04114234820008278, 0.022322118... |
54 | 54 | ['Peng Qian', 'Xipeng Qiu', 'Xuanjing Huang'] | 1604.06635v1 | Recently, the long short-term memory neural network (LSTM) has attracted wide
interest due to its success in many tasks. LSTM architecture consists of a
memory cell and three gates, which looks similar to the neuronal networks in
the brain. However, there still lacks the evidence of the cognitive
plausibility of LSTM a... | Bridging LSTM Architecture and the Neural Dynamics during Reading | 2,016 | http://arxiv.org/pdf/1604.06635v1 | Title Bridging LSTM Architecture Neural Dynamics Reading Summary Recently long shortterm memory neural network LSTM attracted wide interest due success many task LSTM architecture consists memory cell three gate look similar neuronal network brain However still lack evidence cognitive plausibility LSTM architecture wel... | [-0.012993956916034222, -0.006285125855356455, -0.015956982970237732, 0.033215660601854324, 0.0012327719014137983, 0.015472170896828175, 0.039305780082941055, 0.008511208929121494, 0.027965713292360306, 0.04192258045077324, -0.06797993928194046, -0.034531183540821075, 0.0488734170794487, 0.05301915854215622, 0.05934487... |
55 | 55 | ['Jiwei Li'] | 1412.3714v2 | This paper addresses how a recursive neural network model can automatically
leave out useless information and emphasize important evidence, in other words,
to perform "weight tuning" for higher-level representation acquisition. We
propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural
Network (... | Feature Weight Tuning for Recursive Neural Networks | 2,014 | http://arxiv.org/pdf/1412.3714v2 | Title Feature Weight Tuning Recursive Neural Networks Summary paper address recursive neural network model automatically leave useless information emphasize important evidence word perform weight tuning higherlevel representation acquisition propose two model Weighted Neural Network WNN BinaryExpectation Neural Network... | [0.012952763587236404, 0.029821977019309998, -0.01966494508087635, 0.03055460937321186, 0.0001618549576960504, -0.003271478693932295, 0.0013514960883185267, -0.005855271592736244, -0.020848331972956657, 0.002556001301854849, 0.03033561259508133, -0.017845505848526955, 0.03005438670516014, 0.051921360194683075, 0.006054... |
56 | 56 | ['Sadikin Mujiono', 'Mohamad Ivan Fanany', 'Chan Basaruddin'] | 1610.01891v1 | One essential task in information extraction from the medical corpus is drug
name recognition. Compared with text sources come from other domains, the
medical text is special and has unique characteristics. In addition, the
medical text mining poses more challenges, e.g., more unstructured text, the
fast growing of new... | A New Data Representation Based on Training Data Characteristics to
Extract Drug Named-Entity in Medical Text | 2,016 | http://arxiv.org/pdf/1610.01891v1 | Title New Data Representation Based Training Data Characteristics Extract Drug NamedEntity Medical Text Summary One essential task information extraction medical corpus drug name recognition Compared text source come domain medical text special unique characteristic addition medical text mining pose challenge eg unstru... | [0.033277690410614014, -0.004981243517249823, 0.00892974715679884, 0.012650229968130589, -0.012423137202858925, 0.005561047233641148, 0.0060630664229393005, 0.017601408064365387, 0.005051193293184042, -0.04098929092288017, -0.026254603639245033, -0.01484074629843235, -0.007727933581918478, 0.06162698566913605, 0.017386... |
57 | 57 | ['Eric Malmi', 'Pyry Takala', 'Hannu Toivonen', 'Tapani Raiko', 'Aristides Gionis'] | 1505.04771v2 | Writing rap lyrics requires both creativity to construct a meaningful,
interesting story and lyrical skills to produce complex rhyme patterns, which
form the cornerstone of good flow. We present a rap lyrics generation method
that captures both of these aspects. First, we develop a prediction model to
identify the next... | DopeLearning: A Computational Approach to Rap Lyrics Generation | 2,015 | http://arxiv.org/pdf/1505.04771v2 | Title DopeLearning Computational Approach Rap Lyrics Generation Summary Writing rap lyric requires creativity construct meaningful interesting story lyrical skill produce complex rhyme pattern form cornerstone good flow present rap lyric generation method capture aspect First develop prediction model identify next line... | [0.06822919845581055, 0.032081488519907, -0.01135606225579977, -0.01575624570250511, -0.04941854253411293, -0.0047858408652246, 0.006555943284183741, -0.016065383329987526, -0.03747532144188881, -0.009168005548417568, -0.0009104200289584696, 0.01905684545636177, 0.0540979728102684, 0.021996311843395233, 0.0182785503566... |
58 | 58 | ['Shengxian Wan', 'Yanyan Lan', 'Jun Xu', 'Jiafeng Guo', 'Liang Pang', 'Xueqi Cheng'] | 1604.04378v1 | Semantic matching, which aims to determine the matching degree between two
texts, is a fundamental problem for many NLP applications. Recently, deep
learning approach has been applied to this problem and significant improvements
have been achieved. In this paper, we propose to view the generation of the
global interact... | Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN | 2,016 | http://arxiv.org/pdf/1604.04378v1 | Title MatchSRNN Modeling Recursive Matching Structure Spatial RNN Summary Semantic matching aim determine matching degree two text fundamental problem many NLP application Recently deep learning approach applied problem significant improvement achieved paper propose view generation global interaction two text recursive... | [0.04786315932869911, 0.031401198357343674, -0.015540102496743202, 0.07065323740243912, -0.04607900232076645, -0.009140062145888805, -0.019094157963991165, 0.0045071071945130825, -0.009520016610622406, -0.014485950581729412, 0.0053553651086986065, -0.022471679374575615, 0.04384959116578102, 0.06489219516515732, -0.0041... |
59 | 59 | ['Iulian V. Serban', 'Alexander G. Ororbia II', 'Joelle Pineau', 'Aaron Courville'] | 1612.00377v4 | Advances in neural variational inference have facilitated the learning of
powerful directed graphical models with continuous latent variables, such as
variational autoencoders. The hope is that such models will learn to represent
rich, multi-modal latent factors in real-world data, such as natural language
text. Howeve... | Piecewise Latent Variables for Neural Variational Text Processing | 2,016 | http://arxiv.org/pdf/1612.00377v4 | Title Piecewise Latent Variables Neural Variational Text Processing Summary Advances neural variational inference facilitated learning powerful directed graphical model continuous latent variable variational autoencoders hope model learn represent rich multimodal latent factor realworld data natural language text Howev... | [0.05198274552822113, 0.09823858737945557, -0.033034391701221466, 0.02786707691848278, -0.02450750023126602, 0.009729685261845589, 0.03232638165354729, -0.0357762835919857, -0.05808180943131447, -0.04140713810920715, 0.031163692474365234, -0.05717034265398979, -0.014125913381576538, 0.1409609466791153, 0.03976983949542... |
60 | 60 | ['Baolin Peng', 'Kaisheng Yao'] | 1506.00195v1 | Recurrent Neural Networks (RNNs) have become increasingly popular for the
task of language understanding. In this task, a semantic tagger is deployed to
associate a semantic label to each word in an input sequence. The success of
RNN may be attributed to its ability to memorize long-term dependence that
relates the cur... | Recurrent Neural Networks with External Memory for Language
Understanding | 2,015 | http://arxiv.org/pdf/1506.00195v1 | Title Recurrent Neural Networks External Memory Language Understanding Summary Recurrent Neural Networks RNNs become increasingly popular task language understanding task semantic tagger deployed associate semantic label word input sequence success RNN may attributed ability memorize longterm dependence relates current... | [0.005397650878876448, -0.028821926563978195, -0.004661565646529198, 0.03165142610669136, -0.013744932599365711, -0.011485778726637363, -0.03155529126524925, -0.01816035993397236, -0.0040605636313557625, -0.04181218147277832, -0.017562074586749077, -0.01850859820842743, 0.015784334391355515, 0.018213815987110138, 0.000... |
61 | 61 | ['Alessandro Sordoni', 'Michel Galley', 'Michael Auli', 'Chris Brockett', 'Yangfeng Ji', 'Margaret Mitchell', 'Jian-Yun Nie', 'Jianfeng Gao', 'Bill Dolan'] | 1506.06714v1 | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into ac... | A Neural Network Approach to Context-Sensitive Generation of
Conversational Responses | 2,015 | http://arxiv.org/pdf/1506.06714v1 | Title Neural Network Approach ContextSensitive Generation Conversational Responses Summary present novel response generation system trained end end large quantity unstructured Twitter conversation neural network architecture used address sparsity issue arise integrating contextual information classic statistical model ... | [0.06651893258094788, 0.02844914048910141, -0.015687178820371628, 0.01697389967739582, -0.011209608055651188, -0.00512953195720911, 0.022652603685855865, -0.0066788471303880215, 0.005032610148191452, -0.025628216564655304, 0.0077673643827438354, -0.02901383303105831, -0.008281727321445942, 0.06537818908691406, 0.010971... |
62 | 62 | ['Ryan Lowe', 'Nissan Pow', 'Iulian Serban', 'Joelle Pineau'] | 1506.08909v3 | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts o... | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems | 2,015 | http://arxiv.org/pdf/1506.08909v3 | Title Ubuntu Dialogue Corpus Large Dataset Research Unstructured MultiTurn Dialogue Systems Summary paper introduces Ubuntu Dialogue Corpus dataset containing almost 1 million multiturn dialogue total 7 million utterance 100 million word provides unique resource research building dialogue manager based neural language ... | [0.05112911015748978, 0.06905069947242737, -0.0019867701921612024, 0.07173269987106323, -0.022180061787366867, 0.017659947276115417, -0.006270970683544874, -0.006695645395666361, 0.028420697897672653, -0.04165142402052879, -0.019953597337007523, -0.0066855731420218945, -0.016433417797088623, 0.053304798901081085, 0.006... |
63 | 63 | ['Iulian V. Serban', 'Alessandro Sordoni', 'Yoshua Bengio', 'Aaron Courville', 'Joelle Pineau'] | 1507.04808v3 | We investigate the task of building open domain, conversational dialogue
systems based on large dialogue corpora using generative models. Generative
models produce system responses that are autonomously generated word-by-word,
opening up the possibility for realistic, flexible interactions. In support of
this goal, we ... | Building End-To-End Dialogue Systems Using Generative Hierarchical
Neural Network Models | 2,015 | http://arxiv.org/pdf/1507.04808v3 | Title Building EndToEnd Dialogue Systems Using Generative Hierarchical Neural Network Models Summary investigate task building open domain conversational dialogue system based large dialogue corpus using generative model Generative model produce system response autonomously generated wordbyword opening possibility real... | [0.07504977285861969, 0.0586656853556633, 0.00667676841840148, 0.04332181066274643, 0.0008132587536238134, 0.005327644292265177, -0.00212988443672657, -0.013627498410642147, 0.012270418927073479, -0.0374271459877491, -0.005141301546245813, -0.04003185033798218, -0.0022301755379885435, 0.07689303159713745, 0.00879512820... |
64 | 64 | ['Dzmitry Bahdanau', 'Jan Chorowski', 'Dmitriy Serdyuk', 'Philemon Brakel', 'Yoshua Bengio'] | 1508.04395v2 | Many of the current state-of-the-art Large Vocabulary Continuous Speech
Recognition Systems (LVCSR) are hybrids of neural networks and Hidden Markov
Models (HMMs). Most of these systems contain separate components that deal with
the acoustic modelling, language modelling and sequence decoding. We
investigate a more dir... | End-to-End Attention-based Large Vocabulary Speech Recognition | 2,015 | http://arxiv.org/pdf/1508.04395v2 | Title EndtoEnd Attentionbased Large Vocabulary Speech Recognition Summary Many current stateoftheart Large Vocabulary Continuous Speech Recognition Systems LVCSR hybrid neural network Hidden Markov Models HMMs system contain separate component deal acoustic modelling language modelling sequence decoding investigate dir... | [0.0036636502481997013, 0.03559265658259392, 0.028615348041057587, 0.05915956571698189, -0.004340333864092827, -0.011123569682240486, -8.60570726217702e-05, 0.006707873195409775, -0.019050870090723038, -0.044991590082645416, -0.0426768995821476, -0.014255586080253124, 0.03938936069607735, 0.03283065930008888, 0.0070450... |
65 | 65 | ['Baolin Peng', 'Zhengdong Lu', 'Hang Li', 'Kam-Fai Wong'] | 1508.05508v1 | We propose Neural Reasoner, a framework for neural network-based reasoning
over natural language sentences. Given a question, Neural Reasoner can infer
over multiple supporting facts and find an answer to the question in specific
forms. Neural Reasoner has 1) a specific interaction-pooling mechanism,
allowing it to exa... | Towards Neural Network-based Reasoning | 2,015 | http://arxiv.org/pdf/1508.05508v1 | Title Towards Neural Networkbased Reasoning Summary propose Neural Reasoner framework neural networkbased reasoning natural language sentence Given question Neural Reasoner infer multiple supporting fact find answer question specific form Neural Reasoner 1 specific interactionpooling mechanism allowing examine multiple... | [0.018060922622680664, 0.03844502195715904, 0.02135317027568817, 0.053976576775312424, -0.029643064364790916, -8.704445644980296e-05, 0.0524468719959259, 0.0009235565667040646, -0.00865957885980606, -0.01684771478176117, 0.04920750856399536, 0.012071599252521992, -0.020566226914525032, 0.027884943410754204, 0.014114674... |
66 | 66 | ['Hongyuan Mei', 'Mohit Bansal', 'Matthew R. Walter'] | 1509.00838v2 | We propose an end-to-end, domain-independent neural encoder-aligner-decoder
model for selective generation, i.e., the joint task of content selection and
surface realization. Our model first encodes a full set of over-determined
database event records via an LSTM-based recurrent neural network, then
utilizes a novel co... | What to talk about and how? Selective Generation using LSTMs with
Coarse-to-Fine Alignment | 2,015 | http://arxiv.org/pdf/1509.00838v2 | Title talk Selective Generation using LSTMs CoarsetoFine Alignment Summary propose endtoend domainindependent neural encoderalignerdecoder model selective generation ie joint task content selection surface realization model first encodes full set overdetermined database event record via LSTMbased recurrent neural netwo... | [-0.0010701733408495784, 0.020024528726935387, -0.007494167424738407, 0.034683242440223694, 0.010953160002827644, -0.006938681937754154, 0.010699848644435406, -0.01741715520620346, -0.04157884791493416, -0.014847035519778728, 0.003465132787823677, 0.012246784754097462, 0.02054784819483757, 0.0906740054488182, 0.0112764... |
67 | 67 | ['Tim Rocktäschel', 'Edward Grefenstette', 'Karl Moritz Hermann', 'Tomáš Kočiský', 'Phil Blunsom'] | 1509.06664v4 | While most approaches to automatically recognizing entailment relations have
used classifiers employing hand engineered features derived from complex
natural language processing pipelines, in practice their performance has been
only slightly better than bag-of-word pair classifiers using only lexical
similarity. The on... | Reasoning about Entailment with Neural Attention | 2,015 | http://arxiv.org/pdf/1509.06664v4 | Title Reasoning Entailment Neural Attention Summary approach automatically recognizing entailment relation used classifier employing hand engineered feature derived complex natural language processing pipeline practice performance slightly better bagofword pair classifier using lexical similarity attempt far build endt... | [0.040562331676483154, 0.017046701163053513, -0.009960235096514225, 0.0880603939294815, -0.037016768008470535, 0.01922035776078701, -0.026768028736114502, -0.005412455648183823, 0.04625948518514633, -0.03011850267648697, 0.022725339978933334, 0.046445656567811966, 0.012567591853439808, 0.03314181789755821, 0.0241728182... |
68 | 68 | ['Yu Zhang', 'Guoguo Chen', 'Dong Yu', 'Kaisheng Yao', 'Sanjeev Khudanpur', 'James Glass'] | 1510.08983v2 | In this paper, we extend the deep long short-term memory (DLSTM) recurrent
neural networks by introducing gated direct connections between memory cells in
adjacent layers. These direct links, called highway connections, enable
unimpeded information flow across different layers and thus alleviate the
gradient vanishing ... | Highway Long Short-Term Memory RNNs for Distant Speech Recognition | 2,015 | http://arxiv.org/pdf/1510.08983v2 | Title Highway Long ShortTerm Memory RNNs Distant Speech Recognition Summary paper extend deep long shortterm memory DLSTM recurrent neural network introducing gated direct connection memory cell adjacent layer direct link called highway connection enable unimpeded information flow across different layer thus alleviate ... | [0.002620002953335643, 0.03971334174275398, 0.01974472403526306, 0.05725536122918129, 0.004510858561843634, -0.006223713047802448, 0.057537518441677094, 0.012117910198867321, 0.02173352614045143, -0.03259125351905823, -0.044787194579839706, -0.05307859182357788, 0.03997835889458656, 0.030911626294255257, 0.005527654662... |
69 | 69 | ['Pengcheng Yin', 'Zhengdong Lu', 'Hang Li', 'Ben Kao'] | 1512.00965v2 | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Un... | Neural Enquirer: Learning to Query Tables with Natural Language | 2,015 | http://arxiv.org/pdf/1512.00965v2 | Title Neural Enquirer Learning Query Tables Natural Language Summary proposed Neural Enquirer neural network architecture execute natural language NL query knowledgebase KB answer Basically Neural Enquirer find distributed representation query executes knowledgebase table obtain answer one value table Unlike similar ef... | [0.050602201372385025, 0.07603490352630615, -0.004970546346157789, 0.042139116674661636, -0.017126917839050293, -0.008230132050812244, -0.019356362521648407, 0.009580429643392563, 0.008824012242257595, -0.022665787488222122, -0.005566886160522699, 0.046493031084537506, -0.044214747846126556, 0.04636704921722412, 0.0340... |
70 | 70 | ['Petr Baudiš', 'Jan Pichl', 'Tomáš Vyskočil', 'Jan Šedivý'] | 1603.06127v4 | We review the task of Sentence Pair Scoring, popular in the literature in
various forms - viewed as Answer Sentence Selection, Semantic Text Scoring,
Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a
component of Memory Networks.
We argue that all such tasks are similar from the model per... | Sentence Pair Scoring: Towards Unified Framework for Text Comprehension | 2,016 | http://arxiv.org/pdf/1603.06127v4 | Title Sentence Pair Scoring Towards Unified Framework Text Comprehension Summary review task Sentence Pair Scoring popular literature various form viewed Answer Sentence Selection Semantic Text Scoring Next Utterance Ranking Recognizing Textual Entailment Paraphrasing eg component Memory Networks argue task similar mod... | [0.04160737618803978, 0.04916667565703392, -0.006056435871869326, 0.07171273976564407, -0.0365428701043129, 0.023142997175455093, -0.002609814051538706, -0.029520545154809952, -0.012600592337548733, -0.03214128315448761, -0.02376331388950348, -0.011796790175139904, 0.02304866723716259, 0.028991086408495903, -0.01025020... |
71 | 71 | ['Jiatao Gu', 'Zhengdong Lu', 'Hang Li', 'Victor O. K. Li'] | 1603.06393v3 | We address an important problem in sequence-to-sequence (Seq2Seq) learning
referred to as copying, in which certain segments in the input sequence are
selectively replicated in the output sequence. A similar phenomenon is
observable in human language communication. For example, humans tend to repeat
entity names or eve... | Incorporating Copying Mechanism in Sequence-to-Sequence Learning | 2,016 | http://arxiv.org/pdf/1603.06393v3 | Title Incorporating Copying Mechanism SequencetoSequence Learning Summary address important problem sequencetosequence Seq2Seq learning referred copying certain segment input sequence selectively replicated output sequence similar phenomenon observable human language communication example human tend repeat entity name ... | [0.0387384369969368, 0.08290021121501923, 0.001752879354171455, 0.03323785215616226, -0.050581447780132294, 0.007113951724022627, -0.007450574543327093, 0.0347331203520298, -0.0663609504699707, -0.0353577695786953, 0.015634460374712944, -0.0024770135059952736, 0.017113493755459785, 0.0466763935983181, 0.040870621800422... |
72 | 72 | ['Iulian Vlad Serban', 'Alberto García-Durán', 'Caglar Gulcehre', 'Sungjin Ahn', 'Sarath Chandar', 'Aaron Courville', 'Yoshua Bengio'] | 1603.06807v2 | Over the past decade, large-scale supervised learning corpora have enabled
machine learning researchers to make substantial advances. However, to this
date, there are no large-scale question-answer corpora available. In this paper
we present the 30M Factoid Question-Answer Corpus, an enormous question answer
pair corpu... | Generating Factoid Questions With Recurrent Neural Networks: The 30M
Factoid Question-Answer Corpus | 2,016 | http://arxiv.org/pdf/1603.06807v2 | Title Generating Factoid Questions Recurrent Neural Networks 30M Factoid QuestionAnswer Corpus Summary past decade largescale supervised learning corpus enabled machine learning researcher make substantial advance However date largescale questionanswer corpus available paper present 30M Factoid QuestionAnswer Corpus en... | [0.07906658947467804, 0.03347751125693321, -0.012156195007264614, 0.024543317034840584, -0.008402527309954166, 0.02561432495713234, 0.03566369786858559, 0.01811807043850422, -0.018982665613293648, -0.02428242564201355, 0.021681996062397957, -0.007761327549815178, -0.04212968796491623, 0.03326527774333954, 0.03074628487... |
73 | 73 | ['Chia-Wei Liu', 'Ryan Lowe', 'Iulian V. Serban', 'Michael Noseworthy', 'Laurent Charlin', 'Joelle Pineau'] | 1603.08023v2 | We investigate evaluation metrics for dialogue response generation systems
where supervised labels, such as task completion, are not available. Recent
works in response generation have adopted metrics from machine translation to
compare a model's generated response to a single target response. We show that
these metric... | How NOT To Evaluate Your Dialogue System: An Empirical Study of
Unsupervised Evaluation Metrics for Dialogue Response Generation | 2,016 | http://arxiv.org/pdf/1603.08023v2 | Title Evaluate Dialogue System Empirical Study Unsupervised Evaluation Metrics Dialogue Response Generation Summary investigate evaluation metric dialogue response generation system supervised label task completion available Recent work response generation adopted metric machine translation compare model generated resp... | [0.06705658882856369, 0.0024863320868462324, -0.010683062486350536, 0.026092534884810448, -0.011607684195041656, 0.007161720655858517, 0.020762262865900993, 0.0042042238637804985, 0.02620399184525013, -0.04520987719297409, -0.044234130531549454, -0.009241407737135887, -0.003777619218453765, 0.06811194121837616, -0.0059... |
74 | 74 | ['Iulian Vlad Serban', 'Alessandro Sordoni', 'Ryan Lowe', 'Laurent Charlin', 'Joelle Pineau', 'Aaron Courville', 'Yoshua Bengio'] | 1605.06069v3 | Sequential data often possesses a hierarchical structure with complex
dependencies between subsequences, such as found between the utterances in a
dialogue. In an effort to model this kind of generative process, we propose a
neural network-based generative architecture, with latent stochastic variables
that span a vari... | A Hierarchical Latent Variable Encoder-Decoder Model for Generating
Dialogues | 2,016 | http://arxiv.org/pdf/1605.06069v3 | Title Hierarchical Latent Variable EncoderDecoder Model Generating Dialogues Summary Sequential data often posse hierarchical structure complex dependency subsequence found utterance dialogue effort model kind generative process propose neural networkbased generative architecture latent stochastic variable span variabl... | [0.06646768748760223, 0.06293745338916779, -0.007975352928042412, 0.024971112608909607, 0.008771158754825592, -0.007252962794154882, 0.003914270084351301, -0.025218840688467026, -0.025848684832453728, -0.03928428888320923, 0.016512366011738777, -0.02540435455739498, 0.01444464735686779, 0.0966857448220253, 0.0244050584... |
75 | 75 | ['Dirk Weissenborn'] | 1606.03864v2 | Many important NLP problems can be posed as dual-sequence or
sequence-to-sequence modeling tasks. Recent advances in building end-to-end
neural architectures have been highly successful in solving such tasks. In this
work we propose a new architecture for dual-sequence modeling that is based on
associative memory. We d... | Neural Associative Memory for Dual-Sequence Modeling | 2,016 | http://arxiv.org/pdf/1606.03864v2 | Title Neural Associative Memory DualSequence Modeling Summary Many important NLP problem posed dualsequence sequencetosequence modeling task Recent advance building endtoend neural architecture highly successful solving task work propose new architecture dualsequence modeling based associative memory derive AMRNNs recu... | [0.05189606919884682, 0.049359217286109924, -0.03858322277665138, 0.03595011308789253, -0.026050789281725883, 0.0005870053428225219, -0.008557391352951527, -0.03977277874946594, 0.00774132227525115, -0.047866933047771454, 0.034330904483795166, -0.07000040262937546, -0.007414250168949366, 0.019366860389709473, 0.0148836... |
76 | 76 | ['Marc Dymetman', 'Chunyang Xiao'] | 1607.02467v2 | We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural
Networks that replaces the softmax output layer by a log-linear output layer,
of which the softmax is a special case. This conceptually simple move has two
main advantages. First, it allows the learner to combat training data sparsity
by allowing ... | Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior
Knowledge | 2,016 | http://arxiv.org/pdf/1607.02467v2 | Title LogLinear RNNs Towards Recurrent Neural Networks Flexible Prior Knowledge Summary introduce LLRNNs LogLinear RNNs extension Recurrent Neural Networks replaces softmax output layer loglinear output layer softmax special case conceptually simple move two main advantage First allows learner combat training data spar... | [0.018478121608495712, 0.04983919858932495, 0.0189732126891613, 0.026548797264695168, -0.03243403509259224, -0.004975564777851105, 0.010365774855017662, -0.01698579266667366, -0.04211164265871048, -0.02693321369588375, -0.012811820022761822, -0.04480641335248947, 0.027834083884954453, 0.05144667997956276, 0.04944017156... |
77 | 77 | ['Ondrej Bajgar', 'Rudolf Kadlec', 'Jan Kleindienst'] | 1610.00956v1 | There is a practically unlimited amount of natural language data available.
Still, recent work in text comprehension has focused on datasets which are
small relative to current computing possibilities. This article is making a
case for the community to move to larger data and as a step in that direction
it is proposing... | Embracing data abundance: BookTest Dataset for Reading Comprehension | 2,016 | http://arxiv.org/pdf/1610.00956v1 | Title Embracing data abundance BookTest Dataset Reading Comprehension Summary practically unlimited amount natural language data available Still recent work text comprehension focused datasets small relative current computing possibility article making case community move larger data step direction proposing BookTest n... | [0.025445127859711647, 0.03385183960199356, -0.012874915264546871, 0.03594973310828209, -0.02379055880010128, 0.024186957627534866, 0.03685464709997177, -0.0361676961183548, -0.015774870291352272, -0.04073069989681244, -0.010656061582267284, -0.012908689677715302, 0.03424857184290886, 0.0068349954672157764, 0.015691727... |
78 | 78 | ['James Bradbury', 'Stephen Merity', 'Caiming Xiong', 'Richard Socher'] | 1611.01576v2 | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modelin... | Quasi-Recurrent Neural Networks | 2,016 | http://arxiv.org/pdf/1611.01576v2 | Title QuasiRecurrent Neural Networks Summary Recurrent neural network powerful tool modeling sequential data dependence timesteps computation previous timesteps output limit parallelism make RNNs unwieldy long sequence introduce quasirecurrent neural network QRNNs approach neural sequence modeling alternate convolution... | [0.015634862706065178, 0.07041216641664505, -0.012827531434595585, 0.017810285091400146, -0.03620230033993721, 0.011213905178010464, 0.013429611921310425, 0.008376345969736576, -0.011478225700557232, -0.019606759771704674, 0.009776151739060879, -0.02758096717298031, 0.022093847393989563, 0.043517790734767914, -0.013645... |
79 | 79 | ['Jakob N. Foerster', 'Justin Gilmer', 'Jan Chorowski', 'Jascha Sohl-Dickstein', 'David Sussillo'] | 1611.09434v2 | There exist many problem domains where the interpretability of neural network
models is essential for deployment. Here we introduce a recurrent architecture
composed of input-switched affine transformations - in other words an RNN
without any explicit nonlinearities, but with input-dependent recurrent
weights. This sim... | Input Switched Affine Networks: An RNN Architecture Designed for
Interpretability | 2,016 | http://arxiv.org/pdf/1611.09434v2 | Title Input Switched Affine Networks RNN Architecture Designed Interpretability Summary exist many problem domain interpretability neural network model essential deployment introduce recurrent architecture composed inputswitched affine transformation word RNN without explicit nonlinearities inputdependent recurrent wei... | [0.010674873366951942, 0.018224945291876793, -0.038852620869874954, 0.05739248916506767, -0.021765777841210365, -0.01853686012327671, 0.034835197031497955, 0.0044465819373726845, -0.06506939232349396, -0.02134864404797554, -0.030794117599725723, -0.0017826718976721168, 0.0074800411239266396, 0.0892687439918518, 0.02445... |
80 | 80 | ['Michał Daniluk', 'Tim Rocktäschel', 'Johannes Welbl', 'Sebastian Riedel'] | 1702.04521v1 | Neural language models predict the next token using a latent representation
of the immediate token history. Recently, various methods for augmenting neural
language models with an attention mechanism over a differentiable memory have
been proposed. For predicting the next token, these models query information
from a me... | Frustratingly Short Attention Spans in Neural Language Modeling | 2,017 | http://arxiv.org/pdf/1702.04521v1 | Title Frustratingly Short Attention Spans Neural Language Modeling Summary Neural language model predict next token using latent representation immediate token history Recently various method augmenting neural language model attention mechanism differentiable memory proposed predicting next token model query informatio... | [0.04816904664039612, 0.05674388259649277, -0.0201411135494709, 0.01817266270518303, -0.007094713859260082, -0.011209775693714619, 0.017209338024258614, -0.02115270122885704, -0.04461260139942169, -0.0364251472055912, -0.006139329168945551, -0.03879533335566521, 0.01870228722691536, 0.05745745822787285, 0.0486161746084... |
81 | 81 | ['Zhouhan Lin', 'Minwei Feng', 'Cicero Nogueira dos Santos', 'Mo Yu', 'Bing Xiang', 'Bowen Zhou', 'Yoshua Bengio'] | 1703.03130v1 | This paper proposes a new model for extracting an interpretable sentence
embedding by introducing self-attention. Instead of using a vector, we use a
2-D matrix to represent the embedding, with each row of the matrix attending on
a different part of the sentence. We also propose a self-attention mechanism
and a special... | A Structured Self-attentive Sentence Embedding | 2,017 | http://arxiv.org/pdf/1703.03130v1 | Title Structured Selfattentive Sentence Embedding Summary paper proposes new model extracting interpretable sentence embedding introducing selfattention Instead using vector use 2D matrix represent embedding row matrix attending different part sentence also propose selfattention mechanism special regularization term mo... | [0.025598371401429176, 0.014826016500592232, -0.0061867861077189445, 0.09527096152305603, -0.023132028058171272, 0.004297102335840464, -0.009861559607088566, -0.03200750797986984, 0.048720844089984894, -0.04509393125772476, -0.01509841252118349, 0.065526582300663, -0.048346150666475296, 0.040342625230550766, -0.0106384... |
82 | 82 | ['Samuel Rönnqvist', 'Niko Schenk', 'Christian Chiarcos'] | 1704.08092v1 | We introduce an attention-based Bi-LSTM for Chinese implicit discourse
relations and demonstrate that modeling argument pairs as a joint sequence can
outperform word order-agnostic approaches. Our model benefits from a partial
sampling scheme and is conceptually simple, yet achieves state-of-the-art
performance on the ... | A Recurrent Neural Model with Attention for the Recognition of Chinese
Implicit Discourse Relations | 2,017 | http://arxiv.org/pdf/1704.08092v1 | Title Recurrent Neural Model Attention Recognition Chinese Implicit Discourse Relations Summary introduce attentionbased BiLSTM Chinese implicit discourse relation demonstrate modeling argument pair joint sequence outperform word orderagnostic approach model benefit partial sampling scheme conceptually simple yet achie... | [0.0388648621737957, 0.015349977649748325, -0.009019549936056137, 0.08634036034345627, -0.006240590941160917, -0.01864231750369072, 0.0005546743050217628, -0.009385770186781883, 0.009124504402279854, -0.07483068853616714, -0.003993363585323095, -0.02777963876724243, 0.02795795537531376, 0.013620425947010517, 0.00709872... |
83 | 83 | ['Lara J. Martin', 'Prithviraj Ammanabrolu', 'Xinyu Wang', 'William Hancock', 'Shruti Singh', 'Brent Harrison', 'Mark O. Riedl'] | 1706.01331v3 | Automated story generation is the problem of automatically selecting a
sequence of events, actions, or words that can be told as a story. We seek to
develop a system that can generate stories by learning everything it needs to
know from textual story corpora. To date, recurrent neural networks that learn
language model... | Event Representations for Automated Story Generation with Deep Neural
Nets | 2,017 | http://arxiv.org/pdf/1706.01331v3 | Title Event Representations Automated Story Generation Deep Neural Nets Summary Automated story generation problem automatically selecting sequence event action word told story seek develop system generate story learning everything need know textual story corpus date recurrent neural network learn language model charac... | [0.024326177313923836, 0.034451182931661606, -0.02322245202958584, 0.05542153865098953, -0.05120529234409332, -0.0012775624636560678, 0.022607892751693726, -0.012914764694869518, 0.0013930521672591567, -0.039871297776699066, 0.06197723001241684, 0.01201376412063837, 0.015386915765702724, 0.13734115660190582, -0.0196597... |
84 | 84 | ['Tong Wang', 'Xingdi Yuan', 'Adam Trischler'] | 1706.01450v1 | We propose a generative machine comprehension model that learns jointly to
ask and answer questions based on documents. The proposed model uses a
sequence-to-sequence framework that encodes the document and generates a
question (answer) given an answer (question). Significant improvement in model
performance is observe... | A Joint Model for Question Answering and Question Generation | 2,017 | http://arxiv.org/pdf/1706.01450v1 | Title Joint Model Question Answering Question Generation Summary propose generative machine comprehension model learns jointly ask answer question based document proposed model us sequencetosequence framework encodes document generates question answer given answer question Significant improvement model performance obse... | [0.07029268145561218, 0.05171780288219452, -0.020893651992082596, 0.01403932273387909, -0.014812195673584938, 0.0352637879550457, 0.012427830137312412, -0.00030363400583155453, 0.006699640769511461, -0.0045855785720050335, 0.004981146659702063, -0.005575785879045725, -0.035014454275369644, 0.04250645264983177, 0.010775... |
85 | 85 | ['Wei Wen', 'Yuxiong He', 'Samyam Rajbhandari', 'Minjia Zhang', 'Wenhan Wang', 'Fang Liu', 'Bin Hu', 'Yiran Chen', 'Hai Li'] | 1709.05027v7 | Model compression is significant for the wide adoption of Recurrent Neural
Networks (RNNs) in both user devices possessing limited resources and business
clusters requiring quick responses to large-scale service requests. This work
aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the
sizes of... | Learning Intrinsic Sparse Structures within Long Short-Term Memory | 2,017 | http://arxiv.org/pdf/1709.05027v7 | Title Learning Intrinsic Sparse Structures within Long ShortTerm Memory Summary Model compression significant wide adoption Recurrent Neural Networks RNNs user device possessing limited resource business cluster requiring quick response largescale service request work aim learn structurallysparse Long ShortTerm Memory ... | [0.0015198070323094726, 0.024757659062743187, 0.004226320888847113, 0.05461505427956581, 0.02564852125942707, -0.020287277176976204, 0.035923074930906296, 0.02002188377082348, -0.03183319792151451, -0.012838107533752918, -0.02065843902528286, -0.056550681591033936, 0.009196764789521694, 0.030382253229618073, 0.02733928... |
86 | 86 | ['Huda Hakami', 'Danushka Bollegala', 'Hayashi Kohei'] | 1709.06673v2 | Representing the semantic relations that exist between two given words (or
entities) is an important first step in a wide-range of NLP applications such
as analogical reasoning, knowledge base completion and relational information
retrieval. A simple, yet surprisingly accurate method for representing a
relation between... | Why PairDiff works? -- A Mathematical Analysis of Bilinear Relational
Compositional Operators for Analogy Detection | 2,017 | http://arxiv.org/pdf/1709.06673v2 | Title PairDiff work Mathematical Analysis Bilinear Relational Compositional Operators Analogy Detection Summary Representing semantic relation exist two given word entity important first step widerange NLP application analogical reasoning knowledge base completion relational information retrieval simple yet surprisingl... | [0.005939915776252747, 0.05347522720694542, -0.010577460750937462, 0.0691685602068901, -0.0343889556825161, 0.016756443306803703, 0.007281046360731125, 0.022173646837472916, 0.004108861088752747, -0.05177553743124008, -0.011418481357395649, -0.010816593654453754, -0.019512318074703217, -0.028440771624445915, 0.00320503... |
87 | 87 | ['Zhengdong Lu', 'Haotian Cui', 'Xianggen Liu', 'Yukun Yan', 'Daqi Zheng'] | 1709.08853v4 | We propose Object-oriented Neural Programming (OONP), a framework for
semantically parsing documents in specific domains. Basically, OONP reads a
document and parses it into a predesigned object-oriented data structure
(referred to as ontology in this paper) that reflects the domain-specific
semantics of the document. ... | Object-oriented Neural Programming (OONP) for Document Understanding | 2,017 | http://arxiv.org/pdf/1709.08853v4 | Title Objectoriented Neural Programming OONP Document Understanding Summary propose Objectoriented Neural Programming OONP framework semantically parsing document specific domain Basically OONP read document par predesigned objectoriented data structure referred ontology paper reflects domainspecific semantics document... | [0.028467047959566116, 0.03453904017806053, 0.028609363362193108, 0.026870351284742355, -0.020189322531223297, 0.012441849336028099, -0.004567192867398262, -0.006413863971829414, -0.04142891615629196, -0.08128751069307327, -0.0010759899159893394, 0.06674836575984955, -0.017097996547818184, 0.07013848423957825, -0.05891... |
88 | 88 | ['Bin Bi', 'Hao Ma'] | 1709.10204v2 | This paper proposes a novel neural machine reading model for open-domain
question answering at scale. Existing machine comprehension models typically
assume that a short piece of relevant text containing answers is already
identified and given to the models, from which the models are designed to
extract answers. This a... | A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering | 2,017 | http://arxiv.org/pdf/1709.10204v2 | Title Neural Comprehensive Ranker NCR OpenDomain Question Answering Summary paper proposes novel neural machine reading model opendomain question answering scale Existing machine comprehension model typically assume short piece relevant text containing answer already identified given model model designed extract answer... | [0.05983716621994972, 0.03970317915081978, 0.001613727188669145, 0.03362972289323807, -0.022936943918466568, 0.048818159848451614, 0.005240192171186209, -0.01142068300396204, 0.03304155170917511, -0.0009634695597924292, 0.01396422740072012, -0.021603025496006012, -0.039359331130981445, 0.040468744933605194, -0.00762926... |
89 | 89 | ['Mirco Ravanelli', 'Philemon Brakel', 'Maurizio Omologo', 'Yoshua Bengio'] | 1710.00641v1 | Speech recognition is largely taking advantage of deep learning, showing that
substantial benefits can be obtained by modern Recurrent Neural Networks
(RNNs). The most popular RNNs are Long Short-Term Memory (LSTMs), which
typically reach state-of-the-art performance in many tasks thanks to their
ability to learn long-... | Improving speech recognition by revising gated recurrent units | 2,017 | http://arxiv.org/pdf/1710.00641v1 | Title Improving speech recognition revising gated recurrent unit Summary Speech recognition largely taking advantage deep learning showing substantial benefit obtained modern Recurrent Neural Networks RNNs popular RNNs Long ShortTerm Memory LSTMs typically reach stateoftheart performance many task thanks ability learn ... | [-0.011323253624141216, 0.04274016618728638, 0.02161041833460331, 0.06906063109636307, -0.012762226164340973, -0.02150745689868927, 0.054751697927713394, 0.015127060003578663, -0.014180455356836319, -0.04825058951973915, -0.04682881385087967, -0.04433776065707207, 0.063630610704422, 0.07375448197126389, 0.0187855027616... |
90 | 90 | ['Baolin Peng', 'Xiujun Li', 'Jianfeng Gao', 'Jingjing Liu', 'Kam-Fai Wong'] | 1801.06176v1 | Training a task-completion dialogue agent with real users via reinforcement
learning (RL) could be prohibitively expensive, because it requires many
interactions with users. One alternative is to resort to a user simulator,
while the discrepancy of between simulated and real users makes the learned
policy unreliable in... | Integrating planning for task-completion dialogue policy learning | 2,018 | http://arxiv.org/pdf/1801.06176v1 | Title Integrating planning taskcompletion dialogue policy learning Summary Training taskcompletion dialogue agent real user via reinforcement learning RL could prohibitively expensive requires many interaction user One alternative resort user simulator discrepancy simulated real user make learned policy unreliable prac... | [0.052217062562704086, 0.045885395258665085, -0.004483434837311506, -0.023136097937822342, -0.023510579019784927, 0.009580758400261402, 0.003991948440670967, -0.031473308801651, -0.006071672774851322, -0.007947076112031937, -0.021096833050251007, 0.024964341893792152, -0.025524942204356194, 0.07312264293432236, -0.0373... |
91 | 91 | ['Andrew L. Maas', 'Peng Qi', 'Ziang Xie', 'Awni Y. Hannun', 'Christopher T. Lengerich', 'Daniel Jurafsky', 'Andrew Y. Ng'] | 1406.7806v2 | Deep neural networks (DNNs) are now a central component of nearly all
state-of-the-art speech recognition systems. Building neural network acoustic
models requires several design decisions including network architecture, size,
and training loss function. This paper offers an empirical investigation on
which aspects of ... | Building DNN Acoustic Models for Large Vocabulary Speech Recognition | 2,014 | http://arxiv.org/pdf/1406.7806v2 | Title Building DNN Acoustic Models Large Vocabulary Speech Recognition Summary Deep neural network DNNs central component nearly stateoftheart speech recognition system Building neural network acoustic model requires several design decision including network architecture size training loss function paper offer empirica... | [0.008880573324859142, 0.018985655158758163, 0.01436031237244606, 0.03560292720794678, -0.003236984834074974, -0.026291022077202797, 0.05768342316150665, -0.005553683266043663, -0.011256711557507515, -0.022665170952677727, -0.0633264109492302, 0.002275430131703615, 0.05088605359196663, 0.013685393147170544, -0.00902808... |
92 | 92 | ['William Chan', 'Ian Lane'] | 1504.01482v1 | We present a novel deep Recurrent Neural Network (RNN) model for acoustic
modelling in Automatic Speech Recognition (ASR). We term our contribution as a
TC-DNN-BLSTM-DNN model, the model combines a Deep Neural Network (DNN) with
Time Convolution (TC), followed by a Bidirectional Long Short-Term Memory
(BLSTM), and a fi... | Deep Recurrent Neural Networks for Acoustic Modelling | 2,015 | http://arxiv.org/pdf/1504.01482v1 | Title Deep Recurrent Neural Networks Acoustic Modelling Summary present novel deep Recurrent Neural Network RNN model acoustic modelling Automatic Speech Recognition ASR term contribution TCDNNBLSTMDNN model model combine Deep Neural Network DNN Time Convolution TC followed Bidirectional Long ShortTerm Memory BLSTM fin... | [-0.023029625415802002, -0.004876798950135708, 0.007106386125087738, 0.05048801377415657, -0.020864373072981834, -0.02969019114971161, 0.021685227751731873, -0.054400306195020676, -0.07343190908432007, -0.0060390206053853035, -0.02053234726190567, -0.012641951441764832, 0.05040573328733444, 0.03846380114555359, 0.01577... |
93 | 93 | ['David Krueger', 'Roland Memisevic'] | 1511.08400v7 | We stabilize the activations of Recurrent Neural Networks (RNNs) by
penalizing the squared distance between successive hidden states' norms.
This penalty term is an effective regularizer for RNNs including LSTMs and
IRNNs, improving performance on character-level language modeling and phoneme
recognition, and outperf... | Regularizing RNNs by Stabilizing Activations | 2,015 | http://arxiv.org/pdf/1511.08400v7 | Title Regularizing RNNs Stabilizing Activations Summary stabilize activation Recurrent Neural Networks RNNs penalizing squared distance successive hidden state norm penalty term effective regularizer RNNs including LSTMs IRNNs improving performance characterlevel language modeling phoneme recognition outperforming weig... | [0.0009324587881565094, 0.07279003411531448, 0.008771753869950771, 0.014386710710823536, 0.0023166430182754993, -0.029290569946169853, 0.03192298859357834, 0.040976859629154205, -0.02445213869214058, 0.016157979145646095, -0.02064850553870201, -0.058204080909490585, 0.01878456398844719, 0.007700189482420683, 0.00976213... |
94 | 94 | ['Noam Shazeer', 'Azalia Mirhoseini', 'Krzysztof Maziarz', 'Andy Davis', 'Quoc Le', 'Geoffrey Hinton', 'Jeff Dean'] | 1701.06538v1 | The capacity of a neural network to absorb information is limited by its
number of parameters. Conditional computation, where parts of the network are
active on a per-example basis, has been proposed in theory as a way of
dramatically increasing model capacity without a proportional increase in
computation. In practice... | Outrageously Large Neural Networks: The Sparsely-Gated
Mixture-of-Experts Layer | 2,017 | http://arxiv.org/pdf/1701.06538v1 | Title Outrageously Large Neural Networks SparselyGated MixtureofExperts Layer Summary capacity neural network absorb information limited number parameter Conditional computation part network active perexample basis proposed theory way dramatically increasing model capacity without proportional increase computation prac... | [0.036124326288700104, 0.03961411491036415, -0.00015774837811477482, 0.06119773909449577, 0.02255227603018284, 0.016653865575790405, 0.03068103827536106, 0.028619125485420227, -0.04319979250431061, -0.026061443611979485, -0.02470974624156952, -0.05041300132870674, 0.009488970972597599, 0.01390055101364851, 0.0352874696... |
95 | 95 | ['Yacine Jernite', 'Samuel R. Bowman', 'David Sontag'] | 1705.00557v1 | This work presents a novel objective function for the unsupervised training
of neural network sentence encoders. It exploits signals from paragraph-level
discourse coherence to train these models to understand text. Our objective is
purely discriminative, allowing us to train models many times faster than was
possible ... | Discourse-Based Objectives for Fast Unsupervised Sentence Representation
Learning | 2,017 | http://arxiv.org/pdf/1705.00557v1 | Title DiscourseBased Objectives Fast Unsupervised Sentence Representation Learning Summary work present novel objective function unsupervised training neural network sentence encoders exploit signal paragraphlevel discourse coherence train model understand text objective purely discriminative allowing u train model man... | [0.009210407733917236, 0.04873969778418541, 0.031147388741374016, 0.05972865968942642, -0.03127425163984299, -0.01853874884545803, -0.0032950961031019688, -0.03150421380996704, -0.008737357333302498, -0.07214866578578949, -0.005271330941468477, 0.027711158618330956, 0.01594187691807747, 0.009612095542252064, -0.0387347... |
96 | 96 | ['Zhengyang Wang', 'Shuiwang Ji'] | 1705.06824v1 | Visual question answering is a recently proposed artificial intelligence task
that requires a deep understanding of both images and texts. In deep learning,
images are typically modeled through convolutional neural networks, and texts
are typically modeled through recurrent neural networks. While the requirement
for mo... | Learning Convolutional Text Representations for Visual Question
Answering | 2,017 | http://arxiv.org/pdf/1705.06824v1 | Title Learning Convolutional Text Representations Visual Question Answering Summary Visual question answering recently proposed artificial intelligence task requires deep understanding image text deep learning image typically modeled convolutional neural network text typically modeled recurrent neural network requireme... | [0.07197307795286179, 0.032304778695106506, -0.005745700094848871, 0.06601636111736298, -0.030158162117004395, 0.01168147474527359, 0.0482843816280365, 0.03252594918012619, -0.0017894002376124263, -0.048014212399721146, -0.0009359444375149906, -0.011657709255814552, -0.019961515441536903, 0.07049494981765747, 0.0264007... |
97 | 97 | ['Kyunghyun Cho', 'Bart van Merrienboer', 'Caglar Gulcehre', 'Dzmitry Bahdanau', 'Fethi Bougares', 'Holger Schwenk', 'Yoshua Bengio'] | 1406.1078v3 | In this paper, we propose a novel neural network model called RNN
Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN
encodes a sequence of symbols into a fixed-length vector representation, and
the other decodes the representation into another sequence of symbols. The
encoder and decoder of t... | Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation | 2,014 | http://arxiv.org/pdf/1406.1078v3 | Title Learning Phrase Representations using RNN EncoderDecoder Statistical Machine Translation Summary paper propose novel neural network model called RNN EncoderDecoder consists two recurrent neural network RNN One RNN encodes sequence symbol fixedlength vector representation decodes representation another sequence sy... | [0.04529965668916702, 0.026439426466822624, -0.013809925876557827, 0.051517702639102936, -0.06017933413386345, 0.012948517687618732, -0.02087404578924179, 0.005684196949005127, -0.05261906981468201, -0.06446237862110138, 0.006176763214170933, -0.00038310239324346185, 0.026355110108852386, 0.004713333677500486, -0.00273... |
98 | 98 | ['Zhiyuan Tang', 'Dong Wang', 'Zhiyong Zhang'] | 1505.04630v5 | Recurrent neural networks (RNNs), particularly long short-term memory (LSTM),
have gained much attention in automatic speech recognition (ASR). Although some
successful stories have been reported, training RNNs remains highly
challenging, especially with limited training data. Recent research found that
a well-trained ... | Recurrent Neural Network Training with Dark Knowledge Transfer | 2,015 | http://arxiv.org/pdf/1505.04630v5 | Title Recurrent Neural Network Training Dark Knowledge Transfer Summary Recurrent neural network RNNs particularly long shortterm memory LSTM gained much attention automatic speech recognition ASR Although successful story reported training RNNs remains highly challenging especially limited training data Recent researc... | [0.01449339184910059, 0.026175323873758316, 0.0177297405898571, 0.04964710772037506, -0.008629859425127506, -0.005975619424134493, 0.03218740597367287, -0.02890104427933693, -0.05938320606946945, -0.03964444249868393, -0.058293748646974564, -0.00953069981187582, 0.027942024171352386, 0.006167636718600988, 0.01253994181... |
99 | 99 | ['Haşim Sak', 'Andrew Senior', 'Françoise Beaufays'] | 1402.1128v1 | Long Short-Term Memory (LSTM) is a recurrent neural network (RNN)
architecture that has been designed to address the vanishing and exploding
gradient problems of conventional RNNs. Unlike feedforward neural networks,
RNNs have cyclic connections making them powerful for modeling sequences. They
have been successfully u... | Long Short-Term Memory Based Recurrent Neural Network Architectures for
Large Vocabulary Speech Recognition | 2,014 | http://arxiv.org/pdf/1402.1128v1 | Title Long ShortTerm Memory Based Recurrent Neural Network Architectures Large Vocabulary Speech Recognition Summary Long ShortTerm Memory LSTM recurrent neural network RNN architecture designed address vanishing exploding gradient problem conventional RNNs Unlike feedforward neural network RNNs cyclic connection makin... | [0.0024354096967726946, -0.0008057195809669793, 0.014775389805436134, 0.06812814623117447, 0.011045847088098526, -0.021482767537236214, 0.03765282779932022, -0.0010265184100717306, -0.00853055901825428, -0.03915851190686226, -0.05398586392402649, -0.031463298946619034, 0.04310267046093941, 0.008393342606723309, 0.00355... |
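The rows above follow the schema listed in the header (`author`, `id`, `summary`, `title`, `year`, `arxiv_url`, `info`, `embeddings`), where `embeddings` holds a dense vector per paper. A minimal sketch of how such rows could be ranked by embedding similarity, using a tiny in-memory stand-in (3-dimensional toy vectors instead of the real ~17k-character vectors; the `id` and `title` values are copied from rows above, the toy vector values are invented for illustration):

```python
import numpy as np
import pandas as pd

# Tiny in-memory stand-in mirroring the dataset's columns; real rows carry
# much higher-dimensional embedding vectors.
rows = [
    {"id": "1402.1128v1", "title": "Long Short-Term Memory Based Recurrent "
     "Neural Network Architectures for Large Vocabulary Speech Recognition",
     "year": 2014, "embeddings": [0.9, 0.1, 0.0]},
    {"id": "1504.01482v1", "title": "Deep Recurrent Neural Networks for "
     "Acoustic Modelling",
     "year": 2015, "embeddings": [0.8, 0.2, 0.1]},
    {"id": "1703.03130v1", "title": "A Structured Self-attentive Sentence "
     "Embedding",
     "year": 2017, "embeddings": [0.0, 0.1, 0.9]},
]
df = pd.DataFrame(rows)

def most_similar(df, query_vec, k=2):
    """Return the k rows whose `embeddings` have the highest cosine
    similarity to `query_vec`."""
    emb = np.stack(df["embeddings"].to_numpy())      # (n_rows, dim) matrix
    q = np.asarray(query_vec, dtype=float)
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:k]                    # descending similarity
    return df.iloc[order].assign(similarity=sims[order])

# Query with a vector close to the two speech-recognition papers.
top = most_similar(df, [1.0, 0.0, 0.0])
print(top[["id", "similarity"]])
```

With the toy query above, the two acoustic-modelling papers rank first, since their vectors point mostly along the first axis. On the real dataset one would use an embedding of a query text, produced by the same model that generated the `embeddings` column.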