Effect of dilution in asymmetric recurrent neural networks

  • File type: Article
  • Language: English
  • Publisher: Elsevier
  • Year of publication: 2018

Details

Related fields: Biomedical Engineering, Information Technology
Related specializations: Bioelectrics, Computer Networks
Journal: Neural Networks
Institution: Center for Life Nanoscience – Istituto Italiano di Tecnologia – Italy
DOI: https://doi.org/10.1016/j.neunet.2018.04.003
Published in an Elsevier journal
Keywords: Recurrent neural networks, McCulloch–Pitts neurons, Memory models, Maximum memory storage

Description

1. Introduction

Recurrent neural networks are able to store stimulus-response associations, and serve as a model of how living neural networks store and recall behaviors as responses to given stimuli. A discrete-time deterministic recurrent neural network of N binary neurons is completely characterized by its N^2 edges, and its instantaneous state is defined by a neuron activation vector σ, a binary vector of size N. In this paper, we consider a specific kind of recurrent neural network which is initialized, analogously to a Hopfield network, by assigning to the network's neurons an initial pattern that serves as the network stimulus or input.

The collection of all possible neuron activation vectors contains 2^N allowed vectors σ; these vectors can be partitioned into three categories: steady states, limit cycles, and transient states. Steady states are neuron activation states that do not change in time, and limit cycles are sequences of neuron activation vectors that repeat cyclically, with a period that we call the cycle length. From now on, we will consider a steady state as a limit cycle of length 1. A network, given any initial activation vector, always evolves to a limit cycle, which for this reason we also refer to as an attractor. In other words, a network associates a limit cycle with any initial neural activation state that is given as an input. For this reason, limit cycles can be considered behaviors stored as responses to initial stimuli. In the case of length-1 cycles, the limit behavior is a single activation state, which in the case of Hopfield networks corresponds to the recollection of a memory. In the case of cycles with a length greater than 1, the stored limit behaviors are sequences of activation patterns, which may correspond to a stored dynamical sequence, such as the performance of a complex motor task, or a dynamic sequence of static memories.

In principle, a recurrent neural network stores a certain number of limit behaviors, as vectors drawn from a set of size 2^N, in a data structure defined by N^2 parameters. Furthermore, these vectors can be recovered in response to input stimuli. This clearly has intriguing analogies with content-addressable memory systems capable of indexing large strings of bits (Carpenter, 1989; Hopfield, 1987).
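To make the dynamics described above concrete, the following is a minimal sketch (not code from the paper) of a discrete-time deterministic network of N McCulloch–Pitts-style binary neurons with an asymmetric coupling matrix. It iterates a synchronous threshold update from a random initial stimulus until a state repeats, then reports the length of the transient and of the limit cycle it falls into. The ±1 state coding, the zero threshold, the Gaussian random weights, and all names are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: a discrete-time recurrent network of N binary neurons
# with an asymmetric weight matrix, iterated until it reaches a limit cycle.
import numpy as np

rng = np.random.default_rng(0)

N = 8                                  # number of binary neurons
J = rng.normal(size=(N, N))            # asymmetric coupling matrix (N^2 edges), illustrative choice
np.fill_diagonal(J, 0.0)               # no self-coupling (assumed convention)

def step(sigma):
    """One synchronous update: sigma_i(t+1) = sign(sum_j J_ij * sigma_j(t))."""
    return np.where(J @ sigma >= 0, 1, -1)

def find_limit_cycle(sigma0):
    """Iterate from sigma0 and return (transient length, cycle length)."""
    seen = {}                          # state (as tuple) -> time it was first visited
    sigma, t = sigma0, 0
    while tuple(sigma) not in seen:
        seen[tuple(sigma)] = t
        sigma = step(sigma)
        t += 1
    first = seen[tuple(sigma)]
    return first, t - first

sigma0 = rng.choice([-1, 1], size=N)   # random initial stimulus
transient, cycle = find_limit_cycle(sigma0)
print(f"transient length = {transient}, limit-cycle length = {cycle}")
```

Because the state space is finite (2^N states) and the update rule is deterministic, this loop is guaranteed to terminate: every trajectory must eventually revisit a state, after which it repeats the same cycle, which is exactly why every input is mapped to some attractor.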