Deep neural networks for texture classification—A theoretical analysis


  • File type: Book
  • Language: English
  • Publisher: Elsevier
  • Publication year / country: 2018

Details

Related disciplines: Computer Engineering
Related specializations: Artificial Intelligence
Journal: Neural Networks
University: Louisiana State University – Baton Rouge – USA

Published in an Elsevier journal
Keywords: deep neural networks, texture classification, VC dimension

Description

1. Introduction

Texture is a key ingredient in various object recognition tasks that involve texture-based imagery data such as Brodatz (WWW1, 0000), VisTex (WWW2, 0000), Drexel (Oxholm, Bariya, & Nishino, 2012), KTH (WWW3, 0000), UIUCTex (Lazebnik, Schmid, & Ponce, 2005), as well as forest species datasets (de Paula Filho, Oliveira, & Britto Jr., 2009). Texture characterization has also been shown to be useful in addressing other object categorization problems, such as the Brazilian Forensic Letter Database (BFL) (Freitas, Oliveira, Sabourin, & Bortolozzi, 2008), which was later converted into a textural representation in Hanusiak, Oliveira, Justino, and Sabourin (2012). In Costa, Oliveira, Koerich, and Gouyon (2013), a similar approach was used to find a textural representation of the Latin Music Dataset (Silla Jr., Koerich, & Kaestner, 2008).

Over the last decade, Deep Neural Networks have gained popularity due to their ability to learn data representations in both supervised and unsupervised settings and to generalize to unseen data samples using hierarchical representations. A notable contribution in Deep Learning is the Deep Belief Network (DBN), formed by stacking Restricted Boltzmann Machines (Hinton, Osindero, & Teh, 2006). Another closely related approach, which has gained much traction over the last decade, is the Convolutional Neural Network (CNN) (Lecun, Bottou, Bengio, & Haffner, 1998). CNNs have been shown to outperform DBNs on classical object recognition tasks such as MNIST (WWW4, 0000) and CIFAR (Krizhevsky, 2009). Despite these advances in the field of Deep Learning, there has been limited success in learning textural features using Deep Neural Networks. Does this mean that there is some inherent limitation in existing Neural Network architectures and learning algorithms?

In this paper, following Basu et al. (2016), we try to answer this question by investigating the use of Deep Neural Networks for the classification of texture datasets. First, we derive the size of the feature space for some standard textural features extracted from the input dataset. We then use the theory of Vapnik–Chervonenkis (VC) dimension to show that hand-crafted feature extraction creates low-dimensional representations, which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks.
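To make the role of the VC dimension in the excess error concrete, the following is the classical VC-type generalization bound due to Vapnik; it is a standard illustration, not necessarily the exact statement derived in the paper. With probability at least $1-\delta$, every hypothesis $h$ in a class $\mathcal{H}$ of VC dimension $d$, trained on $n$ i.i.d. samples, satisfies

\[
R(h) \;\le\; \hat{R}_n(h) + \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}},
\]

where $R(h)$ is the true risk and $\hat{R}_n(h)$ is the empirical risk. Because the complexity term grows with $d$, mapping images into a low-dimensional hand-crafted feature space, which effectively restricts the VC dimension of the hypothesis class, tightens the bound and thereby reduces the excess error. This is the intuition the paper's analysis formalizes for CNNs and for Dropout and Dropconnect networks.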