Parth Patel
Professor Jiang Li
CSCI 4800-12
13 November 2018
Deep Learning: Application towards Network Training
Deep learning is a branch of machine learning that helps computers learn over time. Just like humans, computers learn from experience, gradually accumulating concepts and techniques. These computers are able to adapt quickly and collect knowledge as they improve. Several concepts work together to enable a computer to perform machine learning tasks and build upon them. Deep learning is widely applied to speech recognition, visual object detection, and similar problems. The goal of these techniques is to make a computer behave like a human operator, without a human specifying that behavior by hand. Kwang Gi Kim writes that a hierarchy of concepts helps a computer learn: by building harder, more complicated concepts out of simpler ones, the computer can master tasks it could not learn directly.
The field of deep learning relies on several mathematical techniques, drawn from linear algebra, discrete mathematics, and numerical computation. Deep learning is built upon optimization algorithms, neural networks, machine learning, deep feedforward networks, and sequence modeling; these are its primary building blocks. A broad range of concepts is involved in the field, but separating them into the levels above helps show how they work together. These techniques are applied to machine learning problems in speech recognition, bioinformatics, and video games. The goal of this paper is to introduce the basics of deep learning. It will also cover how networks are organized and trained to perform in machine learning environments.

Neural Networks
Deep neural networks are used primarily for pattern recognition and machine learning purposes. Large datasets have been collected to construct models that help machines solve recognition tasks, and these large datasets are essential to modern machine learning methods. One approach to structuring the learning process is to train machines on complex tasks; having mastered these complexities, a machine can perform the easier tasks with little difficulty.
The new large datasets include LabelMe and ImageNet (Deep Convolutional Neural Networks). LabelMe contains hundreds of thousands of fully segmented images, while ImageNet holds over 15 million high-resolution images (Deep Convolutional Neural Networks). To learn to recognize objects from images at this scale, it is better to use a model of larger capacity; a small model cannot capture the variability of the objects found across millions of images.
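To make the structure of such a dataset concrete, the sketch below represents a labeled image collection the way most deep learning frameworks do: one array of pixel data and one parallel array of class labels. The sizes and labels here are hypothetical placeholders, not taken from LabelMe or ImageNet.

```python
import numpy as np

# A hypothetical miniature dataset: 100 RGB images of 32x32 pixels,
# each paired with an integer class label (e.g., 0 = "cat", 1 = "dog").
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
labels = rng.integers(0, 2, size=(100,))

print(images.shape)  # (100, 32, 32, 3)
print(labels.shape)  # (100,)
```

Datasets such as ImageNet follow this same structure, only with millions of images and a thousand classes, which is why models of much larger capacity are needed to fit them.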

Convolutional Neural Networks (CNNs), a class of neural networks, are used to handle the complexity of object recognition. The capacity of a CNN is controlled by its “depth and breadth” (Deep Convolutional Neural Networks). CNNs also make strong and largely correct assumptions about the nature of images, namely the stationarity of statistics and the locality of pixel dependencies (Deep Convolutional Neural Networks).
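The two assumptions named above can be seen in the convolution operation itself. The following sketch, written in plain NumPy rather than any particular deep learning library, implements a minimal 2D convolution: each output pixel depends only on a small neighborhood of the input (locality of pixel dependencies), and the same kernel weights are reused at every position (weight sharing, reflecting the stationarity of statistics).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1).

    Each output value is computed from a small window of the input,
    and the same kernel is applied at every window position.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A simple vertical edge detector applied to a step image:
# the left half is dark (0.0) and the right half bright (1.0).
image = np.zeros((5, 5))
image[:, 2:] = 1.0
kernel = np.array([[-1.0, 1.0]])
edges = conv2d(image, kernel)  # responds only at the dark/bright boundary
```

A CNN layer learns kernels like this one from data instead of fixing them by hand, but the locality and weight sharing are exactly the same.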

Compared to deep feedforward neural networks with similarly sized layers, CNNs are much easier to train because they have far fewer connections and parameters. Having fewer parameters does limit what the network can express, but this cost is small: their “theoretically-best” performance is likely to be only slightly worse (Deep Convolutional Neural Networks).
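A quick parameter count makes the comparison concrete. The layer sizes below are assumptions chosen only for illustration, not figures from the cited paper: a fully connected layer mapping a small color image to 1,000 units versus a convolutional layer with 64 small filters over the same input.

```python
# Fully connected layer: every one of the 32*32*3 input values
# connects to each of the 1,000 output units, plus one bias per unit.
dense_params = (32 * 32 * 3) * 1000 + 1000  # 3,073,000 parameters

# Convolutional layer: 64 filters, each 5x5 over 3 input channels,
# plus one bias per filter; the weights are shared across positions.
conv_params = 64 * (5 * 5 * 3) + 64  # 4,864 parameters

print(dense_params)  # 3073000
print(conv_params)   # 4864
```

Weight sharing is what drives the difference: the convolutional layer reuses the same few thousand weights at every image position, which is why CNNs are so much easier to train despite covering the whole image.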

Deep Feedforward Networks