
Editorial

Machine Learning 2018: An overview of recent advances of deep learning in computer vision applications - Abed Benaichouche - Inception Institute of Artificial Intelligence

Abstract

Recently, deep learning (DL) has won various challenges in computer vision and artificial intelligence. In this paper, we present real-world uses of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) in the field of computer vision. In the introduction, we show recent research that the Inception Institute of Artificial Intelligence (IIAI) is conducting to push the fields of computer vision and AI forward. For CNNs, we present applications in face recognition and interpretation, and demonstrate object recognition and camera pose estimation. For GANs, we show their use in image colorization and artistic style transfer. Finally, we present further approaches to recognition and super-resolution using CNN and GAN models. For each demonstration, we describe the structure of the system, its limitations, and perspectives on possible improvements.

With recent advances in digital technology, datasets have grown so large that traditional data processing and machine learning techniques can no longer cope with them effectively. Analyzing complex, high-dimensional, and noise-contaminated datasets is a huge challenge, so it is crucial to develop novel algorithms that can summarize, classify, and extract important information and transform it into an understandable form. To address these problems, deep learning (DL) models have shown outstanding performance over the last decade. DL has revolutionized the future of artificial intelligence (AI) and has solved many complex problems that had existed in the AI community for years. In fact, DL models are deeper variants of artificial neural networks (ANNs) with multiple layers, whether linear or non-linear. Each layer is connected to its lower and upper layers through different weights. The capability of DL models to learn hierarchical features from various types of data, e.g., numerical, image, text, and audio, makes them powerful in solving recognition, regression, semi-supervised, and unsupervised problems. In recent years, various deep architectures with different learning paradigms have been introduced to develop machines that can perform similarly to humans, or even better, in domains of application such as medical diagnosis, self-driving cars, natural language and image processing, and predictive forecasting.

To show some recent advances of deep learning, we selected 14 papers from the articles accepted in this journal to organize this issue. Focusing on recent developments in DL architectures and their applications, we classify the articles in this issue into four categories: (1) deep architectures and convolutional neural networks, (2) incremental learning, (3) recurrent neural networks, and (4) generative models and adversarial examples.

The deep neural network (DNN) is one of the most common DL models; it contains multiple layers of linear and non-linear operations. A DNN is an extension of the standard neural network with multiple hidden layers, which allows the model to learn more complex representations of the input data. The convolutional neural network (CNN) is a variant of the DNN inspired by the visual cortex of animals. A CNN usually contains three types of layers: convolution, pooling, and fully connected layers. The convolution and pooling layers are placed in the lower levels. The convolution layers generate a set of linear activations, which are followed by non-linear functions; in effect, the convolution layers apply filters that reduce the complexity of the input data. The pooling layers are then used to down-sample the filtered results: they reduce the size of the activation maps by transferring them into a smaller matrix, which helps mitigate over-fitting by reducing complexity. The fully connected layers are located after the convolution and pooling layers in order to learn more abstract representations of the input data. In the last layer, a loss function, e.g., a soft-max classifier, is used to map the input data to its corresponding class label.
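To make the convolution, pooling, fully connected, and soft-max layout described above concrete, the following is a minimal sketch, assuming PyTorch; the input size (1 x 28 x 28 grayscale images), channel counts, and number of classes are illustrative assumptions, not values taken from the editorial.

# Minimal CNN sketch (assumed PyTorch): conv layers produce linear activations
# followed by a non-linearity, pooling layers down-sample the activation maps,
# fully connected layers learn more abstract representations, and a soft-max
# maps the input to a class label. Input/class sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolution layers: linear filters followed by a non-linear activation.
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        # Pooling layer: down-samples the activation maps to reduce complexity.
        self.pool = nn.MaxPool2d(kernel_size=2)
        # Fully connected layers: learn more abstract representations.
        self.fc1 = nn.Linear(32 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(F.relu(self.conv1(x)))   # conv -> non-linearity -> pooling
        x = self.pool(F.relu(self.conv2(x)))   # second conv/pool block
        x = torch.flatten(x, start_dim=1)      # flatten the pooled activation maps
        x = F.relu(self.fc1(x))
        # Soft-max (via log-softmax) maps the representation to class scores.
        return F.log_softmax(self.fc2(x), dim=1)

if __name__ == "__main__":
    model = SimpleCNN(num_classes=10)
    dummy_batch = torch.randn(4, 1, 28, 28)    # 4 grayscale 28x28 images
    print(model(dummy_batch).shape)            # -> torch.Size([4, 10])

Running this sketch on a batch of random tensors yields a (batch, num_classes) matrix of log-probabilities, mirroring the soft-max mapping mentioned in the last sentence of the abstract.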

Biography:

Abed Benaichouche works at the Inception Institute of Artificial Intelligence, UAE.

Abed Benaichouche
