Convolutional neural networks (CNNs) consist of one or more convolutional layers topped by fully connected layers, together with the associated weights and pooling layers. This structure allows a CNN to exploit the two-dimensional structure of its input data. Compared with other deep architectures, convolutional neural networks have shown excellent results in image and speech applications. They can also be trained with the standard backpropagation algorithm and, because they have fewer parameters to estimate, are easier to train than other deep architectures.

The figure shows a schematic of a convolutional neural network. The input image is convolved with three trainable filters, producing three feature maps in layer C1. Each feature map is then weighted, offset by a bias, and passed through a sigmoid function to yield the three feature maps of layer S2. These maps are filtered again to obtain layer C3, and this stage produces S4 from C3 in the same way that S2 was produced from C1. Finally, the resulting pixel values are rasterized and concatenated into a single vector, which is fed into a conventional neural network to produce the output.

Here, the C layers are feature extraction layers: the input of each neuron is connected to a local receptive field in the previous layer, from which it extracts a local feature; once that local feature has been extracted, its positional relationship to the other features is also fixed. The S layers are subsampling (downsampling) layers: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in the same plane share the same weights. The feature-mapping structure uses a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, which gives the feature maps shift invariance.
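The layout just described can be written down compactly in code. The following is a minimal sketch, assuming PyTorch is available; the filter counts (three maps at C1/S2, six at C3/S4), the 5x5 kernels, the 32x32 single-channel input, and the ten output classes are illustrative assumptions rather than values given in the text.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """LeNet-style network following the C1 -> S2 -> C3 -> S4 -> FC layout."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # C1: convolve the input image with three trainable filters.
            nn.Conv2d(in_channels=1, out_channels=3, kernel_size=5),
            nn.Sigmoid(),                  # sigmoid activation, as in the text
            # S2: subsampling layer; halves each feature map.
            nn.AvgPool2d(kernel_size=2),
            # C3: filter the S2 maps again (six output maps is an assumption).
            nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5),
            nn.Sigmoid(),
            # S4: produced from C3 the same way S2 was produced from C1.
            nn.AvgPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                  # rasterize the S4 maps into one vector
            nn.Linear(6 * 5 * 5, num_classes),  # conventional fully connected net
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step with standard backpropagation on random stand-in data.
model = SimpleCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

images = torch.randn(8, 1, 32, 32)   # batch of 32x32 single-channel images
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()                      # backpropagation through the whole stack
optimizer.step()
print(loss.item(), model(images).shape)   # scalar loss, torch.Size([8, 10])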