CN109887006A - A method for accelerating neural network operations based on the frame difference method - Google Patents

A method for accelerating neural network operations based on the frame difference method

Info

Publication number
CN109887006A
Authority
CN
China
Prior art keywords
frame
layer
neural network
linear matrix
network
Prior art date
Legal status
Pending
Application number
CN201910086650.5A
Other languages
Chinese (zh)
Inventor
钟天浪
钟宇清
黄磊
杨常星
莫冬春
宋蕴
胡俊
陈伟
Current Assignee
Hangzhou National Chip Science & Technology Co Ltd
Original Assignee
Hangzhou National Chip Science & Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou National Chip Science & Technology Co Ltd
Priority to CN201910086650.5A
Publication of CN109887006A
Legal status: Pending


Abstract

The present invention relates to a method for accelerating neural network operations based on the frame difference method. The method runs a reference frame through every layer of the neural network and saves the input values and results of each layer's linear matrix operations. The processing of every frame other than the reference frame is split into a linear matrix operation part and a nonlinear operation part. Linear matrix operation part: the k-th frame x_{n,k} of the continuous input sequence of the neural network is subtracted by the frame m steps earlier, x_{n,k-m}, and a small-value reset transformation is applied to obtain Δx_{n,k}; Δx_{n,k} is used as the input of layer n to obtain the linear matrix operation output Δy_{n,k}, from which the linear output result y_{n,k} of layer n for frame k is obtained, and then the nonlinear operation output result y'_{n,k} of layer n; y'_{n,k} - y'_{n,k-m}, after the small-value reset transformation, serves as the frame difference Δx_{n+1,k} for the linear matrix operation of layer n+1 of the neural network, which finally yields the linear matrix operation output result y_{n+1,k} of layer n+1 for frame k; and so on, until the final output. The nonlinear operation part of the neural network is computed by the conventional method. The method of the present invention accelerates neural network operations and can reduce the neural network computation time.

Description

A method for accelerating neural network operations based on the frame difference method
Technical field
The invention belongs to the field of neural network technology, and in particular relates to a method for accelerating neural network operations based on the frame difference method.
Background technique
The frame difference method, also known as the frame differencing method or temporal differencing method, is generally used for motion detection in image sequences. It is a moving-target detection method that obtains the motion regions in an image by taking the grayscale difference between frames of a temporally continuous video sequence and thresholding the result. Its basic idea is to exploit the fact that pixel gray values of the current frame and an adjacent frame are similar in static regions and different in moving regions: a difference operation on the two frames yields a difference image. If there is a moving target in the image, the motion causes large pixel gray-value differences between the two adjacent frames; if there is no moving target, the differences are small. By setting a threshold and keeping only the pixels whose gray-value difference is large, the contour of the foreground moving object can be obtained from the difference image. Computing the difference requires traversing all pixels of the image, so the frame difference method is pixel-level. The calculation formula of the frame difference method is as follows:
D_i(x, y) = 255 if |I_i(x, y) - I_{i-1}(x, y)| > T, and D_i(x, y) = 0 otherwise; where D_i(x, y) denotes the motion region of the i-th frame image, T denotes the judgment threshold, and I_i(x, y) denotes the pixel value of the i-th frame image. Pixels with gray level 255 represent the foreground moving object, and pixels with gray level 0 represent the background region.
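As a minimal illustration of this pixel-level scheme (the array sizes, threshold value and function name below are the editor's, not taken from the patent):

```python
import numpy as np

def frame_difference_mask(frame_prev, frame_curr, T=25):
    """Binary motion mask by frame differencing: 255 = moving foreground, 0 = background."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return np.where(diff > T, 255, 0).astype(np.uint8)

# Illustrative usage on two synthetic 8-bit grayscale frames.
prev = np.random.randint(0, 200, (120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:80] += 50          # simulate a moving patch
mask = frame_difference_mask(prev, curr, T=25)
```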
The advantages of the frame difference method are: the algorithm is simple to implement and has low programming complexity; it is relatively insensitive to scene changes such as lighting, can adapt to various dynamic environments, and has good stability. Its disadvantages are: it cannot extract the complete region of an object, only its boundary, and it depends on the chosen inter-frame time interval. For a fast-moving object a small time interval should be chosen; if the interval is chosen poorly and the object does not overlap between the two frames, it may be detected as two separate objects. For a slow-moving object a larger time interval should be chosen; if the interval is inappropriate and the object almost completely overlaps between the two frames, the object may not be detected at all.
Sparse matrix-vector multiplication (SpMV) is the core of many engineering and scientific computations. Because of the sparsity of the non-zero elements, its computational density is low and its efficiency is poor. For this reason, platforms such as GPU, CPU, FPGA and NPU provide dedicated acceleration libraries for sparse matrix-vector multiplication, such as MKL and cuDNN, which improve performance mainly by optimizing the storage structure and the computation structure; speedups in the range of 10.3x to 74.0x, or even higher, can be obtained.
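A small sketch of the kind of library support referred to here, using SciPy's sparse CSR format as a stand-in for MKL or cuDNN (the matrix size and density are made up for illustration):

```python
import numpy as np
from scipy import sparse

# A matrix with ~5% non-zeros, stored in CSR form so that only non-zeros are kept and multiplied.
rng = np.random.default_rng(0)
A_dense = rng.standard_normal((4096, 4096))
A_dense[rng.random((4096, 4096)) > 0.05] = 0.0
A_csr = sparse.csr_matrix(A_dense)

x = rng.standard_normal(4096)
y = A_csr @ x        # sparse matrix-vector product; cost scales with the number of non-zeros
```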
A neural network is a computational model composed of a large number of interconnected nodes (or neurons). Each node represents a particular output function, called an activation function. Each connection between two nodes carries a weighted value for the signal passing through it, called a weight. The output of the network differs according to the way the network is connected, the weight values and the activation functions. The network itself is usually an approximation of some algorithm or function in nature, or an expression of a logical strategy. The core of neural network computation is matrix-vector multiply-accumulate operations.
In recent years, as research on neural network algorithms has continued to deepen, their accuracy has surpassed all conventional machine learning algorithms in many applications. Neural network algorithms are gradually replacing traditional algorithms and are beginning to be deployed on small, low-compute devices such as embedded systems. Although neural network algorithms perform well in accuracy, their computation is very heavy; in image and video detection and recognition in particular, the computational overhead is excessive and the running speed on embedded devices is very slow. Since the core computation of a neural network is matrix multiply-accumulate, if the sparsity of the neural network's data or weights can be increased, devices or computation libraries that support accelerated sparse matrix-vector operations can be used to accelerate the computation.
Summary of the invention
The object of the invention is to provide a method for accelerating neural network operations based on the frame difference method. The method exploits the redundancy of information between frames of a sequence to reduce redundant computation and thereby accelerate neural network operations.
In the method of the present invention the neural network is divided into layers. A reference frame is processed conventionally through every layer of the neural network, and the input values and results of the linear matrix operations in each layer are saved.
For each frame other than the reference frame, the processing of each layer is split into a linear matrix operation part and a nonlinear operation part, as follows:
For the linear matrix operation part of layer n of the neural network: first the k-th frame x_{n,k} of the continuous input sequence of the neural network is subtracted by the frame m steps earlier, x_{n,k-m}, and the small-value reset transformation is applied to obtain the frame difference Δx_{n,k}; then Δx_{n,k} is used as the input of layer n of the neural network to obtain the linear matrix operation output Δy_{n,k}; finally the linear matrix operation output result y_{n,k} of layer n for frame k is obtained, with k ≥ 2, m ≥ 1. This is expressed as follows:
y_{n,k-m} = f(x_{n,k-m}), Δx_{n,k} = x_{n,k} - x_{n,k-m}, Δy_{n,k} = f(Δx_{n,k}), y_{n,k} = Δy_{n,k} + y_{n,k-m};
Then the nonlinear operation output result y'_{n,k} of layer n is obtained: y'_{n,k} = f'(y_{n,k});
f denotes the linear matrix operation of the neural network and f' denotes the nonlinear operation of the neural network;
y'_{n,k} - y'_{n,k-m}, after the small-value reset transformation, serves as the frame difference Δx_{n+1,k} of the linear matrix operation of layer n+1 of the neural network, which finally yields the linear matrix operation output result y_{n+1,k} of layer n+1 for frame k. And so on, until the final output of the neural network for frame k.
The nonlinear operation part of layer n of the neural network is computed by the conventional method.
The small-value reset transformation refers to setting to 0 every value in the input that is greater than ε_1 and less than ε_2, where ε_1 and ε_2 are set thresholds with ε_1 ≤ 0, ε_2 ≥ 0 and ε_1 < ε_2.
The neural network includes deep neural networks (Deep Neural Network, DNN), convolutional neural networks (Convolutional Neural Networks, CNN), autoencoders (Autoencoder) and recurrent neural networks (Recurrent Neural Network, RNN).
The reference frame refers to an original frame that has not been processed by the frame difference method, or to the per-layer operation results obtained by passing that original frame through the neural network computation.
The linear matrix operation of the neural network must, in the specific implementation, be realized with a hardware device or mathematical operation library that supports accelerated sparse matrix operations.
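A minimal per-layer sketch of this scheme under stated assumptions: a fully connected linear layer stored as a NumPy weight matrix, illustrative thresholds ε_1 = -0.01 and ε_2 = 0.01, and function names chosen by the editor rather than taken from the patent. A dedicated sparse kernel or library would exploit the zeros that the small-value reset creates in the frame difference:

```python
import numpy as np

def small_value_reset(delta, eps1=-0.01, eps2=0.01):
    """Set to 0 every entry strictly between eps1 and eps2 (eps1 <= 0 <= eps2)."""
    out = delta.copy()
    out[(out > eps1) & (out < eps2)] = 0.0
    return out

def linear_layer_delta(W, x_k, x_prev, y_prev, eps1=-0.01, eps2=0.01):
    """Frame-difference update of one linear layer y = W @ x.

    x_prev, y_prev are the saved input and linear output from the earlier frame (k - m).
    A sparse-matrix kernel would only touch the non-zero entries of dx.
    """
    dx = small_value_reset(x_k - x_prev, eps1, eps2)   # mostly-zero frame difference
    dy = W @ dx                                        # linear operation applied to the delta only
    return dy + y_prev                                 # y_{n,k} = Δy_{n,k} + y_{n,k-m}
```

The saved pair (x_prev, y_prev) is exactly what the reference-frame pass is required to store for every linear layer.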
The effect of the invention is that, by reusing the information that similar sequences have in common and taking frame differences between similar sequences, the sparsity of the matrices is increased; an acceleration library is then used to speed up the sparse matrix multiplications, achieving the goal of accelerating neural network operations and greatly reducing the neural network computation time.
Description of the drawings
Fig. 1 is a flow chart of a deep neural network according to the invention based on the frame difference method;
Fig. 2 is a flow chart of a convolutional neural network according to the invention based on the frame difference method;
Fig. 3 is a schematic diagram of the autoencoder structure in an embodiment of the invention;
Fig. 4 is a schematic diagram of the recurrent neural network structure in an embodiment of the invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings and embodiments.
Fig. 1 is the flow chart of the deep neural network. As shown in Fig. 1, the 1st frame serves as the reference frame: it is processed conventionally through every layer of the network, and the input values and results of the linear matrix operations in each layer are saved. For frame k-1, the linear matrix operation output of hidden layer 1 is y_{1_1,k-1}, the linear matrix operation input of hidden layer 2 is y_{1_2,k-1}, the linear matrix operation output of hidden layer 2 is y_{2_1,k-1}, the linear matrix operation input of the output layer is y_{2_2,k-1}, the linear matrix operation output of the output layer is y_{3_1,k-1}, and the nonlinear output of the output layer is y_{3_2,k-1}. The calculation flow of each layer for frame k is as follows:
Compute the frame difference of the two input frames: Δx_{1,k} = x_{1,k} - x_{1,k-1};
The frame difference goes through the first fully connected layer, giving the linear matrix operation output of hidden layer 1: Δy_{1_1,k} = W_1 Δx_{1,k};
Hidden layer 1 is activated, giving the linear matrix operation input of hidden layer 2: y_{1_2,k} = f'(y_{1_1,k-1} + Δy_{1_1,k});
The difference goes through the second fully connected layer, giving the linear matrix operation output of hidden layer 2:
Δy_{2_1,k} = W_2 (y_{1_2,k} - y_{1_2,k-1});
Hidden layer 2 is activated, giving the linear matrix operation input of the output layer: y_{2_2,k} = f'(y_{2_1,k-1} + Δy_{2_1,k});
The difference goes through the third fully connected layer, giving the linear matrix operation output of the output layer:
Δy_{3_1,k} = W_3 (y_{2_2,k} - y_{2_2,k-1});
The output layer is activated, giving the final output value of frame k: y_{3_2,k} = f'(y_{3_1,k-1} + Δy_{3_1,k});
W denotes the network weights, f' denotes the nonlinear operation, and y_{3_2,k} is the final output value y_k of frame k.
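A compact sketch of this Fig. 1 flow for a three-layer fully connected network, written to mirror the update equations above; the layer sizes, the use of ReLU as f', and the cache layout are the editor's illustrative assumptions, and the small-value reset transform is omitted for brevity:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)      # stands in for the nonlinear operation f'

def dnn_reference_pass(x, weights):
    """Conventional pass over the reference frame; returns the per-layer values to cache."""
    cache, a = {"x": x}, x
    for i, W in enumerate(weights, start=1):
        lin = W @ a                      # linear matrix operation output of layer i
        a = relu(lin)                    # nonlinear output fed to the next layer
        cache[f"lin{i}"], cache[f"act{i}"] = lin, a
    return cache

def dnn_delta_pass(x_k, weights, cache):
    """Frame-difference pass for frame k, reusing the previous frame's cached values."""
    d, new_cache = x_k - cache["x"], {"x": x_k}
    for i, W in enumerate(weights, start=1):
        lin = cache[f"lin{i}"] + W @ d                 # y_{i,k} = y_{i,k-1} + W_i * (difference)
        a = relu(lin)                                  # activation computed conventionally
        d = a - cache[f"act{i}"]                       # difference passed on to the next layer
        new_cache[f"lin{i}"], new_cache[f"act{i}"] = lin, a
    return a, new_cache

# Illustrative usage: two hidden layers plus an output layer, random weights and frames.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 128)),
           rng.standard_normal((32, 64)),
           rng.standard_normal((10, 32))]
cache = dnn_reference_pass(rng.standard_normal(128), weights)            # frame 1 (reference)
y_k, cache = dnn_delta_pass(rng.standard_normal(128), weights, cache)    # any later frame k
```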
The network structure of a convolutional neural network (CNN) includes layers such as convolutional layers (Conv), activation layers (Activation), pooling layers (Pooling), fully connected layers (FC) and batch normalization layers (BN). Convolutional layers, fully connected layers and the like are linear layers; activation layers, pooling layers, batch normalization layers and the like are nonlinear layers.
In the method for accelerating neural network operations based on the frame difference method, the neural network is divided into layers. A reference frame of each layer is processed conventionally through every layer of the neural network, and the input values and results of the linear matrix operations in each layer are saved; the processing of every frame other than the reference frame in each layer is split into a linear matrix operation part and a nonlinear operation part.
For the linear matrix operation part of layer n of the convolutional neural network: first the k-th frame x_{n,k} of the continuous input sequence of the convolutional neural network is subtracted by the frame m steps earlier, x_{n,k-m}, and the small-value reset transformation is applied to obtain the frame difference Δx_{n,k}; then Δx_{n,k} is used as the input of layer n of the neural network to obtain the linear matrix operation output Δy_{n,k}; finally the linear matrix operation output result y_{n,k} of layer n for frame k is obtained, with k ≥ 2, m ≥ 1. This is expressed as follows:
y_{n,k-m} = f(x_{n,k-m}), Δx_{n,k} = x_{n,k} - x_{n,k-m}, Δy_{n,k} = f(Δx_{n,k}), y_{n,k} = Δy_{n,k} + y_{n,k-m};
Then the nonlinear operation output result y'_{n,k} of layer n is obtained: y'_{n,k} = f'(y_{n,k});
y'_{n,k} - y'_{n,k-m}, after the small-value reset transformation, serves as the frame difference Δx_{n+1,k} of the linear matrix operation of layer n+1 of the neural network, which finally yields the linear matrix operation output result y_{n+1,k} of layer n+1 for frame k; and so on, until the final output of the neural network for frame k. The first frame serves as the initial frame, and the original image is used directly as its input.
The nonlinear operation part of layer n of the convolutional neural network is computed by the conventional method.
As shown in Fig. 2, denote the linear matrix operation output of layer 1 for frame k-1 as y_{1_1,k-1}, the output of the layer-1 nonlinear layer as y_{1_2,k-1}, the linear matrix operation output of layer 2 as y_{2_1,k-1}, the output of the layer-2 nonlinear layer as y_{2_2,k-1}, and the linear matrix operation output of layer 3 as y_{3_1,k-1}.
The calculation flow for frame k is as follows:
Compute the frame difference of the two frames (as in the figure): Δx_{1,k} = x_{1,k} - x_{1,k-1};
The frame difference goes through the first convolution operation, giving the layer-1 linear matrix operation output: Δy_{1_1,k} = W_1 Δx_{1,k};
The first nonlinear operation is performed, giving the layer-1 nonlinear output: y_{1_2,k} = f'(y_{1_1,k-1} + Δy_{1_1,k});
The difference goes through the second convolution operation, giving the layer-2 linear matrix operation output:
Δy_{2_1,k} = W_2 (y_{1_2,k} - y_{1_2,k-1});
The second nonlinear operation is performed, giving the layer-2 nonlinear output: y_{2_2,k} = f'(y_{2_1,k-1} + Δy_{2_1,k});
The difference goes through the fully connected layer, giving the layer-3 linear matrix operation output: Δy_{3_1,k} = W_3 (y_{2_2,k} - y_{2_2,k-1});
Δy_{3_1,k} + y_{3_1,k-1} gives the output value y_k of frame k.
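Because convolution is linear in its input, conv(x_{k-1}) + conv(Δx) equals conv(x_k), which is why the same delta update is valid for convolutional layers. A small numerical check of that property (kernel and image sizes are arbitrary, and scipy.signal.convolve2d merely stands in for the network's convolutional layer):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
kernel = rng.standard_normal((3, 3))
x_prev = rng.standard_normal((32, 32))                      # frame k-1
x_curr = x_prev + 0.1 * (rng.random((32, 32)) > 0.9)        # frame k, differing in a few pixels

y_prev = convolve2d(x_prev, kernel, mode="same")            # cached from the previous frame
dy = convolve2d(x_curr - x_prev, kernel, mode="same")       # convolution of the frame difference
y_direct = convolve2d(x_curr, kernel, mode="same")          # conventional computation

assert np.allclose(y_prev + dy, y_direct)   # linearity: conv(x_k) == conv(x_{k-1}) + conv(Δx)
```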
An autoencoder can be understood as a system that tries to reconstruct its original input, as shown in Fig. 3. The dashed blue box in Fig. 3 is an autoencoder model; it is composed of an encoder (Encoder) and a decoder (Decoder) and essentially applies certain transformations to the input signal.
The encoder transforms the input signal x_k into the encoded signal y_k, and the decoder converts the code y_k back into the output signal, i.e.:
y_k = f'(W_1 x_k + b_1); the decoder output is obtained analogously by applying W_2 and the nonlinear operation f' to y_k;
The present invention is mainly applied to the linear matrix operations therein. For frame k-1 of a similar sequence, denote the linear result of the encoder as t_{1,k-1}, the encoding result as y_{k-1}, and the linear result of the decoder as t_{2,k-1}.
The calculation flow for frame k is as follows:
Compute the frame difference of the two frames: Δx_k = x_k - x_{k-1};
The frame difference goes through the encoder linear operation: Δt_{1,k} = W_1 Δx_k;
Encoder nonlinear operation: y_k = f'(t_{1,k-1} + Δt_{1,k});
Decoder linear operation: Δt_{2,k} = W_2 (y_k - y_{k-1});
Decoder nonlinear operation: the final result of frame k is obtained as f'(t_{2,k-1} + Δt_{2,k}).
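A brief sketch of the state that must be carried between frames for this autoencoder variant: only the previous frame's input and the intermediate values t_{1,k-1}, y_{k-1} and t_{2,k-1} need to be stored. The weight shapes and the choice of tanh for the nonlinearity f' are the editor's illustrative assumptions:

```python
import numpy as np

f = np.tanh   # illustrative stand-in for the nonlinear operation f'

def autoencoder_reference(x, W1, b1, W2, b2):
    """Conventional pass for the reference frame; returns the state cached for later frames."""
    t1 = W1 @ x + b1
    y = f(t1)
    t2 = W2 @ y + b2
    return f(t2), {"x": x, "t1": t1, "y": y, "t2": t2}

def autoencoder_delta_step(x_k, W1, W2, state):
    """Frame-difference step: only the deltas pass through the linear weights."""
    t1 = state["t1"] + W1 @ (x_k - state["x"])    # encoder linear result for frame k
    y = f(t1)                                     # encoder nonlinearity, computed conventionally
    t2 = state["t2"] + W2 @ (y - state["y"])      # decoder linear result for frame k
    return f(t2), {"x": x_k, "t1": t1, "y": y, "t2": t2}
```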
A recurrent neural network (RNN) is a neural network for processing sequence data. A basic neural network only establishes weighted connections between layers; the biggest difference of an RNN is that weighted connections are also established between the neurons across time steps. There are many variants of RNN models; the classical RNN model structure is described here, see Fig. 4.
The left side of the upper part of Fig. 4 shows the RNN model before it is unrolled in time; unrolled in time sequence, it becomes the right-hand part of the figure. The right-hand part describes the RNN model near sequence index t. Here:
(1) x^{(t)} represents the input of the training sample at sequence index t; likewise, x^{(t-1)} and x^{(t+1)} represent the inputs of the training sample at sequence indices t-1 and t+1.
(2) h^{(t)} represents the hidden state of the model at sequence index t; h^{(t)} is determined jointly by x^{(t)} and h^{(t-1)}.
(3) o^{(t)} represents the output of the model at sequence index t; o^{(t)} is determined only by the model's current hidden state h^{(t)}.
(4) L^{(t)} represents the loss function of the model at sequence index t.
(5) y^{(t)} represents the true output of the training sample sequence at sequence index t.
(6) The three matrices U, W and V are the linear relationship parameters of the model; they are shared across the entire RNN network, which is quite different from a DNN. Precisely because they are shared, they embody the "recurrent feedback" idea of the RNN model. For any sequence index t, the hidden state h^{(t)} is obtained from x^{(t)} and h^{(t-1)}:
h^{(t)} = σ(z^{(t)}) = σ(U x^{(t)} + W h^{(t-1)} + b).
where σ is the activation function of the RNN and b is the bias of the linear relationship. The expression for the model output o^{(t)} at sequence index t is fairly simple:
o^{(t)} = V h^{(t)} + c;
The predicted output at sequence index t is finally: ŷ^{(t)} = σ(o^{(t)}).
Likewise, the acceleration method is applied only to the linear computations above, i.e. to the terms Ux, Vh and so on. Denote the intermediate quantity U x^{(t)} as u^{(t)} and V h^{(t)} as v^{(t)}. For the k-th frame input x^{(k)}, denote its frame difference with the previous frame input x^{(k-1)} as Δx. The hidden state h^{(k)} is then expressed as: h^{(k)} = σ(z^{(k)}) = σ((U Δx + u^{(k-1)}) + W h^{(k-1)} + b);
Subtracting the hidden state h^{(k-1)} of the previous frame from the hidden state h^{(k)} of this frame gives Δh, and the output o^{(k)} is expressed as:
o^{(k)} = (V Δh + v^{(k-1)}) + c;
The predicted output at the final sequence index is ŷ^{(k)} = σ(o^{(k)}), which gives the output result y^{(k)}.
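A short sketch of this recurrent variant under the same assumptions as the earlier examples (NumPy, tanh standing in for σ, and a cache holding u^{(k-1)}, v^{(k-1)}, h^{(k-1)} and x^{(k-1)} from the previous frame; the names are the editor's):

```python
import numpy as np

sigma = np.tanh   # stands in for the RNN activation σ

def rnn_reference_step(x, U, W, V, b, c, h_prev):
    """Conventional RNN step for the reference frame; returns the prediction and the cache."""
    u = U @ x
    h = sigma(u + W @ h_prev + b)
    v = V @ h
    return sigma(v + c), {"x": x, "u": u, "h": h, "v": v}

def rnn_delta_step(x_k, U, W, V, b, c, state):
    """Frame-difference RNN step; only the deltas go through U and V."""
    u_k = state["u"] + U @ (x_k - state["x"])    # U x^{(k)} via the delta: u^{(k-1)} + U Δx
    h_k = sigma(u_k + W @ state["h"] + b)        # hidden state for frame k
    v_k = state["v"] + V @ (h_k - state["h"])    # V h^{(k)} via the delta: v^{(k-1)} + V Δh
    return sigma(v_k + c), {"x": x_k, "u": u_k, "h": h_k, "v": v_k}
```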

Claims (5)

1. A method for accelerating neural network operations based on the frame difference method, in which the neural network is divided into layers, a reference frame of each layer is processed conventionally through every layer of the neural network, and the input values and results of the linear matrix operations in each layer are saved; characterized in that the processing of every frame other than the reference frame in each layer comprises a linear matrix operation part and a nonlinear operation part, as follows:
For the linear matrix operation part of layer n of the neural network: first the k-th frame x_{n,k} of the continuous input sequence of the neural network is subtracted by the frame m steps earlier, x_{n,k-m}, and the small-value reset transformation is applied to obtain the frame difference Δx_{n,k}; then Δx_{n,k} is used as the input of layer n of the neural network to obtain the linear matrix operation output Δy_{n,k}; finally the linear matrix operation output result y_{n,k} of layer n for frame k is obtained, with k ≥ 2, m ≥ 1; expressed as follows:
y_{n,k-m} = f(x_{n,k-m}), Δx_{n,k} = x_{n,k} - x_{n,k-m}, Δy_{n,k} = f(Δx_{n,k}), y_{n,k} = Δy_{n,k} + y_{n,k-m}; where f denotes the linear matrix operation of the neural network;
then the nonlinear operation output result y'_{n,k} of layer n is obtained: y'_{n,k} = f'(y_{n,k}); where f' denotes the nonlinear operation of the neural network;
y'_{n,k} - y'_{n,k-m}, after the small-value reset transformation, serves as the frame difference Δx_{n+1,k} of the linear matrix operation of layer n+1 of the neural network, which finally yields the linear matrix operation output result y_{n+1,k} of layer n+1 for frame k; and so on, until the final output of the neural network for frame k;
the nonlinear operation part of layer n of the neural network is computed by the conventional method.
2. The method for accelerating neural network operations based on the frame difference method according to claim 1, characterized in that: the small-value reset transformation refers to setting to 0 every value in the input that is greater than ε_1 and less than ε_2, where ε_1 and ε_2 are set thresholds with ε_1 ≤ 0, ε_2 ≥ 0 and ε_1 < ε_2.
3. The method for accelerating neural network operations based on the frame difference method according to claim 1, characterized in that: the neural network comprises deep neural networks, convolutional neural networks, autoencoders and recurrent neural networks.
4. The method for accelerating neural network operations based on the frame difference method according to claim 1, characterized in that: the reference frame refers to an original frame that has not been processed by the frame difference method, or to the per-layer operation results obtained by passing that original frame through the neural network computation.
5. The method for accelerating neural network operations based on the frame difference method according to claim 1, characterized in that: the linear matrix operation of the neural network is realized with a hardware device or mathematical operation library that supports accelerated sparse matrix operations.
CN201910086650.5A 2019-01-29 2019-01-29 A method for accelerating neural network operations based on the frame difference method Pending CN109887006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910086650.5A CN109887006A (en) 2019-01-29 2019-01-29 A method for accelerating neural network operations based on the frame difference method


Publications (1)

Publication Number Publication Date
CN109887006A (en) 2019-06-14

Family

ID=66927232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910086650.5A Pending CN109887006A (en) A method for accelerating neural network operations based on the frame difference method

Country Status (1)

Country Link
CN (1) CN109887006A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199963A1 (en) * 2012-10-23 2015-07-16 Google Inc. Mobile speech recognition hardware accelerator
CN105787867A (en) * 2016-04-21 2016-07-20 华为技术有限公司 Method and apparatus for processing video images based on neural network algorithm
CN107871159A (en) * 2016-09-23 2018-04-03 三星电子株式会社 The method of neutral net equipment and operation neutral net equipment
CN108388834A (en) * 2017-01-24 2018-08-10 福特全球技术公司 The object detection mapped using Recognition with Recurrent Neural Network and cascade nature
CN108986787A (en) * 2017-05-31 2018-12-11 英特尔公司 Use the feature extraction of neural network accelerator

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110673104A (en) * 2019-08-22 2020-01-10 西安电子科技大学 External radiation source radar real-time signal processing method and system based on CPU architecture
CN110826710A (en) * 2019-10-18 2020-02-21 南京大学 Hardware acceleration implementation system and method of RNN forward propagation model based on transverse pulse array
CN111010493A (en) * 2019-12-12 2020-04-14 清华大学 Method and device for video processing by using convolutional neural network
CN111010493B (en) * 2019-12-12 2021-03-02 清华大学 Method and device for video processing by using convolutional neural network
CN111524069A (en) * 2020-04-16 2020-08-11 杭州国芯科技股份有限公司 Design method of image interpolation convolution kernel
CN112116912A (en) * 2020-09-23 2020-12-22 平安国际智慧城市科技股份有限公司 Data processing method, device, equipment and medium based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190614