Contents
1. Forward Propagation
2. Backward Propagation

This is a write-up of the inverted dropout exercise from Andrew Ng's deep learning course assignments; inverted dropout is a very popular regularization technique. The post records how the forward and backward passes of inverted dropout are implemented. Both listings are code snippets and cannot run on their own; for the complete code, please see the Week 1 assignment of Course 2 of the specialization. A short sketch of the core idea follows, then the two assignment functions.
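The core idea of inverted dropout fits in a few lines: draw a random mask in which each unit survives with probability keep_prob, zero out the masked activations, and divide the survivors by keep_prob so the expected value of the layer's output is unchanged. A minimal standalone sketch of that idea (my own illustration, not part of the assignment code):

import numpy as np

def inverted_dropout(A, keep_prob):
    # Mask: each unit is kept with probability keep_prob
    D = np.random.rand(*A.shape) < keep_prob
    # Shut down the dropped units, then rescale the survivors so the
    # expected activation matches the network without dropout
    return (A * D) / keep_prob, D

The mask D has to be cached, because the backward pass must shut down exactly the same units and apply the same 1/keep_prob scaling to the gradients.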
1. Forward Propagation

import numpy as np

def forward_propagation_with_dropout(X, parameters, keep_prob=0.5):
    """
    Implements the forward propagation:
    LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1, 1)
    cache -- tuple, information stored for computing the backward propagation
    """

    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)
    # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = np.random.rand(A1.shape[0], A1.shape[1])
    # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    D1 = D1 < keep_prob
    # Step 3: shut down some neurons of A1
    A1 = A1 * D1
    # Step 4: scale the value of neurons that haven't been shut down
    A1 = A1 / keep_prob
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = np.random.rand(A2.shape[0], A2.shape[1])
    # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    D2 = D2 < keep_prob
    # Step 3: shut down some neurons of A2
    A2 = A2 * D2
    # Step 4: scale the value of neurons that haven't been shut down
    A2 = A2 / keep_prob
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
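The forward snippet assumes that relu and sigmoid are already in scope; in the assignment they come from a helper file provided with the notebook. If you want to run the listing outside the notebook, minimal NumPy stand-ins like the following should be compatible (my own sketch, not the course helpers):

import numpy as np

def relu(Z):
    # Element-wise ReLU activation
    return np.maximum(0, Z)

def sigmoid(Z):
    # Element-wise logistic sigmoid activation
    return 1 / (1 + np.exp(-Z))

Note that the dropout masks D1 and D2 are stored in the cache; the backward pass below reuses them so that the same neurons are shut down in both directions.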
2. Backward Propagation

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- true labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter,
                 activation and pre-activation variables
    """

    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 * D2
    # Step 2: Scale the value of neurons that haven't been shut down
    dA2 = dA2 / keep_prob
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 * D1
    # Step 2: Scale the value of neurons that haven't been shut down
    dA1 = dA1 / keep_prob
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
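To see the two functions work together, a hypothetical smoke test could look like the following; the parameter shapes match the docstring above, the toy data X and Y are made up for illustration, and relu/sigmoid are assumed to be the stand-ins sketched earlier:

import numpy as np

np.random.seed(3)
X = np.random.randn(2, 5)                      # 5 toy examples with 2 features each
Y = (np.random.rand(1, 5) > 0.5).astype(int)   # random binary labels

# Randomly initialized parameters with the shapes listed in the docstring
parameters = {
    "W1": np.random.randn(20, 2) * 0.01, "b1": np.zeros((20, 1)),
    "W2": np.random.randn(3, 20) * 0.01, "b2": np.zeros((3, 1)),
    "W3": np.random.randn(1, 3) * 0.01,  "b3": np.zeros((1, 1)),
}

A3, cache = forward_propagation_with_dropout(X, parameters, keep_prob=0.8)
gradients = backward_propagation_with_dropout(X, Y, cache, keep_prob=0.8)
print(A3.shape)                 # (1, 5), one prediction per example
print(gradients["dW1"].shape)   # (20, 2), same shape as W1

Keep in mind that dropout is only applied during training; at test time the plain forward propagation (no masks, no scaling) is used.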