1--Preface

Taking the open-source project of the paper "High-Resolution Image Synthesis with Latent Diffusion Models" as an example, this post dissects the classic components of Stable Diffusion, to consolidate what was learned and deepen the impression.

2--UNetModel

A small demo that can be stepped through in a debugger: SD_UNet

Taking text-to-image generation as the example, we walk through the core building blocks of UNetModel.

2-1--Forward Overview

In the provided text-to-image demo, only three arguments are actually passed in: x, timesteps and context, where:

        x is the randomly initialized noise tensor, shape [B*2, 4, 64, 64]; the *2 comes from Classifier-Free Diffusion Guidance.
        timesteps is the timestep passed in at each denoising round, shape [B*2].
        context is the text prompt after CLIP encoding, shape [B*2, 77, 768].

# th is torch (openaimodel.py imports it as `import torch as th`)
def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
    """
    Apply the model to an input batch.
    :param x: an [N x C x ...] Tensor of inputs.
    :param timesteps: a 1-D batch of timesteps.
    :param context: conditioning plugged in via crossattn
    :param y: an [N] Tensor of labels, if class-conditional.
    :return: an [N x C x ...] Tensor of outputs.
    """
    assert (y is not None) == (self.num_classes is not None), \
        "must specify y if and only if the model is class-conditional"
    hs = []
    t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)  # create sinusoidal timestep embeddings
    emb = self.time_embed(t_emb)  # MLP
    if self.num_classes is not None:
        assert y.shape == (x.shape[0],)
        emb = emb + self.label_emb(y)
    h = x.type(self.dtype)
    for module in self.input_blocks:
        h = module(h, emb, context)
        hs.append(h)
    h = self.middle_block(h, emb, context)
    for module in self.output_blocks:
        h = th.cat([h, hs.pop()], dim=1)
        h = module(h, emb, context)
    h = h.type(x.dtype)
    if self.predict_codebook_ids:
        return self.id_predictor(h)
    else:
        return self.out(h)

2-2--Generating the Timestep Embedding

The functions timestep_embedding() and self.time_embed() positionally encode the incoming timestep, producing sinusoidal timestep embeddings. timestep_embedding() is defined below, while self.time_embed() is a small MLP.

import math
import torch
from einops import repeat

def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
    """
    Create sinusoidal timestep embeddings.
    :param timesteps: a 1-D Tensor of N indices, one per batch element. These may be fractional.
    :param dim: the dimension of the output.
    :param max_period: controls the minimum frequency of the embeddings.
    :return: an [N x dim] Tensor of positional embeddings.
    """
    if not repeat_only:
        half = dim // 2
        freqs = torch.exp(
            -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
        ).to(device=timesteps.device)
        args = timesteps[:, None].float() * freqs[None]
        embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
        if dim % 2:
            embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
    else:
        embedding = repeat(timesteps, 'b -> b d', d=dim)
    return embedding

self.time_embed = nn.Sequential(
    linear(model_channels, time_embed_dim),
    nn.SiLU(),
    linear(time_embed_dim, time_embed_dim),
)
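As a quick sanity check, the function above can be called directly. This is a minimal usage sketch, assuming the timestep_embedding() definition and imports above are in scope and using model_channels = 320 as in the demo; in this configuration time_embed_dim = 4 * model_channels = 1280, which matches the Linear(in_features=1280, ...) layers appearing in the module printouts below.

t = torch.tensor([0, 500, 999])       # three example timesteps
emb = timestep_embedding(t, dim=320)  # first half cosine terms, second half sine terms
print(emb.shape)                      # torch.Size([3, 320])
# self.time_embed then maps 320 -> 1280 before the embedding is handed to every ResBlock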
2-3--self.input_blocks (Downsampling)

In forward(), self.input_blocks progressively downsamples the input noise; across the downsampling path the tensor shape changes from [B*2, 4, 64, 64] to [B*2, 1280, 8, 8] (a shape-trace sketch follows the printout below).

The downsampling path contains 12 modules in total, structured as follows:

ModuleList(
  (0): TimestepEmbedSequential(
    (0): Conv2d(4, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  )
  (1-2): 2 x TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=320, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 320, eps=1e-06, affine=True)
      (proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=320, out_features=320, bias=False)
            (to_k): Linear(in_features=320, out_features=320, bias=False)
            (to_v): Linear(in_features=320, out_features=320, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=320, out_features=320, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=320, out_features=2560, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=1280, out_features=320, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=320, out_features=320, bias=False)
            (to_k): Linear(in_features=768, out_features=320, bias=False)
            (to_v): Linear(in_features=768, out_features=320, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=320, out_features=320, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (3): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(320, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (4): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(320, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=640, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Conv2d(320, 640, kernel_size=(1, 1), stride=(1, 1))
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 640, eps=1e-06, affine=True)
      (proj_in): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=640, out_features=640, bias=False)
            (to_v): Linear(in_features=640, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=640, out_features=5120, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=2560, out_features=640, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=768, out_features=640, bias=False)
            (to_v): Linear(in_features=768, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (5): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=640, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 640, eps=1e-06, affine=True)
      (proj_in): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=640, out_features=640, bias=False)
            (to_v): Linear(in_features=640, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=640, out_features=5120, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=2560, out_features=640, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=640, out_features=640, bias=False)
            (to_k): Linear(in_features=768, out_features=640, bias=False)
            (to_v): Linear(in_features=768, out_features=640, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=640, out_features=640, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((640,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(640, 640, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (6): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(640, 640, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (7): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 640, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(640, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Conv2d(640, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 1280, eps=1e-06, affine=True)
      (proj_in): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=1280, out_features=1280, bias=False)
            (to_v): Linear(in_features=1280, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=1280, out_features=10240, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=5120, out_features=1280, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=768, out_features=1280, bias=False)
            (to_v): Linear(in_features=768, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (8): TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
    (1): SpatialTransformer(
      (norm): GroupNorm(32, 1280, eps=1e-06, affine=True)
      (proj_in): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
      (transformer_blocks): ModuleList(
        (0): BasicTransformerBlock(
          (attn1): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=1280, out_features=1280, bias=False)
            (to_v): Linear(in_features=1280, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (ff): FeedForward(
            (net): Sequential(
              (0): GEGLU(
                (proj): Linear(in_features=1280, out_features=10240, bias=True)
              )
              (1): Dropout(p=0.0, inplace=False)
              (2): Linear(in_features=5120, out_features=1280, bias=True)
            )
          )
          (attn2): CrossAttention(
            (to_q): Linear(in_features=1280, out_features=1280, bias=False)
            (to_k): Linear(in_features=768, out_features=1280, bias=False)
            (to_v): Linear(in_features=768, out_features=1280, bias=False)
            (to_out): Sequential(
              (0): Linear(in_features=1280, out_features=1280, bias=True)
              (1): Dropout(p=0.0, inplace=False)
            )
          )
          (norm1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        )
      )
      (proj_out): Conv2d(1280, 1280, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (9): TimestepEmbedSequential(
    (0): Downsample(
      (op): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    )
  )
  (10-11): 2 x TimestepEmbedSequential(
    (0): ResBlock(
      (in_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (h_upd): Identity()
      (x_upd): Identity()
      (emb_layers): Sequential(
        (0): SiLU()
        (1): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (out_layers): Sequential(
        (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
        (1): SiLU()
        (2): Dropout(p=0, inplace=False)
        (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (skip_connection): Identity()
    )
  )
)
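The shape progression above can be reproduced with a short debugging loop. This is a hedged sketch, not code from the repo: it assumes model is the UNetModel instance loaded in the SD_UNet demo, and the input tensors simply mimic the shapes described in 2-1 (random values are enough for a shape trace).

import torch

h = torch.randn(2, 4, 64, 64)        # x with B*2 = 2
emb = torch.randn(2, 1280)           # stand-in for the output of self.time_embed
context = torch.randn(2, 77, 768)    # stand-in for the CLIP-encoded prompt
with torch.no_grad():
    for i, module in enumerate(model.input_blocks):   # model: assumed UNetModel instance
        h = module(h, emb, context)
        print(i, tuple(h.shape))     # last line printed: 11 (2, 1280, 8, 8)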
All 12 modules are wrapped in the TimestepEmbedSequential class, which, depending on the type of each child layer, combines the input noise x with the timestep embedding or with the prompt context.

class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
    """
    A sequential module that passes timestep embeddings to the children that
    support it as an extra input.
    """

    def forward(self, x, emb, context=None):
        for layer in self:
            if isinstance(layer, TimestepBlock):
                x = layer(x, emb)
            elif isinstance(layer, SpatialTransformer):
                x = layer(x, context)
            else:
                x = layer(x)
        return x
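For a module that pairs a ResBlock with a SpatialTransformer (for example input_blocks[1], printed in 2-3-2 below), the dispatch above is equivalent to the following unrolled calls. A minimal sketch, reusing the assumed model, emb and context from the previous sketch:

h0 = model.input_blocks[0](torch.randn(2, 4, 64, 64), emb, context)  # Module 0: plain conv, ignores emb/context
resblock, transformer = model.input_blocks[1]
h1 = resblock(h0, emb)           # TimestepBlock branch: consumes the timestep embedding
h1 = transformer(h1, context)    # SpatialTransformer branch: consumes the prompt context
# equivalent to: model.input_blocks[1](h0, emb, context)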
2-3-1--Module 0

Module 0 is a 2D convolution layer that mainly extracts features from the input noise:

# __init__
self.input_blocks = nn.ModuleList(
    [TimestepEmbedSequential(conv_nd(dims, in_channels, model_channels, 3, padding=1))]
)

# print(self.input_blocks[0])
TimestepEmbedSequential(
  (0): Conv2d(4, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
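Since conv_nd(dims=2, ...) resolves to a plain nn.Conv2d in this codebase, the effect of Module 0 on tensor shapes can be checked in isolation. A self-contained sketch (the stem name is only illustrative; values follow the printout above):

import torch
import torch.nn as nn

stem = nn.Conv2d(4, 320, kernel_size=3, stride=1, padding=1)  # stand-in for input_blocks[0]
x = torch.randn(2, 4, 64, 64)
print(stem(x).shape)  # torch.Size([2, 320, 64, 64]) -- channels 4 -> 320, spatial size unchanged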
2-3-2--Module 1 and Module 2

Module 1 and Module 2 share the same structure; each consists of a ResBlock followed by a SpatialTransformer:

# __init__
for _ in range(num_res_blocks):
    layers = [
        ResBlock(
            ch,
            time_embed_dim,
            dropout,
            out_channels=mult * model_channels,
            dims=dims,
            use_checkpoint=use_checkpoint,
            use_scale_shift_norm=use_scale_shift_norm,
        )
    ]
    ch = mult * model_channels
    if ds in attention_resolutions:
        if num_head_channels == -1:
            dim_head = ch // num_heads
        else:
            num_heads = ch // num_head_channels
            dim_head = num_head_channels
        if legacy:
            # num_heads = 1
            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
        layers.append(
            AttentionBlock(
                ch,
                use_checkpoint=use_checkpoint,
                num_heads=num_heads,
                num_head_channels=dim_head,
                use_new_attention_order=use_new_attention_order,
            ) if not use_spatial_transformer else SpatialTransformer(
                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
            )
        )
    self.input_blocks.append(TimestepEmbedSequential(*layers))
    self._feature_size += ch
    input_block_chans.append(ch)

# print(self.input_blocks[1])
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=320, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
  (1): SpatialTransformer(
    (norm): GroupNorm(32, 320, eps=1e-06, affine=True)
    (proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
    (transformer_blocks): ModuleList(
      (0): BasicTransformerBlock(
        (attn1): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=320, out_features=320, bias=False)
          (to_v): Linear(in_features=320, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (ff): FeedForward(
          (net): Sequential(
            (0): GEGLU(
              (proj): Linear(in_features=320, out_features=2560, bias=True)
            )
            (1): Dropout(p=0.0, inplace=False)
            (2): Linear(in_features=1280, out_features=320, bias=True)
          )
        )
        (attn2): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=768, out_features=320, bias=False)
          (to_v): Linear(in_features=768, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (norm1): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm2): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm3): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
      )
    )
    (proj_out): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
  )
)

# print(self.input_blocks[2])
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=320, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 320, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
  (1): SpatialTransformer(
    (norm): GroupNorm(32, 320, eps=1e-06, affine=True)
    (proj_in): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
    (transformer_blocks): ModuleList(
      (0): BasicTransformerBlock(
        (attn1): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=320, out_features=320, bias=False)
          (to_v): Linear(in_features=320, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (ff): FeedForward(
          (net): Sequential(
            (0): GEGLU(
              (proj): Linear(in_features=320, out_features=2560, bias=True)
            )
            (1): Dropout(p=0.0, inplace=False)
            (2): Linear(in_features=1280, out_features=320, bias=True)
          )
        )
        (attn2): CrossAttention(
          (to_q): Linear(in_features=320, out_features=320, bias=False)
          (to_k): Linear(in_features=768, out_features=320, bias=False)
          (to_v): Linear(in_features=768, out_features=320, bias=False)
          (to_out): Sequential(
            (0): Linear(in_features=320, out_features=320, bias=True)
            (1): Dropout(p=0.0, inplace=False)
          )
        )
        (norm1): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm2): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
        (norm3): LayerNorm((320,), eps=1e-05, elementwise_affine=True)
      )
    )
    (proj_out): Conv2d(320, 320, kernel_size=(1, 1), stride=(1, 1))
  )
)
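The CrossAttention layers printed above are where the text condition enters the UNet: attn1 is self-attention over the image tokens (to_q/to_k/to_v all take the 320-dim features), while attn2 builds its keys and values from the 768-dim CLIP context. Below is a minimal, self-contained sketch of that pattern, not the repo's implementation; the class name, head count (8) and head dim (40) are illustrative choices that happen to reproduce the 320/768 shapes in the printout.

import torch
import torch.nn as nn

class TinyCrossAttention(nn.Module):
    def __init__(self, query_dim=320, context_dim=768, heads=8, dim_head=40):
        super().__init__()
        inner = heads * dim_head                 # 320
        self.heads, self.scale = heads, dim_head ** -0.5
        self.to_q = nn.Linear(query_dim, inner, bias=False)    # queries from image features
        self.to_k = nn.Linear(context_dim, inner, bias=False)  # keys from CLIP context (768-dim)
        self.to_v = nn.Linear(context_dim, inner, bias=False)  # values from CLIP context (768-dim)
        self.to_out = nn.Linear(inner, query_dim)

    def forward(self, x, context):
        b, n, _ = x.shape
        h = self.heads
        q = self.to_q(x).view(b, n, h, -1).transpose(1, 2)                       # [b, h, n, d]
        k = self.to_k(context).view(b, context.shape[1], h, -1).transpose(1, 2)  # [b, h, 77, d]
        v = self.to_v(context).view(b, context.shape[1], h, -1).transpose(1, 2)  # [b, h, 77, d]
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)            # [b, h, n, 77]
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)                       # [b, n, 320]
        return self.to_out(out)

x = torch.randn(2, 64 * 64, 320)     # flattened 64x64 feature map as tokens
context = torch.randn(2, 77, 768)    # CLIP text embedding
print(TinyCrossAttention()(x, context).shape)  # torch.Size([2, 4096, 320])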
2-3-3--Module 3

Module 3 is a strided 2D convolution that performs downsampling.

# __init__
if level != len(channel_mult) - 1:
    out_ch = ch
    self.input_blocks.append(
        TimestepEmbedSequential(
            ResBlock(
                ch,
                time_embed_dim,
                dropout,
                out_channels=out_ch,
                dims=dims,
                use_checkpoint=use_checkpoint,
                use_scale_shift_norm=use_scale_shift_norm,
                down=True,
            )
            if resblock_updown
            else Downsample(ch, conv_resample, dims=dims, out_channels=out_ch)
        )
    )

# print(self.input_blocks[3])
TimestepEmbedSequential(
  (0): Downsample(
    (op): Conv2d(320, 320, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  )
)

2-3-4--Module 4, Module 5, Module 7 and Module 8

These share the same structure as Module 1 and Module 2 (a ResBlock followed by a SpatialTransformer); they differ only in feature dimensions.

2-3-5--Module 6 and Module 9

Same structure as Module 3: a strided 2D convolution used for downsampling.

2-3-6--Module 10 and Module 11

Module 10 and Module 11 share the same structure and each consists of a single ResBlock.

# print(self.input_blocks[10])
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=1280, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
)

# print(self.input_blocks[11])
TimestepEmbedSequential(
  (0): ResBlock(
    (in_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (h_upd): Identity()
    (x_upd): Identity()
    (emb_layers): Sequential(
      (0): SiLU()
      (1): Linear(in_features=1280, out_features=1280, bias=True)
    )
    (out_layers): Sequential(
      (0): GroupNorm32(32, 1280, eps=1e-05, affine=True)
      (1): SiLU()
      (2): Dropout(p=0, inplace=False)
      (3): Conv2d(1280, 1280, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (skip_connection): Identity()
  )
)
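Finally, tying back to the forward pass in 2-1: the B*2 batch exists because classifier-free guidance runs the unconditional and conditional prompts through a single UNet call and then recombines the two noise predictions. A hedged sketch of that recombination (function and variable names are illustrative; the samplers in the repo perform the equivalent split):

import torch

def cfg_eps(model, x, t, cond, uncond, guidance_scale=7.5):
    # duplicate the latent and timestep so both prompt branches share one forward pass
    x_in = torch.cat([x, x], dim=0)             # [B*2, 4, 64, 64]
    t_in = torch.cat([t, t], dim=0)             # [B*2]
    ctx = torch.cat([uncond, cond], dim=0)      # [B*2, 77, 768]
    eps_uncond, eps_cond = model(x_in, timesteps=t_in, context=ctx).chunk(2)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)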