Table of contents

1. Get the data
2. Create a Dataset and DataLoader
3. Define the model
4. Create engine functions to train the model
5. Create a function to save the model
6. Train, evaluate and save the model

Going modular involves turning notebook code from a Jupyter notebook into a series of different Python scripts that provide similar functionality.

We can convert the notebook code from a series of cells into the following Python files:

- data_setup.py - a file to prepare and download data if needed.
- engine.py - a file containing various training functions.
- model_builder.py or model.py - a file to create a PyTorch model.
- train.py - a file to leverage all other files and train a target PyTorch model.
- utils.py - a file dedicated to helpful utility functions.

The naming and layout of the files above will depend on your use case and code requirements. Python scripts are as general purpose as individual notebook cells, meaning you can create scripts for almost any kind of functionality.

Notebooks are great for quickly iterating, exploring and running experiments, but for larger-scale projects you may find Python scripts more reproducible and easier to run.

When you download someone's open-source project, you may be instructed to run code like the following in a terminal/command line to train a model:

```
python train.py --model MODEL_NAME --batch_size BATCH_SIZE --lr LEARNING_RATE --num_epochs NUM_EPOCHS
```

Here train.py is the target Python script; it likely contains functions to train a PyTorch model. And --model, --batch_size, --lr and --num_epochs are known as argument flags. You can set these to whatever values you like; if they are compatible with train.py they will work, and if not they will error.

For example, to train a TinyVGG model for 10 epochs with a batch size of 32 and a learning rate of 0.001:

```
python train.py --model tinyvgg --batch_size 32 --lr 0.001 --num_epochs 10
```

The directory structure for the Python scripts:

```
going_modular/
├── going_modular/
│   ├── data_setup.py
│   ├── engine.py
│   ├── model_builder.py
│   ├── train.py
│   └── utils.py
├── models/
│   ├── 05_going_modular_cell_mode_tinyvgg_model.pth
│   └── 05_going_modular_script_mode_tinyvgg_model.pth
└── data/
    └── pizza_steak_sushi/
        ├── train/
        │   ├── pizza/
        │   │   ├── image01.jpeg
        │   │   └── ...
        │   ├── steak/
        │   └── sushi/
        └── test/
            ├── pizza/
            ├── steak/
            └── sushi/
```

1. Get the data
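The data-download code itself isn't reproduced in this post. As a rough sketch (assuming the dataset is published as a zip archive; the helper names `extract_zip` and `download_and_extract` are placeholders, not from the original), a data-fetching script might look like:

```python
import zipfile
import urllib.request
from pathlib import Path

def extract_zip(zip_path: Path, target_dir: Path) -> None:
    """Extract a zip archive into target_dir, creating it if needed."""
    target_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(target_dir)

def download_and_extract(url: str, target_dir: Path) -> Path:
    """Download a zip archive from url and extract it into target_dir."""
    target_dir.mkdir(parents=True, exist_ok=True)
    zip_path = target_dir / "data.zip"
    urllib.request.urlretrieve(url, zip_path)  # fetch the archive
    extract_zip(zip_path, target_dir)
    zip_path.unlink()  # remove the archive once extracted
    return target_dir
```

For example, `download_and_extract("https://example.com/pizza_steak_sushi.zip", Path("data/pizza_steak_sushi"))` — swap in the real dataset URL from whichever project you are following.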
2. Create a Dataset and DataLoader (data_setup.py)

```python
"""
Contains functionality for creating PyTorch DataLoaders for
image classification data.
"""
import os

from torchvision import datasets, transforms
from torch.utils.data import DataLoader

NUM_WORKERS = os.cpu_count()

def create_dataloaders(
    train_dir: str,
    test_dir: str,
    transform: transforms.Compose,
    batch_size: int,
    num_workers: int=NUM_WORKERS
):
  """Creates training and testing DataLoaders.

  Takes in a training directory and testing directory path and turns
  them into PyTorch Datasets and then into PyTorch DataLoaders.

  Args:
    train_dir: Path to training directory.
    test_dir: Path to testing directory.
    transform: torchvision transforms to perform on training and testing data.
    batch_size: Number of samples per batch in each of the DataLoaders.
    num_workers: An integer for number of workers per DataLoader.

  Returns:
    A tuple of (train_dataloader, test_dataloader, class_names).
    Where class_names is a list of the target classes.
    Example usage:
      train_dataloader, test_dataloader, class_names = \
        = create_dataloaders(train_dir="path/to/train_dir",
                             test_dir="path/to/test_dir",
                             transform=some_transform,
                             batch_size=32,
                             num_workers=4)
  """
  # Use ImageFolder to create dataset(s)
  train_data = datasets.ImageFolder(train_dir, transform=transform)
  test_data = datasets.ImageFolder(test_dir, transform=transform)

  # Get class names
  class_names = train_data.classes

  # Turn images into data loaders
  train_dataloader = DataLoader(
      train_data,
      batch_size=batch_size,
      shuffle=True,
      num_workers=num_workers,
      pin_memory=True,
  )
  test_dataloader = DataLoader(
      test_data,
      batch_size=batch_size,
      shuffle=False,
      num_workers=num_workers,
      pin_memory=True,
  )

  return train_dataloader, test_dataloader, class_names
```

If we want to create DataLoaders, we can now use the function inside data_setup.py like so:

```python
# Import data_setup.py
from going_modular import data_setup

# Create train/test dataloader and get class names as a list
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(...)
```
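One detail worth knowing about `class_names`: ImageFolder derives the class list from the subdirectory names, sorted alphabetically, so `train/pizza`, `train/steak` and `train/sushi` become `['pizza', 'steak', 'sushi']`. A pure-Python sketch of that convention (`find_classes` here is a stand-in for illustration, not the torchvision internal):

```python
from pathlib import Path

def find_classes(directory: str) -> list:
    """Mimic how ImageFolder derives class names: sorted subdirectory names."""
    return sorted(entry.name for entry in Path(directory).iterdir() if entry.is_dir())
```

This is also why the train and test folders must contain the same class subdirectories: the label indices are assigned from this sorted order.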
3. Define the model (model_builder.py)

```python
"""
Contains PyTorch model code to instantiate a TinyVGG model.
"""
import torch
from torch import nn

class TinyVGG(nn.Module):
  """Creates the TinyVGG architecture.

  Replicates the TinyVGG architecture from the CNN explainer website in PyTorch.
  See the original architecture here: https://poloclub.github.io/cnn-explainer/

  Args:
    input_shape: An integer indicating number of input channels.
    hidden_units: An integer indicating number of hidden units between layers.
    output_shape: An integer indicating number of output units.
  """
  def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -> None:
      super().__init__()
      self.conv_block_1 = nn.Sequential(
          nn.Conv2d(in_channels=input_shape,
                    out_channels=hidden_units,
                    kernel_size=3,
                    stride=1,
                    padding=0),
          nn.ReLU(),
          nn.Conv2d(in_channels=hidden_units,
                    out_channels=hidden_units,
                    kernel_size=3,
                    stride=1,
                    padding=0),
          nn.ReLU(),
          nn.MaxPool2d(kernel_size=2, stride=2)
      )
      self.conv_block_2 = nn.Sequential(
          nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),
          nn.ReLU(),
          nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),
          nn.ReLU(),
          nn.MaxPool2d(2)
      )
      self.classifier = nn.Sequential(
          nn.Flatten(),
          # Where did this in_features shape come from?
          # It's because each layer of our network compresses and changes the shape of our input data.
          nn.Linear(in_features=hidden_units*13*13,
                    out_features=output_shape)
      )

  def forward(self, x: torch.Tensor):
      x = self.conv_block_1(x)
      x = self.conv_block_2(x)
      x = self.classifier(x)
      return x
      # return self.classifier(self.conv_block_2(self.conv_block_1(x))) # <- leverage the benefits of operator fusion
```

4. Create engine functions to train the model

- train_step() - takes in a model, a DataLoader, a loss function and an optimizer, and trains the model on the DataLoader.
- test_step() - takes in a model, a DataLoader and a loss function, and evaluates the model on the DataLoader.
- train() - performs 1. and 2. together for a given number of epochs and returns a results dictionary.
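The `in_features=hidden_units*13*13` in the classifier follows from the 64x64 input images used later in train.py: each 3x3 convolution with padding 0 shrinks a spatial dimension by 2, and each 2x2 max-pool halves it (with integer division). A quick sanity check of that arithmetic:

```python
def conv2d_out(size: int, kernel: int = 3, stride: int = 1, padding: int = 0) -> int:
    """Output spatial size of a conv layer: floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size: int, kernel: int = 2, stride: int = 2) -> int:
    """Output spatial size of a max-pool layer."""
    return (size - kernel) // stride + 1

size = 64  # input images are resized to 64x64 in train.py
for _ in range(2):           # two conv blocks
    size = conv2d_out(size)  # first conv in the block
    size = conv2d_out(size)  # second conv in the block
    size = maxpool_out(size)
print(size)  # 13 -> nn.Flatten() yields hidden_units * 13 * 13 features
```

If you change the input resolution, this number changes too, which is why a shape mismatch error in the `nn.Linear` layer is one of the most common issues when adapting TinyVGG to new data.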
Since these functions will be the engine of our model training, we can put them all into a Python script called engine.py:

```python
"""
Contains functions for training and testing a PyTorch model.
"""
import torch

from tqdm.auto import tqdm
from typing import Dict, List, Tuple

def train_step(model: torch.nn.Module, 
               dataloader: torch.utils.data.DataLoader, 
               loss_fn: torch.nn.Module, 
               optimizer: torch.optim.Optimizer,
               device: torch.device) -> Tuple[float, float]:
  """Trains a PyTorch model for a single epoch.

  Turns a target PyTorch model to training mode and then
  runs through all of the required training steps (forward
  pass, loss calculation, optimizer step).

  Args:
    model: A PyTorch model to be trained.
    dataloader: A DataLoader instance for the model to be trained on.
    loss_fn: A PyTorch loss function to minimize.
    optimizer: A PyTorch optimizer to help minimize the loss function.
    device: A target device to compute on (e.g. "cuda" or "cpu").

  Returns:
    A tuple of training loss and training accuracy metrics.
    In the form (train_loss, train_accuracy). For example:
    (0.1112, 0.8743)
  """
  # Put model in train mode
  model.train()

  # Setup train loss and train accuracy values
  train_loss, train_acc = 0, 0

  # Loop through data loader data batches
  for batch, (X, y) in enumerate(dataloader):
      # Send data to target device
      X, y = X.to(device), y.to(device)

      # 1. Forward pass
      y_pred = model(X)

      # 2. Calculate and accumulate loss
      loss = loss_fn(y_pred, y)
      train_loss += loss.item()

      # 3. Optimizer zero grad
      optimizer.zero_grad()

      # 4. Loss backward
      loss.backward()

      # 5. Optimizer step
      optimizer.step()

      # Calculate and accumulate accuracy metric across all batches
      y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
      train_acc += (y_pred_class == y).sum().item()/len(y_pred)

  # Adjust metrics to get average loss and accuracy per batch
  train_loss = train_loss / len(dataloader)
  train_acc = train_acc / len(dataloader)
  return train_loss, train_acc

def test_step(model: torch.nn.Module, 
              dataloader: torch.utils.data.DataLoader, 
              loss_fn: torch.nn.Module,
              device: torch.device) -> Tuple[float, float]:
  """Tests a PyTorch model for a single epoch.

  Turns a target PyTorch model to "eval" mode and then performs
  a forward pass on a testing dataset.

  Args:
    model: A PyTorch model to be tested.
    dataloader: A DataLoader instance for the model to be tested on.
    loss_fn: A PyTorch loss function to calculate loss on the test data.
    device: A target device to compute on (e.g. "cuda" or "cpu").

  Returns:
    A tuple of testing loss and testing accuracy metrics.
    In the form (test_loss, test_accuracy). For example:
    (0.0223, 0.8985)
  """
  # Put model in eval mode
  model.eval()

  # Setup test loss and test accuracy values
  test_loss, test_acc = 0, 0

  # Turn on inference context manager
  with torch.inference_mode():
      # Loop through DataLoader batches
      for batch, (X, y) in enumerate(dataloader):
          # Send data to target device
          X, y = X.to(device), y.to(device)

          # 1. Forward pass
          test_pred_logits = model(X)

          # 2. Calculate and accumulate loss
          loss = loss_fn(test_pred_logits, y)
          test_loss += loss.item()

          # Calculate and accumulate accuracy
          test_pred_labels = test_pred_logits.argmax(dim=1)
          test_acc += ((test_pred_labels == y).sum().item()/len(test_pred_labels))

  # Adjust metrics to get average loss and accuracy per batch
  test_loss = test_loss / len(dataloader)
  test_acc = test_acc / len(dataloader)
  return test_loss, test_acc

def train(model: torch.nn.Module, 
          train_dataloader: torch.utils.data.DataLoader, 
          test_dataloader: torch.utils.data.DataLoader, 
          optimizer: torch.optim.Optimizer,
          loss_fn: torch.nn.Module,
          epochs: int,
          device: torch.device) -> Dict[str, List]:
  """Trains and tests a PyTorch model.

  Passes a target PyTorch model through train_step() and test_step()
  functions for a number of epochs, training and testing the model
  in the same epoch loop.

  Calculates, prints and stores evaluation metrics throughout.

  Args:
    model: A PyTorch model to be trained and tested.
    train_dataloader: A DataLoader instance for the model to be trained on.
    test_dataloader: A DataLoader instance for the model to be tested on.
    optimizer: A PyTorch optimizer to help minimize the loss function.
    loss_fn: A PyTorch loss function to calculate loss on both datasets.
    epochs: An integer indicating how many epochs to train for.
    device: A target device to compute on (e.g. "cuda" or "cpu").

  Returns:
    A dictionary of training and testing loss as well as training and
    testing accuracy metrics. Each metric has a value in a list for
    each epoch.
    In the form: {train_loss: [...],
                  train_acc: [...],
                  test_loss: [...],
                  test_acc: [...]}
    For example if training for epochs=2:
                 {train_loss: [2.0616, 1.0537],
                  train_acc: [0.3945, 0.3945],
                  test_loss: [1.2641, 1.5706],
                  test_acc: [0.3400, 0.2973]}
  """
  # Create empty results dictionary
  results = {"train_loss": [],
             "train_acc": [],
             "test_loss": [],
             "test_acc": []}

  # Loop through training and testing steps for a number of epochs
  for epoch in tqdm(range(epochs)):
      train_loss, train_acc = train_step(model=model,
                                         dataloader=train_dataloader,
                                         loss_fn=loss_fn,
                                         optimizer=optimizer,
                                         device=device)
      test_loss, test_acc = test_step(model=model,
                                      dataloader=test_dataloader,
                                      loss_fn=loss_fn,
                                      device=device)

      # Print out what's happening
      print(
          f"Epoch: {epoch+1} | "
          f"train_loss: {train_loss:.4f} | "
          f"train_acc: {train_acc:.4f} | "
          f"test_loss: {test_loss:.4f} | "
          f"test_acc: {test_acc:.4f}"
      )

      # Update results dictionary
      results["train_loss"].append(train_loss)
      results["train_acc"].append(train_acc)
      results["test_loss"].append(test_loss)
      results["test_acc"].append(test_acc)

  # Return the filled results at the end of the epochs
  return results
```

Now that we have the engine.py script, we can import functions from it like so:

```python
# Import engine.py
from going_modular import engine

# Use train() by calling it from engine.py
engine.train(...)
```

5. Create a function to save the model (utils.py)

Save a save_model() function into a file called utils.py:
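A point that is easy to miss in train_step() and test_step(): the loss and accuracy variables accumulate one value per batch, so dividing by `len(dataloader)` (the number of batches) at the end yields the average per batch. The same arithmetic in plain Python, with made-up per-batch values for illustration:

```python
# Hypothetical per-batch (loss, accuracy) values for a 3-batch epoch
batch_metrics = [(0.9, 0.50), (0.7, 0.75), (0.5, 1.00)]

train_loss, train_acc = 0.0, 0.0
for loss, acc in batch_metrics:   # mirrors the accumulation inside the batch loop
    train_loss += loss
    train_acc += acc

train_loss /= len(batch_metrics)  # average loss per batch
train_acc /= len(batch_metrics)   # average accuracy per batch
print(round(train_loss, 4), round(train_acc, 4))  # 0.7 0.75
```

Note this is an average over batches, not over samples; the two only coincide exactly when every batch has the same size (the last batch is often smaller).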
```python
"""
Contains various utility functions for PyTorch model training and saving.
"""
import torch
from pathlib import Path

def save_model(model: torch.nn.Module,
               target_dir: str,
               model_name: str):
  """Saves a PyTorch model to a target directory.

  Args:
    model: A target PyTorch model to save.
    target_dir: A directory for saving the model to.
    model_name: A filename for the saved model. Should include
      either ".pth" or ".pt" as the file extension.

  Example usage:
    save_model(model=model_0,
               target_dir="models",
               model_name="05_going_modular_tingvgg_model.pth")
  """
  # Create target directory
  target_dir_path = Path(target_dir)
  target_dir_path.mkdir(parents=True,
                        exist_ok=True)

  # Create model save path
  assert model_name.endswith(".pth") or model_name.endswith(".pt"), "model_name should end with '.pt' or '.pth'"
  model_save_path = target_dir_path / model_name

  # Save the model state_dict()
  print(f"[INFO] Saving model to: {model_save_path}")
  torch.save(obj=model.state_dict(),
             f=model_save_path)
```

Rather than rewriting it every time, we can import it and use it like so:

```python
# Import utils.py
from going_modular import utils

# Save a model to file
utils.save_model(model=...,
                 target_dir=...,
                 model_name=...)
```

6. Train, evaluate and save the model (train.py)

We can then train a PyTorch model with a single line of code at the command line:

```
python train.py
```

To create train.py, we'll go through the following steps:

1. Import the various dependencies, namely torch, os, torchvision.transforms and all of the scripts from the going_modular directory: data_setup, engine, model_builder, utils. Note: since train.py will live inside the going_modular directory, we can import the other modules via `import ...` rather than `from going_modular import ...`.
2. Set up various hyperparameters such as batch size, number of epochs, learning rate and number of hidden units (these could be set in the future via Python's argparse).
3. Set up the training and testing directories.
4. Set up device-agnostic code.
5. Create the necessary data transforms.
6. Create the DataLoaders using data_setup.py.
7. Create the model using model_builder.py.
8. Set up the loss function and optimizer.
9. Train the model using engine.py.
10. Save the model using utils.py.
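The path handling in save_model() is pure pathlib and can be sketched (and sanity-checked) without torch; `build_model_save_path` below is a hypothetical helper that isolates just that logic:

```python
from pathlib import Path

def build_model_save_path(target_dir: str, model_name: str) -> Path:
    """Create target_dir (and any parents) if needed and validate the filename extension."""
    assert model_name.endswith((".pth", ".pt")), "model_name should end with '.pt' or '.pth'"
    target_dir_path = Path(target_dir)
    target_dir_path.mkdir(parents=True, exist_ok=True)  # no error if it already exists
    return target_dir_path / model_name
```

`torch.save(obj=model.state_dict(), f=build_model_save_path(...))` would then do the actual saving; the extension check exists purely to catch typos before a file with an unexpected name is written.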
```python
"""
Trains a PyTorch image classification model using device-agnostic code.
"""
import os
import torch
import data_setup, engine, model_builder, utils

from torchvision import transforms

# Setup hyperparameters
NUM_EPOCHS = 5
BATCH_SIZE = 32
HIDDEN_UNITS = 10
LEARNING_RATE = 0.001

# Setup directories
train_dir = "data/pizza_steak_sushi/train"
test_dir = "data/pizza_steak_sushi/test"

# Setup target device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Create transforms
data_transform = transforms.Compose([
  transforms.Resize((64, 64)),
  transforms.ToTensor()
])

# Create DataLoaders with help from data_setup.py
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
    train_dir=train_dir,
    test_dir=test_dir,
    transform=data_transform,
    batch_size=BATCH_SIZE
)

# Create model with help from model_builder.py
model = model_builder.TinyVGG(
    input_shape=3,
    hidden_units=HIDDEN_UNITS,
    output_shape=len(class_names)
).to(device)

# Set loss and optimizer
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(),
                             lr=LEARNING_RATE)

# Start training with help from engine.py
engine.train(model=model,
             train_dataloader=train_dataloader,
             test_dataloader=test_dataloader,
             loss_fn=loss_fn,
             optimizer=optimizer,
             epochs=NUM_EPOCHS,
             device=device)

# Save the model with help from utils.py
utils.save_model(model=model,
                 target_dir="models",
                 model_name="05_going_modular_script_mode_tinyvgg_model.pth")
```

We could adjust the train.py file to take argument-flag inputs with Python's argparse module, which would let us provide different hyperparameter settings as discussed earlier:

```
python train.py --model MODEL_NAME --batch_size BATCH_SIZE --lr LEARNING_RATE --num_epochs NUM_EPOCHS
```
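As a sketch of that argparse extension (flag names follow the example commands above; the defaults match the hard-coded hyperparameters, but this is not the tutorial's exact implementation):

```python
import argparse

def get_args(argv=None):
    """Parse training hyperparameters from the command line."""
    parser = argparse.ArgumentParser(description="Train a PyTorch image classification model.")
    parser.add_argument("--model", default="tinyvgg", help="Name of the model to train.")
    parser.add_argument("--batch_size", type=int, default=32, help="Samples per batch.")
    parser.add_argument("--lr", type=float, default=0.001, help="Learning rate for the optimizer.")
    parser.add_argument("--num_epochs", type=int, default=5, help="Number of training epochs.")
    return parser.parse_args(argv)

# Passing an explicit list mimics the earlier example command;
# calling get_args() with no argument reads sys.argv instead.
args = get_args(["--model", "tinyvgg", "--batch_size", "32", "--lr", "0.001", "--num_epochs", "10"])
```

train.py would then replace its constants with `NUM_EPOCHS = args.num_epochs`, `BATCH_SIZE = args.batch_size` and so on.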