# Performance Testing Hadoop with Its Built-in Benchmarks

Hadoop ships with several benchmark tools in its test jars. This post walks through four of them:

- TestDFSIO (read/write)
- NNBench
- MRBench
- SliveTest

## TestDFSIO (read/write)

TestDFSIO is a benchmark tool bundled with Hadoop for measuring HDFS read and write performance. It simulates a large number of file read and write operations and reports the corresponding performance metrics.

### What it tests

TestDFSIO runs in two phases: write and read. The write phase generates the requested number of files of the requested size and writes them to HDFS. The read phase reads those files back from HDFS and computes the read throughput.

### Parameters

From `hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java`:

```java
private static final Logger LOG = LoggerFactory.getLogger(TestDFSIO.class);
private static final int DEFAULT_BUFFER_SIZE = 1000000;
private static final String BASE_FILE_NAME = "test_io_";
private static final String DEFAULT_RES_FILE_NAME = "TestDFSIO_results.log";
private static final long MEGA = ByteMultiple.MB.value();
private static final int DEFAULT_NR_BYTES = 128;
private static final int DEFAULT_NR_FILES = 4;
private static final String USAGE =
    "Usage: " + TestDFSIO.class.getSimpleName() +
    " [genericOptions]" +
    " -read [-random | -backward | -skip [-skipSize Size]] |" +
    " -write | -append | -truncate | -clean" +
    " [-compression codecClassName]" +
    " [-nrFiles N]" +
    " [-size Size[B|KB|MB|GB|TB]]" +
    " [-resFile resultFileName] [-bufferSize Bytes]" +
    " [-storagePolicy storagePolicyName]" +
    " [-erasureCodePolicy erasureCodePolicyName]";
```

- `-write`: run the write phase.
- `-read`: run the read phase.
- `-nrFiles N`: number of files to create.
- `-size Size`: size of each file.
- `-resFile filename`: path of the results file.
- `-bufferSize Bytes`: buffer size to use when reading and writing files.
- `-compression codecClassName`: compression codec to use when writing files.
- `-storagePolicy storagePolicyName`: storage policy to apply when writing files; the storage policy controls how data is replicated and placed in HDFS.
- `-erasureCodePolicy erasureCodePolicyName`: erasure-coding policy to apply when writing files.

### Example commands

Create 10 files of 100 MB each; the results are appended to `TestDFSIO_results.log` in the current directory:

```
hadoop jar hadoop-mapreduce-client-jobclient-3.3.6-tests.jar TestDFSIO -write -nrFiles 10 -size 100MB
```

Read back the 10 files of 100 MB each, again logging to `TestDFSIO_results.log`:

```
hadoop jar hadoop-mapreduce-client-jobclient-3.3.6-tests.jar TestDFSIO -read -nrFiles 10 -size 100MB
```

Contents of `TestDFSIO_results.log`:

```
----- TestDFSIO ----- : write
           Date & time: Thu Dec 14 13:21:57 CST 2023
       Number of files: 10
Total MBytes processed: 1000
     Throughput mb/sec: 49.72
Average IO rate mb/sec: 190.11
 IO rate std deviation: 75.6
    Test exec time sec: 27.01

----- TestDFSIO ----- : read
           Date & time: Thu Dec 14 13:36:56 CST 2023
       Number of files: 10
Total MBytes processed: 1000
     Throughput mb/sec: 631.31
Average IO rate mb/sec: 655.3
 IO rate std deviation: 127.5
    Test exec time sec: 8.03
```
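For repeatable runs it is convenient to drive the write phase, the read phase, and the cleanup from one script. Below is a minimal bash sketch; the jar location under `$HADOOP_HOME` is an assumption for a stock Hadoop 3.3.6 installation, so adjust it for your cluster.

```bash
#!/usr/bin/env bash
# Minimal TestDFSIO driver: write, then read, then clean up.
# JAR path is an assumption for a stock Hadoop 3.3.6 install.
set -euo pipefail

JAR="${HADOOP_HOME:?}/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.6-tests.jar"
NR_FILES=10
SIZE=100MB

# Write phase: generates the benchmark files in HDFS.
hadoop jar "$JAR" TestDFSIO -write -nrFiles "$NR_FILES" -size "$SIZE" -resFile TestDFSIO_results.log

# Read phase: reads back the files written above.
hadoop jar "$JAR" TestDFSIO -read -nrFiles "$NR_FILES" -size "$SIZE" -resFile TestDFSIO_results.log

# Remove the benchmark files so they do not skew later runs.
hadoop jar "$JAR" TestDFSIO -clean
```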
## NNBench

NNBench is a benchmark tool bundled with Hadoop for testing HDFS performance. It simulates multiple clients performing read and write operations concurrently, to evaluate how well HDFS handles concurrent access and what throughput it sustains.

### What it tests

- Data generation: NNBench generates a series of blocks of a given size (set with `-blockSize`) to be written to HDFS.
- Write test: multiple clients write the generated blocks to HDFS at the same time; the number of map tasks and the number of writes per task are configurable. Selected with `-operation create_write`.
- Read test: multiple clients read the previously written blocks from HDFS at the same time. Selected with `-operation open_read`.

### Parameters

From `hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java`:

```
Options:
-operation <Available operations are create_write open_read rename delete. This option is mandatory>
  * NOTE: The open_read, rename and delete operations assume that the files they operate on are already available. The create_write operation must be run before running the other operations.
-maps <number of maps. default is 1. This is not mandatory>
-reduces <number of reduces. default is 1. This is not mandatory>
-startTime <time to start, given in seconds from the epoch. Make sure this is far enough into the future, so all maps (operations) will start at the same time. default is launch time + 2 mins. This is not mandatory>
-blockSize <Block size in bytes. default is 1. This is not mandatory>
-bytesToWrite <Bytes to write. default is 0. This is not mandatory>
-bytesPerChecksum <Bytes per checksum for the files. default is 1. This is not mandatory>
-numberOfFiles <number of files to create. default is 1. This is not mandatory>
-replicationFactorPerFile <Replication factor for the files. default is 1. This is not mandatory>
-baseDir <base DFS path. default is /benchmarks/NNBench. Supports cross-cluster access by using full path with schema and cluster. This is not mandatory>
-readFileAfterOpen <true or false. if true, it reads the file and reports the average time to read. This is valid with the open_read operation. default is false. This is not mandatory>
-help: Display the help statement
```

- `-operation op`: the operation to run: `create_write`, `open_read`, `rename`, or `delete`.
- `-maps numMaps`: number of mappers used by the MapReduce job.
- `-reduces numReduces`: number of reducers used by the MapReduce job.
- `-blockSize blockSize`: size of the generated blocks.
- `-bytesToWrite bytesToWrite`: amount of data to write.
- `-bytesPerChecksum bytesPerChecksum`: number of bytes covered by each checksum.
- `-numberOfFiles numberOfFiles`: number of files to create or read.
- `-replicationFactorPerFile replicationFactor`: replication factor for the written blocks.
- `-baseDir baseDir`: base DFS path; the default is `/benchmarks/NNBench`.
- `-readFileAfterOpen true|false`: if true, the file is read after opening and the average read time is reported. Only valid with `open_read`; the default is false.
- ...

### Example commands

Write test (`-operation create_write`): use 12 mappers and 6 reducers, a 64 MB block size (`-blockSize 67108864`), 128 MB of data written in total (`-bytesToWrite 134217728`), one file (`-numberOfFiles 1`), a replication factor of 2 (`-replicationFactorPerFile 2`), and a per-host base DFS path (/benchmarks/NNBench-`hostname -s`). Results go to `NNBench_results.log` in the current directory.

```
hadoop jar hadoop-mapreduce-client-jobclient-3.3.6-tests.jar nnbench -operation create_write -maps 12 -reduces 6 -blockSize 67108864 -bytesToWrite 134217728 -numberOfFiles 1 -replicationFactorPerFile 2 -baseDir /benchmarks/NNBench-`hostname -s`
```

Read test (`-operation open_read`): the same settings, plus reporting of the average read time (`-readFileAfterOpen true`).

```
hadoop jar hadoop-mapreduce-client-jobclient-3.3.6-tests.jar nnbench -operation open_read -maps 12 -reduces 6 -blockSize 67108864 -bytesToWrite 134217728 -numberOfFiles 1 -replicationFactorPerFile 2 -readFileAfterOpen true -baseDir /benchmarks/NNBench-`hostname -s`
```

Contents of `NNBench_results.log`:

```
-------------- NNBench -------------- :
        Version: NameNode Benchmark 0.4
    Date & time: 2023-12-14 14:11:35,409
 Test Operation: create_write
     Start time: 2023-12-14 14:11:20,732
    Maps to run: 12
 Reduces to run: 6
Block Size (bytes): 67108864
 Bytes to write: 134217728
Bytes per checksum: 1
Number of files: 1
Replication factor: 2
Successful file operations: 12
# maps that missed the barrier: 0
# exceptions: 0
TPS: Create/Write/Close: 2
Avg exec time (ms): Create/Write/Close: 882.25
Avg Lat (ms): Create/Write: 670.9166666666666
Avg Lat (ms): Close: 109.58333333333333
RAW DATA: AL Total #1: 8051
RAW DATA: AL Total #2: 1315
RAW DATA: TPS Total (ms): 10587
RAW DATA: Longest Map Time (ms): 11954.0
RAW DATA: Late maps: 0
RAW DATA: # of exceptions: 0

-------------- NNBench -------------- :
        Version: NameNode Benchmark 0.4
    Date & time: 2023-12-14 14:17:34,927
 Test Operation: open_read
     Start time: 2023-12-14 14:17:26,840
    Maps to run: 12
 Reduces to run: 6
Block Size (bytes): 67108864
 Bytes to write: 134217728
Bytes per checksum: 1
Number of files: 1
Replication factor: 2
Successful file operations: 12
# maps that missed the barrier: 0
# exceptions: 0
TPS: Open/Read: 2
Avg Exec time (ms): Open/Read: 320.75
Avg Lat (ms): Open: 3.0
Avg Lat (ms): Read: 175.75
RAW DATA: AL Total #1: 36
RAW DATA: AL Total #2: 2109
RAW DATA: TPS Total (ms): 3849
RAW DATA: Longest Map Time (ms): 4837.0
RAW DATA: Late maps: 0
RAW DATA: # of exceptions: 0
```
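Since `open_read` assumes the files from `create_write` already exist (see the NOTE in the usage text above), the two operations are naturally scripted back to back. A sketch, again assuming the Hadoop 3.3.6 test jar path:

```bash
#!/usr/bin/env bash
# NNBench sketch: create_write must run before open_read.
# JAR path is an assumption for a stock Hadoop 3.3.6 install.
set -euo pipefail

JAR="${HADOOP_HOME:?}/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.6-tests.jar"
BASE="/benchmarks/NNBench-$(hostname -s)"   # per-host base dir avoids collisions

hadoop jar "$JAR" nnbench -operation create_write \
  -maps 12 -reduces 6 -blockSize 67108864 -bytesToWrite 134217728 \
  -numberOfFiles 1 -replicationFactorPerFile 2 -baseDir "$BASE"

hadoop jar "$JAR" nnbench -operation open_read \
  -maps 12 -reduces 6 -numberOfFiles 1 \
  -readFileAfterOpen true -baseDir "$BASE"

# NNBench does not clean up after itself; remove the test files manually.
hdfs dfs -rm -r -f "$BASE"
```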
## MRBench

MRBench is a benchmark tool bundled with Hadoop for testing MapReduce performance. It runs a small MapReduce job repeatedly to evaluate the cluster's concurrent processing capability and throughput.

### What it tests

- Data generation: MRBench generates a text input file whose number of lines is set with `-inputLines` and whose ordering (ascending, descending, or random) is set with `-inputType`.
- Job execution: the job is scheduled repeatedly to simulate a real workload; the number of runs and the mappers and reducers per run are controlled by `-numRuns`, `-maps`, and `-reduces`.
- Data read-back: during the test the generated input is read back to verify that the data is correct.

### Parameters

From `hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MRBench.java`:

```
Usage: mrbench [-baseDir <base DFS path for output/input, default is /benchmarks/MRBench>]
               [-jar <local path to job jar file containing Mapper and Reducer implementations, default is current jar file>]
               [-numRuns <number of times to run the job, default is 1>]
               [-maps <number of maps for each run, default is 2>]
               [-reduces <number of reduces for each run, default is 1>]
               [-inputLines <number of input lines to generate, default is 1>]
               [-inputType <type of input to generate, one of ascending (default), descending, random>]
               [-verbose]
```

- `-maps numMaps`: number of mappers per run.
- `-reduces numReduces`: number of reducers per run.
- `-numRuns numRuns`: how many times to run the MapReduce job.
- `-inputLines lines`: number of input lines to generate.
- ...

### Example command

Run the job 10 times (`-numRuns 10`), using 12 mappers and 6 reducers (`-maps 12 -reduces 6`) and 50 generated input lines per file (`-inputLines 50`):

```
hadoop jar hadoop-mapreduce-client-jobclient-3.3.6-tests.jar mrbench -maps 12 -reduces 6 -numRuns 10 -inputLines 50
```

Result:

```
DataLines       Maps    Reduces AvgTime (milliseconds)
50              12      6       2683
```
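Because MRBench reports the average wall-clock time of a small job, it can be useful to sweep the input size and watch how that average changes. A sketch (the jar path is an assumption, and the line counts chosen here are arbitrary):

```bash
#!/usr/bin/env bash
# Sweep MRBench over several input sizes; the printed AvgTime line
# shows how small-job latency grows with input volume.
set -euo pipefail

JAR="${HADOOP_HOME:?}/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.6-tests.jar"

for lines in 50 500 5000; do
  echo "=== mrbench with -inputLines $lines ==="
  hadoop jar "$JAR" mrbench -maps 12 -reduces 6 -numRuns 10 -inputLines "$lines"
done
```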
## SliveTest

SliveTest evaluates how the Hadoop file system (HDFS) and the MapReduce framework behave under high load and concurrent access, and whether data integrity is preserved.

Because the SliveTest class exists as a test class inside the Hadoop MapReduce project, it is usually not shipped as a standalone tool; it is run from the test code, for example by launching the class directly as in the example below.

### Parameters

From `hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java`:

```
usage: SliveTest 0.1.0
-append <arg>         pct,distribution where distribution is one of beg,end,uniform,mid
-appendSize <arg>     Min,max for size to append (min=max=MAX_LONG=blocksize)
-baseDir <arg>        Base directory path
-blockSize <arg>      Min,max for dfs file block size
-cleanup <arg>        Cleanup & remove directory after reporting
-create <arg>         pct,distribution where distribution is one of beg,end,uniform,mid
-delete <arg>         pct,distribution where distribution is one of beg,end,uniform,mid
-dirSize <arg>        Max files per directory
-duration <arg>       Duration of a map task in seconds (MAX_INT for no limit)
-exitOnError          Exit on first error
-files <arg>          Max total number of files
-help                 Usage information
-ls <arg>             pct,distribution where distribution is one of beg,end,uniform,mid
-maps <arg>           Number of maps
-mkdir <arg>          pct,distribution where distribution is one of beg,end,uniform,mid
-ops <arg>            Max number of operations per map
-packetSize <arg>     Dfs write packet size
-queue <arg>          Queue name
-read <arg>           pct,distribution where distribution is one of beg,end,uniform,mid
-readSize <arg>       Min,max for size to read (min=max=MAX_LONG=read entire file)
-reduces <arg>        Number of reduces
-rename <arg>         pct,distribution where distribution is one of beg,end,uniform,mid
-replication <arg>    Min,max value for replication amount
-resFile <arg>        Result file name
-seed <arg>           Random number seed
-sleep <arg>          Min,max for millisecond of random sleep to perform (between operations)
-truncate <arg>       pct,distribution where distribution is one of beg,end,uniform,mid
-truncateSize <arg>   Min,max for size to truncate (min=max=MAX_LONG=blocksize)
-truncateWait <arg>   Should wait for truncate recovery
-writeSize <arg>      Min,max for size to write (min=max=MAX_LONG=blocksize)
```

The operation flags `-append`, `-create`, `-delete`, `-ls`, `-mkdir`, `-read`, `-rename`, and `-truncate` all take `pct,distribution`, where `pct` is the percentage of operations of that type and `distribution` is one of `beg` (beginning), `end`, `uniform`, or `mid` (middle). The remaining flags:

- `-appendSize arg`: min,max size for appends; min=max=MAX_LONG means append up to the block size.
- `-baseDir arg`: base directory path, i.e. the root under which the operations run.
- `-blockSize arg`: min,max HDFS file block size.
- `-cleanup arg`: whether to clean up and remove the directory after reporting; `true` or `false`.
- `-dirSize arg`: maximum number of files per directory.
- `-duration arg`: duration of each map task in seconds; MAX_INT means no time limit.
- `-exitOnError`: exit as soon as an error occurs.
- `-files arg`: maximum total number of files.
- `-help`: show usage information.
- `-maps arg`: number of map tasks.
- `-ops arg`: maximum number of operations per map task.
- `-packetSize arg`: DFS write packet size.
- `-queue arg`: name of the queue the job is submitted to.
- `-readSize arg`: min,max size for reads; min=max=MAX_LONG means read the entire file.
- `-reduces arg`: number of reduce tasks.
- `-replication arg`: min,max replication factor for files.
- `-resFile arg`: result file name.
- `-seed arg`: random number seed.
- `-sleep arg`: min,max random sleep between operations, in milliseconds.
- `-truncateSize arg`: min,max size for truncates; min=max=MAX_LONG means truncate up to the block size.
- `-truncateWait arg`: whether to wait for truncate recovery; `true` or `false`.
- `-writeSize arg`: min,max size for writes; min=max=MAX_LONG means write up to the block size.

### Example command

Run with 12 map tasks and 6 reduce tasks, at most 5 files and 5 operations per map, waiting for truncate recovery and cleaning up afterwards:

```
hadoop org.apache.hadoop.fs.slive.SliveTest -maps 12 -reduces 6 -files 5 -ops 5 -truncateWait true -cleanup true
```

If `-cleanup true` was not set, the test artifacts have to be removed manually after the run:

```
hdfs dfs -rm -r -f hdfs://hadoop-system-namenode:9000/test/slive/slive/output
```
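The example above uses the default operation mix. The `pct,distribution` flags from the usage text let you weight the mix explicitly; the sketch below favors creates and reads. The percentages and limits here are illustrative only, not a recommended profile (see SliveTest.java for how the percentages are validated):

```bash
#!/usr/bin/env bash
# SliveTest with an explicit operation mix, weighted toward creates
# and reads. All percentages and limits are illustrative.
set -euo pipefail

hadoop org.apache.hadoop.fs.slive.SliveTest \
  -maps 12 -reduces 6 \
  -files 100 -ops 50 -duration 300 \
  -create 40,uniform \
  -read 40,uniform \
  -mkdir 10,uniform \
  -delete 10,uniform \
  -cleanup true
```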
When the job finishes, SliveTest writes a per-operation report to its log and to a result file:

```
2023-12-14 16:51:05,863 INFO slive.SliveTest: Reporting on job:
2023-12-14 16:51:05,863 INFO slive.SliveTest: Writing report using contents of /test/slive/slive/output
2023-12-14 16:51:05,925 INFO slive.SliveTest: Report results being placed to logging output and to file /opt/hadoop/share/hadoop/mapreduce/part-0000
2023-12-14 16:51:05,926 INFO slive.ReportWriter: Basic report for operation type AppendOp
2023-12-14 16:51:05,926 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,926 INFO slive.ReportWriter: Measurement "files_not_found" = 1
2023-12-14 16:51:05,926 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type CreateOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "bytes_written" = 67108864
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "milliseconds_taken" = 3018
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "successes" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Rate for measurement "bytes_written" = 21.206 MB/sec
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Rate for measurement "op_count" = 0.331 operations/sec
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Rate for measurement "successes" = 0.331 successes/sec
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type DeleteOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "failures" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type ListOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "files_not_found" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type MkdirOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "milliseconds_taken" = 35
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "successes" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Rate for measurement "op_count" = 28.571 operations/sec
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Rate for measurement "successes" = 28.571 successes/sec
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type ReadOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "files_not_found" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type RenameOp
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "failures" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Basic report for operation type SliveMapper
2023-12-14 16:51:05,927 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,927 INFO slive.ReportWriter: Measurement "milliseconds_taken" = 4486
2023-12-14 16:51:05,928 INFO slive.ReportWriter: Measurement "op_count" = 8
2023-12-14 16:51:05,928 INFO slive.ReportWriter: Rate for measurement "op_count" = 1.783 operations/sec
2023-12-14 16:51:05,928 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,928 INFO slive.ReportWriter: Basic report for operation type TruncateOp
2023-12-14 16:51:05,928 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,928 INFO slive.ReportWriter: Measurement "files_not_found" = 1
2023-12-14 16:51:05,928 INFO slive.ReportWriter: Measurement "op_count" = 1
2023-12-14 16:51:05,928 INFO slive.ReportWriter: -------------
2023-12-14 16:51:05,928 INFO slive.SliveTest: Cleaning up job:
2023-12-14 16:51:05,928 INFO slive.SliveTest: Attempting to recursively delete /test/slive/slive
```
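The report is plain text, so the headline rates can be pulled out with standard tools, e.g.:

```bash
# Extract just the throughput lines from the Slive report file
# (path taken from the "Report results being placed" log line above).
grep 'Rate for measurement' /opt/hadoop/share/hadoop/mapreduce/part-0000
```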