1. Network Packet Processing

Anyone who has studied network communication knows that, at the bottom of the stack, data moves as individual packets (frames). In other words, what every network device forwards is nothing more than packet after packet of binary stream data. To the device or its driver this data has no intrinsic meaning; the device is only responsible for checking, processing, and forwarding it. Put plainly, it works like a logistics hub: it checks whether a parcel is damaged and where it is headed, then drops it onto the right conveyor belt. Packets on a network are handled the same way.

In the real world, when parcel volume explodes during a shopping festival such as Singles' Day, a logistics center has to respond: add people and machines, streamline its processes, or upgrade its equipment outright to robotic automated warehousing. The network world is no different. From software down to hardware there is a complete data-processing flow, covering application frameworks, algorithms, and the hardware handling discussed later.

So the first thing to sort out is which modules take part in network processing:

1) The input/output modules, the interfaces for network throughput:
   Packet input: packet reception
   Packet output: transmission by the hardware
2) The packet-processing modules:
   Pre-processing: coarse-grained packet handling
   Input classification: finer-grained classification and steering of packets
3) The data management and control modules:
   Ingress queuing: descriptor-based FIFO queues
   Delivery/Scheduling: scheduling based on queue priority and CPU state
   Accelerator: hardware functions such as encryption/decryption and compression/decompression
   Egress queuing: scheduling at the egress according to QoS class
4) Cleanup after the work is done:
   Post-processing: late-stage packet handling and buffer release

Once the modules are divided up by functional logic like this, the picture becomes clear at once, even clearer than a diagram.

2. Forwarding Application Frameworks

Talking about application frameworks means talking about forwarding models, and as soon as a model is mentioned the point is basically made: unless there is a clear technical breakthrough, the model rarely changes. Two models are used here.

1) The Pipeline model (Packet Framework)

The pipeline is easy to understand; a CPU's instruction pipeline uses exactly this model. It suits rhythmic, regular work very well. For example, CPU-bound and IO-bound work can each be handled by a different engine. In DPDK it can be viewed at two levels: zoomed out, the multi-core application framework; zoomed in, a single pipeline module.

Within these modules, packet processing in a pipeline is built from three parts: logical ports, lookup tables, and processing logic. A port serves as the input of each pipeline stage, the lookup table determines how a packet is to be handled, and the processing logic decides the actual treatment and the packet's final destination. Stacking such stages layer by layer forms a pipeline.

DPDK supports the following pipelines:
   Packet I/O
   Flow classification
   Firewall
   Routing
   Metering
   Traffic Mgmt

All of these pipelines can be put to use simply through configuration files. However, because of the constraints of the pipeline structure, this model is not easy to scale, and its multi-core support is not as good as RTC's.
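To make the port / lookup table / processing logic structure more tangible, here is a minimal plain-C sketch of a pipeline built from such stages. It is purely illustrative and is not the DPDK Packet Framework API; every type and function name in it (struct stage, pipeline_run, and so on) is invented for this example.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: one pipeline stage = input port + lookup table
 * + per-entry action, mirroring the three building blocks above. */
struct pkt {
    uint32_t flow_id;
    uint16_t len;
};

typedef int (*action_fn)(struct pkt *p);      /* processing logic   */

struct table_entry {
    uint32_t key;                             /* e.g. a flow id     */
    action_fn action;                         /* what to do         */
    int next_stage;                           /* where to go next   */
};

struct stage {
    const char *name;
    struct table_entry *tbl;                  /* lookup table       */
    int tbl_size;
};

static int act_forward(struct pkt *p) { printf("fwd flow %u\n", p->flow_id); return 0; }
static int act_drop(struct pkt *p)    { printf("drop flow %u\n", p->flow_id); return -1; }

/* Run one packet through the stacked stages until a stage drops it
 * or the chain ends: input port -> table lookup -> action -> next. */
static void pipeline_run(struct stage *stages, int n_stages, struct pkt *p)
{
    int cur = 0;
    while (cur >= 0 && cur < n_stages) {
        struct stage *s = &stages[cur];
        int next = -1;
        for (int i = 0; i < s->tbl_size; i++) {
            if (s->tbl[i].key == p->flow_id) {
                if (s->tbl[i].action(p) < 0)
                    return;                   /* packet dropped     */
                next = s->tbl[i].next_stage;
                break;
            }
        }
        cur = next;                           /* a table miss ends the run */
    }
}

int main(void)
{
    struct table_entry classify_tbl[] = { { 1, act_forward, 1 }, { 2, act_drop, -1 } };
    struct table_entry route_tbl[]    = { { 1, act_forward, -1 } };
    struct stage stages[] = {
        { "classify", classify_tbl, 2 },
        { "route",    route_tbl,    1 },
    };
    struct pkt p = { .flow_id = 1, .len = 64 };
    pipeline_run(stages, 2, &p);
    return 0;
}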
2) The run-to-completion model (RTC)

Anyone who has written network code will be reminded of IOCP (I/O completion ports) by this model, and the two really are very similar: both exist to exploit multiple cores to the fullest. RTC is particularly strong for data streams whose processing contexts are logically independent and can run in parallel; it can make full use of every core, dynamically assign the processing of each logical layer, and it scales easily.

In DPDK, command-line parameters can pin cores to threads, so that each RX/TX queue is bound to a logical core, which guarantees that a given packet is processed within a single thread. At the same time, the general-purpose processing units make programming simpler.

3) Comparing the two

From the analysis above we can conclude: for traffic that needs high parallelism but only light per-packet optimization, use the RTC model; otherwise use the Pipeline model. The former suits highly concurrent short connections, while the latter suits long-lived connections with continuous data processing, where optimization steps are easier to apply.
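As a rough sketch of the RTC shape (not code taken from any DPDK sample), the loop below assumes that port 0 is already configured with one RX/TX queue per worker lcore and that the queue id happens to equal the lcore id; error handling and port setup are omitted. The point it illustrates is that every packet is received, processed, and transmitted on the same core.

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Run-to-completion worker: each lcore owns one RX queue and carries
 * every packet it receives all the way to transmission, so a packet
 * never leaves the core (and the thread) it arrived on. */
static int rtc_worker(void *arg)
{
    (void)arg;
    /* Assumption: queue id == lcore id, set up at init time. */
    const uint16_t port  = 0;
    const uint16_t queue = (uint16_t)rte_lcore_id();
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* ... classify / modify bufs[i] here ... */
        }
        uint16_t nb_tx = rte_eth_tx_burst(port, queue, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);   /* drop what could not be sent */
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;
    /* port/queue configuration omitted for brevity */
    rte_eal_mp_remote_launch(rtc_worker, NULL, SKIP_MASTER);
    rte_eal_mp_wait_lcore();
    return 0;
}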
3. Related Algorithms

The algorithms involved are fairly simple. The main ones are the following.

1) Exact match

As the name says, the key either matches exactly or it does not. On the network side the usual tool is hashing; whichever hash function you pick, it is still a hash. Using a hash means dealing with collisions, and the two usual remedies are chaining and open addressing. This is well-trodden ground, so no more on it here.

DPDK likewise optimizes its hash computation: it takes care of byte alignment and then uses hardware instructions to compute the relevant checksums/hashes in one go, or falls back to lookup tables when the hardware support is unavailable, a classic trade of space for time.
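A minimal chained hash table in plain C illustrates the exact-match idea and the chaining way of resolving collisions. It is deliberately naive and is not DPDK's rte_hash; all names and the toy hash function are invented for the illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NB_BUCKETS 256   /* power of two, so a mask works as a cheap modulo */

/* One flow entry; a 5-tuple style key reduced to a single u64 for brevity. */
struct flow_entry {
    uint64_t key;
    uint32_t action;            /* e.g. output port             */
    struct flow_entry *next;    /* chaining resolves collisions */
};

static struct flow_entry *buckets[NB_BUCKETS];

/* Toy mixer; DPDK would use CRC32/jhash, possibly via a CPU instruction. */
static uint32_t hash_key(uint64_t key)
{
    key ^= key >> 33;
    key *= 0xff51afd7ed558ccdULL;
    key ^= key >> 33;
    return (uint32_t)key & (NB_BUCKETS - 1);
}

static void flow_add(uint64_t key, uint32_t action)
{
    uint32_t h = hash_key(key);
    struct flow_entry *e = malloc(sizeof(*e));
    e->key = key;
    e->action = action;
    e->next = buckets[h];       /* push onto the bucket's chain */
    buckets[h] = e;
}

static int flow_lookup(uint64_t key, uint32_t *action)
{
    for (struct flow_entry *e = buckets[hash_key(key)]; e; e = e->next)
        if (e->key == key) {    /* exact match: full key compare */
            *action = e->action;
            return 0;
        }
    return -1;
}

int main(void)
{
    uint32_t act;
    flow_add((0x0a000001ULL << 16) | 80, 3);   /* pretend 10.0.0.1:80 -> port 3 */
    if (flow_lookup((0x0a000001ULL << 16) | 80, &act) == 0)
        printf("hit, action %u\n", act);
    return 0;
}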
2) Longest prefix match

The Longest Prefix Matching (LPM) algorithm is the algorithm a router uses under the IP protocol to select an entry from its routing table. It is very common and appears frequently in cryptography and networking; LPM is the variant most often used in practice.
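The selection rule itself can be shown with a tiny routing table and a linear scan that keeps the deepest matching prefix. This is illustrative only; a real implementation such as DPDK's rte_lpm uses table/trie structures rather than a scan.

#include <stdint.h>
#include <stdio.h>

/* One IPv4 route: network prefix, prefix length and next hop. */
struct route {
    uint32_t prefix;
    uint8_t  depth;      /* prefix length in bits */
    uint32_t next_hop;
};

/* Keep the longest (deepest) prefix that covers the address. */
static int lpm_lookup(const struct route *tbl, int n, uint32_t ip,
                      uint32_t *next_hop)
{
    int best_depth = -1;
    for (int i = 0; i < n; i++) {
        uint32_t mask = tbl[i].depth ? ~0U << (32 - tbl[i].depth) : 0;
        if ((ip & mask) == (tbl[i].prefix & mask) &&
            (int)tbl[i].depth > best_depth) {
            best_depth = tbl[i].depth;
            *next_hop = tbl[i].next_hop;
        }
    }
    return best_depth >= 0 ? 0 : -1;
}

#define IPV4(a, b, c, d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))

int main(void)
{
    const struct route tbl[] = {
        { IPV4(10, 0, 0, 0),  8, 1 },   /* 10.0.0.0/8    -> hop 1 */
        { IPV4(10, 1, 0, 0), 16, 2 },   /* 10.1.0.0/16   -> hop 2 */
        { IPV4(0, 0, 0, 0),   0, 9 },   /* default route -> hop 9 */
    };
    uint32_t hop;
    if (lpm_lookup(tbl, 3, IPV4(10, 1, 2, 3), &hop) == 0)
        printf("next hop %u\n", hop);   /* prints 2: the /16 beats the /8 */
    return 0;
}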
3) The ACL algorithm

The ACL approach classifies incoming packets by consulting an access control library and applying classification rules. The ACL library matches on N-tuple rules and provides the following operations; a toy version of this lifecycle is sketched after the list:
   create the AC (access domain) context
   add rules to the AC context
   build the related structures for all the rules
   classify packets in the ingress direction
   destroy the AC-related resources
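The lifecycle listed above (create a context, add rules, classify, destroy) can be mirrored by a toy linear N-tuple matcher; the real librte_acl compiles the rules into a much faster runtime structure and classifies packets in batches. Every name below is invented for the illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* A toy 3-tuple rule: masked src/dst IPv4 plus a destination port. */
struct acl_rule {
    uint32_t src, src_mask;
    uint32_t dst, dst_mask;
    uint16_t dport;               /* 0 means "any port"     */
    uint32_t category;            /* classification result  */
};

struct acl_ctx {                  /* "create the AC context" */
    struct acl_rule *rules;
    int nb_rules, cap;
};

static struct acl_ctx *acl_create(int cap)
{
    struct acl_ctx *c = calloc(1, sizeof(*c));
    c->rules = calloc(cap, sizeof(*c->rules));
    c->cap = cap;
    return c;
}

static int acl_add_rule(struct acl_ctx *c, const struct acl_rule *r)
{                                 /* "add rules to the context" */
    if (c->nb_rules >= c->cap)
        return -1;
    c->rules[c->nb_rules++] = *r;
    return 0;
}

/* "classify ingress packets": the first matching rule wins. */
static int acl_classify(const struct acl_ctx *c, uint32_t src, uint32_t dst,
                        uint16_t dport, uint32_t *category)
{
    for (int i = 0; i < c->nb_rules; i++) {
        const struct acl_rule *r = &c->rules[i];
        if ((src & r->src_mask) == (r->src & r->src_mask) &&
            (dst & r->dst_mask) == (r->dst & r->dst_mask) &&
            (r->dport == 0 || r->dport == dport)) {
            *category = r->category;
            return 0;
        }
    }
    return -1;
}

static void acl_free(struct acl_ctx *c)   /* "destroy the resources" */
{
    free(c->rules);
    free(c);
}

int main(void)
{
    struct acl_ctx *c = acl_create(8);
    struct acl_rule web = { 0, 0, 0x0A000000, 0xFF000000, 80, 1 };
    acl_add_rule(c, &web);        /* any src -> 10.0.0.0/8:80 => category 1 */
    uint32_t cat;
    if (acl_classify(c, 0xC0A80001, 0x0A000001, 80, &cat) == 0)
        printf("matched category %u\n", cat);
    acl_free(c);
    return 0;
}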
4. Packet Distribution

DPDK provides a library and API for packet distribution. The principle is essentially that a distributor hands packets out to different workers, while the distributor itself takes the relevant data from the mbufs. Together this forms a complete distribution flow.
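A rough sketch of that distribution flow is shown below using plain rte_ring queues: the distributor core hashes each mbuf to a per-worker ring, and each worker consumes its own ring. It assumes RSS is enabled so that m->hash.rss is filled in, and it leaves out all port configuration and error handling; DPDK's actual librte_distributor adds per-flow affinity handling and batching on top of this basic idea.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define NB_WORKERS 4
#define BURST_SIZE 32

/* One software ring per worker, created at init time. */
static struct rte_ring *work_rings[NB_WORKERS];

/* Distributor core: pull bursts from the NIC and spread them out. */
static int distributor_loop(void *arg)
{
    (void)arg;
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(0, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++) {
            /* pick a worker by flow hash so one flow stays on one core */
            unsigned int w = bufs[i]->hash.rss % NB_WORKERS;
            if (rte_ring_enqueue(work_rings[w], bufs[i]) != 0)
                rte_pktmbuf_free(bufs[i]);   /* ring full: drop */
        }
    }
    return 0;
}

/* Worker core: consume its own ring and process to completion. */
static int worker_loop(void *arg)
{
    struct rte_ring *ring = arg;
    void *m;

    for (;;) {
        if (rte_ring_dequeue(ring, &m) == 0) {
            struct rte_mbuf *pkt = m;
            /* ... process pkt, then transmit or free ... */
            rte_pktmbuf_free(pkt);
        }
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;
    /* NIC/port setup omitted for brevity. */
    char name[32];
    for (int i = 0; i < NB_WORKERS; i++) {
        snprintf(name, sizeof(name), "work_ring_%d", i);
        work_rings[i] = rte_ring_create(name, 1024, rte_socket_id(),
                                        RING_F_SP_ENQ | RING_F_SC_DEQ);
    }
    int w = 0;
    unsigned int lcore;
    RTE_LCORE_FOREACH_SLAVE(lcore) {
        if (w < NB_WORKERS)
            rte_eal_remote_launch(worker_loop, work_rings[w++], lcore);
    }
    distributor_loop(NULL);   /* the master core acts as the distributor */
    return 0;
}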
5. Source Code

Here is the relevant source code in DPDK:

//dpdk-stable-19.11.14\lib\librte_eventdev
......
#include "rte_eventdev.h"
#include "rte_eventdev_pmd.h"

static struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];

struct rte_eventdev *rte_eventdevs = rte_event_devices;

static struct rte_eventdev_global eventdev_globals = {
    .nb_devs = 0
};

/* Event dev north bound API implementation */

uint8_t
rte_event_dev_count(void)
{
    return eventdev_globals.nb_devs;
}

int
rte_event_dev_get_dev_id(const char *name)
{
    int i;
    uint8_t cmp;

    if (!name)
        return -EINVAL;

    for (i = 0; i < eventdev_globals.nb_devs; i++) {
        cmp = (strncmp(rte_event_devices[i].data->name, name,
                RTE_EVENTDEV_NAME_MAX_LEN) == 0) ||
            (rte_event_devices[i].dev ?
            (strncmp(rte_event_devices[i].dev->driver->name, name,
                RTE_EVENTDEV_NAME_MAX_LEN) == 0) : 0);
        if (cmp && (rte_event_devices[i].attached == RTE_EVENTDEV_ATTACHED))
            return i;
    }
    return -ENODEV;
}

int
rte_event_dev_socket_id(uint8_t dev_id)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];

    return dev->data->socket_id;
}

int
rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];

    if (dev_info == NULL)
        return -EINVAL;

    memset(dev_info, 0, sizeof(struct rte_event_dev_info));

    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
    (*dev->dev_ops->dev_infos_get)(dev, dev_info);

    dev_info->dequeue_timeout_ns = dev->data->dev_conf.dequeue_timeout_ns;

    dev_info->dev = dev->dev;
    return 0;
}
......
int
rte_event_port_link(uint8_t dev_id, uint8_t port_id,
            const uint8_t queues[], const uint8_t priorities[],
            uint16_t nb_links)
{
    struct rte_eventdev *dev;
    uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
    uint16_t *links_map;
    int i, diag;

    RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    dev = &rte_eventdevs[dev_id];

    if (*dev->dev_ops->port_link == NULL) {
        RTE_EDEV_LOG_ERR("Function not supported\n");
        rte_errno = ENOTSUP;
        return 0;
    }

    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        rte_errno = EINVAL;
        return 0;
    }

    if (queues == NULL) {
        for (i = 0; i < dev->data->nb_queues; i++)
            queues_list[i] = i;

        queues = queues_list;
        nb_links = dev->data->nb_queues;
    }

    if (priorities == NULL) {
        for (i = 0; i < nb_links; i++)
            priorities_list[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;

        priorities = priorities_list;
    }

    for (i = 0; i < nb_links; i++)
        if (queues[i] >= dev->data->nb_queues) {
            rte_errno = EINVAL;
            return 0;
        }

    diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id],
                        queues, priorities, nb_links);
    if (diag < 0)
        return diag;

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    for (i = 0; i < diag; i++)
        links_map[queues[i]] = (uint8_t)priorities[i];

    return diag;
}

int
rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
              uint8_t queues[], uint16_t nb_unlinks)
{
    struct rte_eventdev *dev;
    uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
    int i, diag, j;
    uint16_t *links_map;

    RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0);
    dev = &rte_eventdevs[dev_id];

    if (*dev->dev_ops->port_unlink == NULL) {
        RTE_EDEV_LOG_ERR("Function not supported");
        rte_errno = ENOTSUP;
        return 0;
    }

    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        rte_errno = EINVAL;
        return 0;
    }

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);

    if (queues == NULL) {
        j = 0;
        for (i = 0; i < dev->data->nb_queues; i++) {
            if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
                all_queues[j] = i;
                j++;
            }
        }
        queues = all_queues;
    } else {
        for (j = 0; j < nb_unlinks; j++) {
            if (links_map[queues[j]] == EVENT_QUEUE_SERVICE_PRIORITY_INVALID)
                break;
        }
    }

    nb_unlinks = j;
    for (i = 0; i < nb_unlinks; i++)
        if (queues[i] >= dev->data->nb_queues) {
            rte_errno = EINVAL;
            return 0;
        }

    diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id],
                    queues, nb_unlinks);

    if (diag < 0)
        return diag;

    for (i = 0; i < diag; i++)
        links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;

    return diag;
}

int
rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        return -EINVAL;
    }

    /* Return 0 if the PMD does not implement unlinks in progress.
     * This allows PMDs which handle unlink synchronously to not implement
     * this function at all.
     */
    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlinks_in_progress, 0);

    return (*dev->dev_ops->port_unlinks_in_progress)(dev,
            dev->data->ports[port_id]);
}

int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
             uint8_t queues[], uint8_t priorities[])
{
    struct rte_eventdev *dev;
    uint16_t *links_map;
    int i, count = 0;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    if (!is_valid_port(dev, port_id)) {
        RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
        return -EINVAL;
    }

    links_map = dev->data->links_map;
    /* Point links_map to this port specific area */
    links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
    for (i = 0; i < dev->data->nb_queues; i++) {
        if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
            queues[count] = i;
            priorities[count] = (uint8_t)links_map[i];
            ++count;
        }
    }
    return count;
}

int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
                 uint64_t *timeout_ticks)
{
    struct rte_eventdev *dev;

    RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
    dev = &rte_eventdevs[dev_id];
    RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timeout_ticks, -ENOTSUP);

    if (timeout_ticks == NULL)
        return -EINVAL;

    return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
}
...

The core consists of two groups of APIs, one in rte_eventdev.c and the other in rte_service.c. The code above is the former; the latter looks like this:

#include "eal_private.h"

#define RTE_SERVICE_NUM_MAX 64

#define SERVICE_F_REGISTERED    (1 << 0)
#define SERVICE_F_STATS_ENABLED (1 << 1)
#define SERVICE_F_START_CHECK   (1 << 2)

/* runstates for services and lcores, denoting if they are active or not */
#define RUNSTATE_STOPPED 0
#define RUNSTATE_RUNNING 1

/* internal representation of a service */
struct rte_service_spec_impl {
    /* public part of the struct */
    struct rte_service_spec spec;

    /* atomic lock that when set indicates a service core is currently
     * running this service callback. When not set, a core may take the
     * lock and then run the service callback.
     */
    rte_atomic32_t execute_lock;

    /* API set/get-able variables */
    int8_t app_runstate;
    int8_t comp_runstate;
    uint8_t internal_flags;

    /* per service statistics */
    /* Indicates how many cores the service is mapped to run on.
     * It does not indicate the number of cores the service is running
     * on currently.
     */
    rte_atomic32_t num_mapped_cores;
    uint64_t calls;
    uint64_t cycles_spent;
} __rte_cache_aligned;

/* the internal values of a service core */
struct core_state {
    /* map of services IDs are run on this core */
    uint64_t service_mask;
    uint8_t runstate; /* running or stopped */
    uint8_t is_service_core; /* set if core is currently a service core */
    uint8_t service_active_on_lcore[RTE_SERVICE_NUM_MAX];
    uint64_t loops;
    uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
} __rte_cache_aligned;

static uint32_t rte_service_count;
static struct rte_service_spec_impl *rte_services;
static struct core_state *lcore_states;
static uint32_t rte_service_library_initialized;

int32_t
rte_service_init(void)
{
    if (rte_service_library_initialized) {
        RTE_LOG(NOTICE, EAL,
            "service library init() called, init flag %d\n",
            rte_service_library_initialized);
        return -EALREADY;
    }

    rte_services = rte_calloc("rte_services", RTE_SERVICE_NUM_MAX,
            sizeof(struct rte_service_spec_impl),
            RTE_CACHE_LINE_SIZE);
    if (!rte_services) {
        RTE_LOG(ERR, EAL, "error allocating rte services array\n");
        goto fail_mem;
    }

    lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
            sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
    if (!lcore_states) {
        RTE_LOG(ERR, EAL, "error allocating core states array\n");
        goto fail_mem;
    }

    int i;
    struct rte_config *cfg = rte_eal_get_configuration();
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (lcore_config[i].core_role == ROLE_SERVICE) {
            if ((unsigned int)i == cfg->master_lcore)
                continue;
            rte_service_lcore_add(i);
        }
    }
    rte_service_library_initialized = 1;
    return 0;
fail_mem:
    rte_free(rte_services);
    rte_free(lcore_states);
    return -ENOMEM;
}
......
static int32_t
service_runner_func(void *arg)
{
    RTE_SET_USED(arg);
    uint32_t i;
    const int lcore = rte_lcore_id();
    struct core_state *cs = &lcore_states[lcore];

    while (cs->runstate == RUNSTATE_RUNNING) {
        const uint64_t service_mask = cs->service_mask;

        for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
            if (!service_valid(i))
                continue;
            /* return value ignored as no change to code flow */
            service_run(i, cs, service_mask, service_get(i), 1);
        }

        cs->loops++;

        rte_smp_rmb();
    }

    /* Switch off this core for all services, to ensure that future
     * calls to may_be_active() know this core is switched off.
     */
    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
        cs->service_active_on_lcore[i] = 0;

    return 0;
}

int32_t
rte_service_lcore_count(void)
{
    int32_t count = 0;
    uint32_t i;
    for (i = 0; i < RTE_MAX_LCORE; i++)
        count += lcore_states[i].is_service_core;
    return count;
}

int32_t
rte_service_lcore_list(uint32_t array[], uint32_t n)
{
    uint32_t count = rte_service_lcore_count();
    if (count > n)
        return -ENOMEM;

    if (!array)
        return -EINVAL;

    uint32_t i;
    uint32_t idx = 0;
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        struct core_state *cs = &lcore_states[i];
        if (cs->is_service_core) {
            array[idx] = i;
            idx++;
        }
    }

    return count;
}

int32_t
rte_service_lcore_count_services(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -ENOTSUP;

    return __builtin_popcountll(cs->service_mask);
}

int32_t
rte_service_start_with_defaults(void)
{
    /* create a default mapping from cores to services, then start the
     * services to make them transparent to unaware applications.
     */
    uint32_t i;
    int ret;
    uint32_t count = rte_service_get_count();

    int32_t lcore_iter = 0;
    uint32_t ids[RTE_MAX_LCORE] = {0};
    int32_t lcore_count = rte_service_lcore_list(ids, RTE_MAX_LCORE);

    if (lcore_count == 0)
        return -ENOTSUP;

    for (i = 0; (int)i < lcore_count; i++)
        rte_service_lcore_start(ids[i]);

    for (i = 0; i < count; i++) {
        /* do 1:1 core mapping here, with each service getting
         * assigned a single core by default. Adding multiple services
         * should multiplex to a single core, or 1:1 if there are the
         * same amount of services as service-cores
         */
        ret = rte_service_map_lcore_set(i, ids[lcore_iter], 1);
        if (ret)
            return -ENODEV;

        lcore_iter++;
        if (lcore_iter >= lcore_count)
            lcore_iter = 0;

        ret = rte_service_runstate_set(i, 1);
        if (ret)
            return -ENOEXEC;
    }

    return 0;
}

static int32_t
service_update(struct rte_service_spec *service, uint32_t lcore,
        uint32_t *set, uint32_t *enabled)
{
    uint32_t i;
    int32_t sid = -1;

    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
        if ((struct rte_service_spec *)&rte_services[i] == service &&
                service_valid(i)) {
            sid = i;
            break;
        }
    }

    if (sid == -1 || lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    if (!lcore_states[lcore].is_service_core)
        return -EINVAL;

    uint64_t sid_mask = UINT64_C(1) << sid;
    if (set) {
        uint64_t lcore_mapped = lcore_states[lcore].service_mask &
            sid_mask;

        if (*set && !lcore_mapped) {
            lcore_states[lcore].service_mask |= sid_mask;
            rte_atomic32_inc(&rte_services[sid].num_mapped_cores);
        }
        if (!*set && lcore_mapped) {
            lcore_states[lcore].service_mask &= ~(sid_mask);
            rte_atomic32_dec(&rte_services[sid].num_mapped_cores);
        }
    }

    if (enabled)
        *enabled = !!(lcore_states[lcore].service_mask & (sid_mask));

    rte_smp_wmb();

    return 0;
}

int32_t
rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
{
    struct rte_service_spec_impl *s;
    SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    uint32_t on = enabled > 0;
    return service_update(&s->spec, lcore, &on, 0);
}

int32_t
rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
{
    struct rte_service_spec_impl *s;
    SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
    uint32_t enabled;
    int ret = service_update(&s->spec, lcore, 0, &enabled);
    if (ret == 0)
        return enabled;
    return ret;
}

static void
set_lcore_state(uint32_t lcore, int32_t state)
{
    /* mark core state in hugepage backed config */
    struct rte_config *cfg = rte_eal_get_configuration();
    cfg->lcore_role[lcore] = state;

    /* mark state in process local lcore_config */
    lcore_config[lcore].core_role = state;

    /* update per-lcore optimized state tracking */
    lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
}

int32_t
rte_service_lcore_reset_all(void)
{
    /* loop over cores, reset all to mask 0 */
    uint32_t i;
    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (lcore_states[i].is_service_core) {
            lcore_states[i].service_mask = 0;
            set_lcore_state(i, ROLE_RTE);
            lcore_states[i].runstate = RUNSTATE_STOPPED;
        }
    }
    for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
        rte_atomic32_set(&rte_services[i].num_mapped_cores, 0);

    rte_smp_wmb();

    return 0;
}

int32_t
rte_service_lcore_add(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;
    if (lcore_states[lcore].is_service_core)
        return -EALREADY;

    set_lcore_state(lcore, ROLE_SERVICE);

    /* ensure that after adding a core the mask and state are defaults */
    lcore_states[lcore].service_mask = 0;
    lcore_states[lcore].runstate = RUNSTATE_STOPPED;

    rte_smp_wmb();

    return rte_eal_wait_lcore(lcore);
}

int32_t
rte_service_lcore_del(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -EINVAL;

    if (cs->runstate != RUNSTATE_STOPPED)
        return -EBUSY;

    set_lcore_state(lcore, ROLE_RTE);

    rte_smp_wmb();
    return 0;
}

int32_t
rte_service_lcore_start(uint32_t lcore)
{
    if (lcore >= RTE_MAX_LCORE)
        return -EINVAL;

    struct core_state *cs = &lcore_states[lcore];
    if (!cs->is_service_core)
        return -EINVAL;

    if (cs->runstate == RUNSTATE_RUNNING)
        return -EALREADY;

    /* set core to run state first, and then launch otherwise it will
     * return immediately as runstate keeps it in the service poll loop
     */
    cs->runstate = RUNSTATE_RUNNING;

    int ret = rte_eal_remote_launch(service_runner_func, 0, lcore);
    /* returns -EBUSY if the core is already launched, 0 on success */
    return ret;
}
......

The code related to RTC and to the algorithms can be looked up in the source tree; it is not reproduced here.

6. Summary

When learning a feature like this, the most important thing is to grasp the overall logic and the processing flow. The algorithms and frameworks can be set aside at first; once the overall flow is well understood, going deeper into them afterwards gives a much better understanding and command of the related body of knowledge. Learning needs a method and a clear line of thought. Never plunge straight into the details at the start, spending a great deal of effort for very little gain.