CN111522643A - Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium - Google Patents

Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium

Info

Publication number
CN111522643A
CN111522643A
Authority
CN
China
Prior art keywords
queue
scheduled
scheduling
fpga
read pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010321678.5A
Other languages
Chinese (zh)
Inventor
黄锡军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd filed Critical Hangzhou DPTech Technologies Co Ltd
Priority to CN202010321678.5A
Publication of CN111522643A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides an FPGA-based multi-queue scheduling method and apparatus, a computer device, and a storage medium. The method comprises the following steps: storing cells of a plurality of channels one by one into a plurality of queues in one buffer space; when a cell is stored in a queue, determining that the queue is a queue to be scheduled; and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells. The method thereby ensures fair scheduling while effectively reducing the FPGA resources occupied when scheduling multiple queues.

Description

Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for multi-queue scheduling based on an FPGA, a computer device, and a storage medium.
Background
Multi-queue scheduling can be divided into priority scheduling and non-priority scheduling. In priority scheduling, the scheduling order is determined by the priority of each queue. In non-priority scheduling there is no priority division between queues, and each queue usually needs to be scheduled fairly to guarantee fairness. In the related art, when multiple non-priority queues need to be scheduled on an FPGA, an independent FIFO (First In, First Out) memory is used to buffer the cells of each queue, and the cells of the queues are dispatched uniformly in a preemptive scheduling manner. Although preemptive scheduling guarantees fairness and offers high scheduling performance, it increases code complexity and occupies more FPGA resources.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a multi-queue scheduling method and device based on an FPGA, computer equipment and a storage medium.
According to a first aspect of an embodiment of the present application, a method for multi-queue scheduling based on an FPGA is provided, where the method includes:
storing cells of a plurality of channels into a plurality of queues in a buffer space one by one;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
According to a second aspect of the embodiments of the present application, there is provided an FPGA-based multi-queue scheduling apparatus, including:
a storage module configured to store the cells of the plurality of channels one by one into a plurality of queues in one buffer space;
a determining module configured to determine, when a cell is stored in a queue, that the queue is a queue to be scheduled;
and a scheduling module configured to schedule the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
According to a third aspect of embodiments of the present application, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
storing cells of a plurality of channels into a plurality of queues in a buffer space one by one;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
storing cells of a plurality of channels into a plurality of queues in a buffer space one by one;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in this technical solution, the received cells of the multiple channels are stored one by one into a plurality of queues in one buffer space of the FPGA. When a queue is determined to be a queue to be scheduled, its read pointer is used to dispatch one cell from the queue. As cells are input into their corresponding queues one by one, the queues to be scheduled can be determined in sequence. After one queue completes a scheduling round, another queue among the queues to be scheduled is selected, and so on, until the scheduling of the multiple queues is completed. The solution thus completes the scheduling of multiple queues within a single buffer space, dispatching one cell from each queue in turn. It therefore ensures fair scheduling while effectively reducing the FPGA resources occupied when scheduling multiple queues.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application.
Fig. 2 is an application schematic diagram of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application.
Fig. 3 is an application diagram of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application.
Fig. 4 is a schematic structural diagram of a multi-queue scheduling apparatus based on an FPGA according to an exemplary embodiment of the present application.
Fig. 5 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The following describes a multi-queue scheduling method, apparatus, computer device and storage medium based on FPGA in detail with reference to the accompanying drawings. The features of the following examples and embodiments may be combined with each other without conflict.
The application provides a multi-queue scheduling method based on an FPGA. Fig. 1 is a schematic flowchart of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application. As shown in fig. 1, the FPGA-based multi-queue scheduling method includes the following steps 101 to 103:
step 101, storing cells of multiple channels one by one into multiple queues in a buffer space.
In this step, the FPGA may receive multi-channel cells from an external device; the cells of the multiple channels may be input to the FPGA through a serial interface. The FPGA may use a RAM (Random Access Memory) resource, input the cells one by one into one buffer space, and distribute them into multiple queues within that space. The channels correspond one-to-one with the queues, and one channel may contain multiple cells, so the cells of a channel are stored one by one into the same queue. In one example, Block RAM (block random access memory) resources may be used when a large buffer space is required. In another example, Distributed RAM resources may be used when less buffer space is needed.
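As an illustration of step 101 only (a software model, not the claimed FPGA implementation; the function name and data shapes are assumptions for the sketch), cells arriving as (channel, cell) pairs can be demultiplexed into per-channel queues:

```python
from collections import deque

def distribute(cells, num_queues):
    """Store cells of multiple channels one by one into per-channel queues.

    `cells` is a sequence of (channel_id, cell) pairs in arrival order;
    channel i maps one-to-one to queue i, so all cells of a channel land
    in the same queue and keep their arrival order.
    """
    queues = [deque() for _ in range(num_queues)]
    for channel_id, cell in cells:
        queues[channel_id].append(cell)  # cells of one channel stay in order
    return queues

# Two channels interleaved on the input: queue 0 receives a0 then a1.
queues = distribute([(0, "a0"), (1, "b0"), (0, "a1")], num_queues=2)
```

The model keeps only the ordering property the step relies on: within a channel, cells are stored in the order they arrived.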
Step 102, when a cell is stored in a queue, determining that the queue is a queue to be scheduled.
In this step, when a cell is stored in a queue, the queue can be considered to have a scheduling requirement; that is, the queue can be determined to be a queue to be scheduled. It should be understood that a queue in which no cell has been stored has no scheduling requirement.
It should be noted that each queue has a write pointer and a read pointer: when a cell is stored into the buffer space it is written by means of the write pointer, and it is later read out by means of the read pointer. When a cell is stored into a queue, the queue's write pointer is incremented by 1; when a cell is dispatched from a queue, the queue's read pointer is incremented by 1.
It should be understood that the storage space of each queue may be limited. When the stored cells fill the queue's storage space, the value of the write pointer reaches its maximum. Because the queue is scheduled dynamically, dispatching a cell also releases one storage slot, so subsequent cells can continue to be stored in the queue; once the write pointer has reached the maximum value corresponding to the queue's storage space, the next stored cell wraps it back to the starting value. For example, if a queue can store at most 7 cells, the write pointer is 1 when the first cell is stored and 7 when the queue holds 7 cells. If at least one cell is then dispatched, the queue can continue to store cells, and when the 8th cell is stored the write pointer returns to its initial value of 1. The read pointer wraps in the same way and is not described again here.
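The wraparound behavior of the pointers can be sketched as follows (Python, illustrative only; a 1-based pointer over a 7-cell queue, matching the example above):

```python
def advance(pointer, capacity):
    """Advance a 1-based queue pointer by 1, wrapping past `capacity`
    back to 1. Mirrors the example above: a 7-cell queue whose write
    pointer runs 1..7 and returns to 1 when the 8th cell is stored."""
    return 1 if pointer == capacity else pointer + 1

p = 0                   # before the first cell is stored
for _ in range(8):      # store 8 cells into a 7-cell queue
    p = advance(p, 7)
# after the 8th store, the write pointer has wrapped back to 1
```

The read pointer follows the same rule, so in hardware both can share one modulo-capacity increment.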
Step 103, scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
In this step, when a queue is determined to be a queue to be scheduled, its scheduling can be triggered: the queue's read pointer is obtained and used to dispatch and output one cell, completing one scheduling round for that queue. Because cells keep arriving in the buffer space while one queue is being scheduled, other queues that receive cells are likewise determined to be queues to be scheduled. When one queue completes a scheduling round and several queues are waiting, fairness requires that a queue not yet scheduled be served next. If multiple queues are waiting, their scheduling order can be determined by the time differences at which they stored cells (earliest first), and the next queue to be scheduled is then selected and scheduled in that order.
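The arrival-order discipline described above amounts to serving to-be-scheduled queues first-come, first-served. As a minimal sketch (Python, illustrative only; the function name is an assumption), the round order can be derived from the order in which queues first store a cell:

```python
from collections import deque

def schedule_order(arrival_events):
    """Derive the visiting order of step 103: queues are served in the
    order in which they first stored a cell (earliest first).

    `arrival_events` lists queue ids in cell-arrival order; a queue
    enters the to-be-scheduled set the first time it appears.
    """
    pending, seen = deque(), set()
    for qid in arrival_events:
        if qid not in seen:       # first cell for this queue
            seen.add(qid)
            pending.append(qid)   # FIFO of queues awaiting a round
    return list(pending)
```

In hardware this FIFO of queue numbers would itself be a small buffer; the sketch only shows the ordering rule.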
In the FPGA-based multi-queue scheduling method of this embodiment, the received cells of the multiple channels are stored one by one into multiple queues in one buffer space of the FPGA. When a queue is determined to be a queue to be scheduled, its read pointer is used to dispatch one cell from the queue. As cells are input into their corresponding queues one by one, the queues to be scheduled can be determined in sequence. After one queue completes a scheduling round, another queue among the queues to be scheduled is selected, and so on, until the scheduling of the multiple queues is completed. The solution completes the scheduling of multiple queues within one buffer space and dispatches one cell from each queue in turn, thereby ensuring fair scheduling while effectively reducing the FPGA resources occupied when scheduling multiple queues.
In an exemplary embodiment of the present application, the step of determining, when a cell is stored in a queue, that the queue is a queue to be scheduled includes: when a cell is stored in a queue, updating the queue state identifier of the queue to a non-empty state; and determining, based on the non-empty state, that the queue is a queue to be scheduled.
In this embodiment, the queue state identifier describes the empty state of a queue, that is, whether the queue holds cells waiting to be scheduled. By checking a queue's state identifier, it can be determined whether the queue has a scheduling requirement, i.e., whether it is a queue to be scheduled. In one example, the queue state identifier may be encoded as 0 or 1: when the queue is empty, the identifier is 0; when the queue is non-empty, the identifier is 1.
The empty or non-empty state of a queue can be derived from its write pointer and read pointer: when the values of the read pointer and the write pointer are equal, the queue is empty; when the read pointer lags behind the write pointer, the queue is non-empty.
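This pointer comparison can be sketched directly (Python, illustrative only; pointers are modeled as monotonically increasing counters, ignoring the wraparound discussed earlier):

```python
def queue_state(read_ptr, write_ptr):
    """Queue state identifier from the two pointers: 0 (empty) when
    they are equal, 1 (non-empty) when the read pointer lags the
    write pointer."""
    return 0 if read_ptr == write_ptr else 1
```

With wrapping pointers the hardware comparison needs an extra bit or an occupancy counter to distinguish empty from full; the sketch keeps only the empty/non-empty rule stated above.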
In order to more clearly understand the process of one-time scheduling, in an exemplary embodiment of the present application, the process of scheduling each queue to be scheduled may include the following steps:
1) reading a read pointer and a write pointer of a queue to be scheduled based on a queue number of the current queue to be scheduled;
2) determining whether the queue to be scheduled is in a non-empty state based on a read pointer and a write pointer of the queue to be scheduled;
3) if yes, dispatching and outputting one cell in the queue to be scheduled through the read pointer of the queue to be scheduled.
In this embodiment, the read pointer and write pointer of the corresponding queue can be read according to the queue number of the queue to be scheduled, and whether the queue is non-empty is confirmed again from the pointer values, so that an empty queue is never scheduled (a wasted, empty scheduling round). If the queue to be scheduled is non-empty, one cell is dispatched through the queue's read pointer and output, completing one scheduling round.
In an exemplary embodiment of the present application, if the read pointer and the write pointer of the queue to be scheduled are equal, that is, the queue to be scheduled is empty, no scheduling output is performed.
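For illustration only, the three-step scheduling pass above can be modeled in software as follows (Python; the function name is an assumption, pointers are treated as monotonic counters ignoring wraparound, and the per-queue buffer is modeled as a list indexed by pointer value, none of which is the patent's FPGA implementation):

```python
def schedule_once(qid, read_ptrs, write_ptrs, buffers):
    """One scheduling pass for queue `qid`: re-check that the queue is
    non-empty from its pointers, then dispatch one cell via the read
    pointer and advance that pointer by 1.

    Returns the dispatched cell, or None when the pointers are equal
    (queue empty), in which case no output is produced.
    """
    if read_ptrs[qid] == write_ptrs[qid]:
        return None                       # empty: skip this round
    cell = buffers[qid][read_ptrs[qid]]   # dispatch via the read pointer
    read_ptrs[qid] += 1                   # read pointer + 1 after dispatch
    return cell

reads, writes = [0], [2]
buffers = [["c0", "c1"]]
# two successful rounds are possible, then the queue is found empty
```

The re-check before dispatch is exactly what prevents the wasted empty rounds discussed above.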
To avoid a wasted scheduling round caused by a queue having become empty by the time it is scheduled, in an exemplary embodiment of the present application, after the read pointer of the queue to be scheduled is updated, the method further includes: determining whether to update the queue state identifier of the queue to be scheduled based on the queue's current write pointer and updated read pointer. Keeping the queue state up to date means that whether a queue is to be scheduled is decided from its latest state, which avoids empty scheduling rounds, saves time, and reduces resource occupation.
It should be understood that, because the queue-state update is delayed, some scheduling rounds may still find an empty queue. However, as throughput increases, cells accumulate in each queue; with a portion of cells buffered per queue, empty rounds occur less and less often and may eventually not occur at all.
In an exemplary embodiment of the present application, after the current write pointer and the updated read pointer of the queue to be scheduled are acquired, the method further includes: when the difference between the current write pointer and the updated read pointer is smaller than a set difference value, sending a reminder signal, where the reminder signal indicates that a queue is close to a full state, so that the order in which the multiple channels send cells can be adjusted. In this way, the near-full condition of a queue is fed back to the external device, which, once it has this information, can decide the output of each channel; this avoids the blocking that would occur while cells queue up waiting for a full queue in the buffer space to release storage.
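The near-full reminder can be sketched as a free-slot check. The patent states the condition as a pointer-difference comparison; the model below (illustrative Python, with a hypothetical `threshold` parameter standing in for the "set difference value" and occupancy computed as write minus read, ignoring wraparound) signals when fewer than `threshold` slots remain free:

```python
def almost_full(write_ptr, read_ptr, capacity, threshold=1):
    """Reminder signal for a queue close to a full state: occupancy is
    the write/read pointer difference; signal when the remaining free
    space drops below `threshold` slots."""
    free = capacity - (write_ptr - read_ptr)  # unused slots in the queue
    return free < threshold
```

Raising the signal one or more slots before the queue is actually full gives the external device time to reorder the channels' output before cells start blocking.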
In order to more clearly understand the technical solution of the present application, fig. 2 is an application schematic diagram of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application. As shown in fig. 2, the FPGA-based multi-queue scheduling method of the present application may be applied to multi-channel scheduling of DMA (Direct Memory Access): a CPU stores the cells of multiple channels into multiple queues, sends the cells to the FPGA through a PCIE (Peripheral Component Interconnect Express) interface, where they are stored in the multiple queues of one buffer space, and the DMA engine schedules the multiple queues in sequence to output the cells of each queue.
When the number of queues is large, scheduling them directly lets each queue occupy considerable FPGA RAM resources. In an exemplary embodiment of the present application, the queues can therefore be grouped: one queue in a group is scheduled each time, and the groups are scheduled in sequence, which preserves fairness. Each time a group completes a scheduling round, the position of the queue just scheduled is recorded; at the group's next round, the queue to be scheduled is chosen from the queues other than the one recorded as last scheduled.
Fig. 3 is an application diagram of a multi-queue scheduling method based on an FPGA according to an exemplary embodiment of the present application. As shown in fig. 3, there are 16 queues to be scheduled. In this embodiment, exploiting the 6-input characteristic of a Look-Up Table (LUT), the queue status lines are divided into groups of 4 bits, and each group records its last scheduled position with 2 bits, so a group's decision (4 status bits plus 2 position bits) fits in one 6-input LUT. This saves LUTs while preserving maximum scheduling performance, and registers can be inserted between the stages of the multi-level LUT cascade to improve scheduling timing.
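The per-group arbitration of fig. 3 can be illustrated with a small software model. The sketch below (Python, illustrative only; the patent implements this combinationally in 6-input LUTs, not in software, and the function name is an assumption) picks the next non-empty queue within one 4-queue group, starting just after the group's 2-bit last-scheduled position:

```python
def next_in_group(status_bits, last_pos):
    """Round-robin choice inside a 4-queue group.

    `status_bits`: 4-bit mask, bit i set when queue i of the group is
    non-empty. `last_pos`: 2-bit position of the last scheduled queue.
    Returns the in-group index of the next queue to schedule, or None
    if the whole group is empty. The 6 inputs (4 status + 2 position
    bits) match one 6-input LUT.
    """
    for step in range(1, 5):                 # scan the 4 positions after last_pos
        candidate = (last_pos + step) % 4
        if status_bits & (1 << candidate):   # non-empty queue found
            return candidate
    return None                              # group empty: skip this group
```

A second level of the same logic then arbitrates among the 4 groups, giving the multi-level LUT cascade that the inserted registers pipeline.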
The application also provides a multi-queue scheduling device based on the FPGA. Fig. 4 is a schematic structural diagram of a multi-queue scheduling apparatus based on an FPGA according to an exemplary embodiment of the present application. As shown in fig. 4, the apparatus 40 includes:
a storage module 410 configured to store the cells of the plurality of channels one by one into a plurality of queues in one buffer space;
a determining module 420 configured to determine that a queue is a queue to be scheduled when a cell is stored in the queue;
and a scheduling module 430 configured to schedule the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
In an exemplary embodiment of the present application, the determining module includes:
a first updating submodule configured to update a queue state flag of a queue to a non-empty state when a cell is stored in the queue;
a first determination submodule configured to determine that the queue is a queue to be scheduled based on the non-empty state.
In an exemplary embodiment of the application, the first update submodule is further configured to increment a write pointer of a queue by 1 when a cell is stored in the queue.
In an exemplary embodiment of the present application, the scheduling module includes:
the reading sub-module is configured to read a read pointer and a write pointer of a queue to be scheduled based on a queue number of the current queue to be scheduled;
a second determining submodule configured to determine whether the queue to be scheduled is in a non-empty state based on the read pointer and the write pointer of the queue to be scheduled;
and the scheduling submodule is configured to, if so, invoke and output one cell in the queue to be scheduled through the read pointer of the queue to be scheduled.
In an exemplary embodiment of the present application, the scheduling module further includes:
and a second updating submodule configured to add 1 to the read pointer of the queue to be scheduled after a cell of the queue to be scheduled is dispatched and output.
In an exemplary embodiment of the present application, the scheduling module further includes:
and the third updating submodule is configured to determine whether to update the queue state identifier of the queue to be scheduled based on the current write pointer and the updated read pointer of the queue to be scheduled after the read pointer of the queue to be scheduled is updated.
In an exemplary embodiment of the present application, the scheduling module further includes:
and a sending module configured to, after the current write pointer and the updated read pointer of the queue to be scheduled are acquired, send a reminder signal when the difference between them is smaller than a set difference value, the reminder signal indicating that a queue is close to a full state so that the order in which the multiple channels send cells can be adjusted.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The application also provides a computer device. Fig. 5 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application. As shown in fig. 5, the computer device 50 comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
storing cells of a plurality of channels into a plurality of queues in a buffer space one by one;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
The present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
storing cells of a plurality of channels into a plurality of queues in a buffer space one by one;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
and scheduling the queues to be scheduled in sequence, using each queue's read pointer, according to the time differences at which the queues stored cells.
Embodiments of the present application may take the form of a computer program product embodied on one or more readable media having program code embodied therein, including but not limited to disk storage, CD-ROM, and optical storage. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer-readable media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A multi-queue scheduling method based on FPGA is characterized by comprising the following steps:
storing cells of a plurality of channels into a plurality of queues in a buffer space in a one-to-one correspondence;
when a cell is stored in a queue, determining that the queue is a queue to be scheduled;
scheduling each queue to be scheduled in sequence, using its read pointer, according to the time order in which the queues stored cells.
2. The FPGA-based multi-queue scheduling method of claim 1, wherein the step of determining that a queue is a queue to be scheduled when a cell is stored in the queue comprises:
when a cell is stored in a queue, updating the queue state identifier of the queue to be in a non-empty state;
determining that the queue is a queue to be scheduled based on the non-empty state.
3. The FPGA-based multi-queue scheduling method of claim 2, wherein, when a cell is stored in a queue, the method further comprises:
adding 1 to the write pointer of the queue.
4. The FPGA-based multi-queue scheduling method of claim 3, wherein the step of scheduling each queue to be scheduled comprises:
reading a read pointer and a write pointer of a queue to be scheduled based on a queue number of the current queue to be scheduled;
determining whether the queue to be scheduled is in a non-empty state based on a read pointer and a write pointer of the queue to be scheduled;
if so, fetching and outputting a cell of the queue to be scheduled through the read pointer of the queue to be scheduled.
5. The FPGA-based multi-queue scheduling method of claim 4, further comprising, after a cell of the queue to be scheduled is fetched and output:
adding 1 to the read pointer of the queue to be scheduled.
6. The FPGA-based multi-queue scheduling method of claim 5, further comprising, after updating the read pointer of the queue to be scheduled:
determining whether to update the queue state identifier of the queue to be scheduled based on the current write pointer and the updated read pointer of the queue to be scheduled.
7. The FPGA-based multi-queue scheduling method of claim 6, after obtaining the current write pointer and the updated read pointer of the queue to be scheduled, further comprising:
when the difference between the current write pointer and the updated read pointer is smaller than a set threshold, sending a warning signal, wherein the warning signal indicates that a queue is close to a full state, so that the order in which the plurality of channels send cells can be adjusted.
8. An FPGA-based multi-queue scheduling apparatus, comprising:
a storage module configured to store cells of a plurality of channels into a plurality of queues in a buffer space in a one-to-one correspondence;
a determining module configured to determine that a queue is a queue to be scheduled when a cell is stored in the queue;
a scheduling module configured to schedule each queue to be scheduled in sequence, using its read pointer, according to the time order in which the queues stored cells.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the FPGA-based multi-queue scheduling method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the steps of the FPGA-based multi-queue scheduling method according to any one of claims 1 to 7.
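The method of claims 1 to 7 can be illustrated with a small software model. The sketch below is a hypothetical Python rendering of the claimed behavior (the class and method names are the editor's own; the patent targets FPGA logic such as block-RAM FIFOs, not software): each queue keeps a read pointer, a write pointer, and a non-empty state identifier, queues are serviced in the order in which they received cells, and a warning is returned when a queue approaches a full state.

```python
from collections import deque


class MultiQueueScheduler:
    """Software sketch of the claimed FPGA multi-queue scheduling method.

    Each channel maps one-to-one to a queue with a read pointer and a
    write pointer (claims 1-6). Queues are serviced in the order in
    which they received cells (claim 1), and a warning is returned
    when a queue is close to full (claim 7). Names are illustrative.
    """

    def __init__(self, num_queues, depth, near_full_margin=2):
        self.depth = depth
        self.near_full_margin = near_full_margin
        self.buffers = [[None] * depth for _ in range(num_queues)]
        self.read_ptr = [0] * num_queues
        self.write_ptr = [0] * num_queues
        self.non_empty = [False] * num_queues  # queue state identifier
        self.pending = deque()                 # queues awaiting scheduling, in arrival order

    def enqueue(self, qid, cell):
        """Store a cell, add 1 to the write pointer, mark the queue non-empty."""
        wp = self.write_ptr[qid]
        self.buffers[qid][wp % self.depth] = cell
        self.write_ptr[qid] = wp + 1           # claim 3: write pointer + 1
        if not self.non_empty[qid]:            # claim 2: update the state identifier
            self.non_empty[qid] = True
            self.pending.append(qid)
        # claim 7: warn when free space falls below the set margin
        # (overflow itself is not guarded here; a real design would backpressure)
        if self.depth - (self.write_ptr[qid] - self.read_ptr[qid]) < self.near_full_margin:
            return "near_full"
        return "ok"

    def schedule_one(self):
        """Service the next queue to be scheduled; return its cell, or None if idle."""
        while self.pending:
            qid = self.pending.popleft()
            rp, wp = self.read_ptr[qid], self.write_ptr[qid]
            if rp == wp:                       # claim 4: re-check non-empty via the pointers
                self.non_empty[qid] = False
                continue
            cell = self.buffers[qid][rp % self.depth]
            self.read_ptr[qid] = rp + 1        # claim 5: read pointer + 1
            if self.read_ptr[qid] == wp:       # claim 6: decide whether to clear the identifier
                self.non_empty[qid] = False
            else:
                self.pending.append(qid)       # still non-empty; schedule again later
            return cell
        return None
```

In hardware, the `pending` deque would correspond to a small FIFO of queue numbers written at enqueue time, which is one plausible way to preserve the arrival order that claim 1 schedules by.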
CN202010321678.5A 2020-04-22 2020-04-22 Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium Pending CN111522643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010321678.5A CN111522643A (en) 2020-04-22 2020-04-22 Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111522643A true CN111522643A (en) 2020-08-11

Family

ID=71903432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010321678.5A Pending CN111522643A (en) 2020-04-22 2020-04-22 Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111522643A (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1474568A (en) * 2002-08-06 2004-02-11 华为技术有限公司 Direct memory access system and method for multi-channel data
CN1645839A (en) * 2005-01-25 2005-07-27 南开大学 Communicating network exchanging system and controlling method based on parallel buffer structure
CN101222422A (en) * 2007-09-28 2008-07-16 东南大学 Just expandable network scheduling method
CN101291546A (en) * 2008-06-11 2008-10-22 清华大学 Switching structure coprocessor of core router
CN102088412A (en) * 2011-03-02 2011-06-08 华为技术有限公司 Exchange unit chip, router and transmission method of cell information
CN102437929A (en) * 2011-12-16 2012-05-02 华为技术有限公司 Method and device for de-queuing data in queue manager
CN103873550A (en) * 2012-12-10 2014-06-18 罗伯特·博世有限公司 Method for data transmission among ecus and/or measuring devices
CN104052831A (en) * 2014-06-11 2014-09-17 华为技术有限公司 Data transmission method and device based on queues and communication system
CN104125168A (en) * 2013-04-27 2014-10-29 中兴通讯股份有限公司 A scheduling method and system for shared resources
CN105573711A (en) * 2014-10-14 2016-05-11 深圳市中兴微电子技术有限公司 Data caching methods and apparatuses
CN106354673A (en) * 2016-08-25 2017-01-25 北京网迅科技有限公司杭州分公司 Data transmission method and device based on a plurality of DMA queues
CN107509127A (en) * 2017-07-27 2017-12-22 中国船舶重工集团公司第七二四研究所 Adaptive polling scheduling method for multi-fiber input queues
CN109495400A (en) * 2018-10-18 2019-03-19 中国航空无线电电子研究所 Fiber optic network interchanger
CN109962859A (en) * 2017-12-26 2019-07-02 北京华为数字技术有限公司 A kind of method for dispatching message and equipment
CN110011936A (en) * 2019-03-15 2019-07-12 北京星网锐捷网络技术有限公司 Thread scheduling method and device based on multi-core processor
US10402223B1 (en) * 2017-04-26 2019-09-03 Xilinx, Inc. Scheduling hardware resources for offloading functions in a heterogeneous computing system
CN110520853A (en) * 2017-04-17 2019-11-29 微软技术许可有限责任公司 The queue management of direct memory access
CN110837410A (en) * 2019-10-30 2020-02-25 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and computer readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈仲谋: "WRR算法能为业务流提供最佳的QoS保证", 《科技信息》, 10 June 2007 (2007-06-10), pages 1 - 2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559041A (en) * 2020-12-07 2021-03-26 山东航天电子技术研究所 Compatible processing method for ground direct instruction and satellite autonomous instruction
CN112559041B (en) * 2020-12-07 2022-06-07 山东航天电子技术研究所 Compatible processing method for ground direct instruction and satellite autonomous instruction
GB2617792A (en) * 2021-07-21 2023-10-18 Boe Technology Group Co Ltd Display panel and display device
CN114245469A (en) * 2022-02-23 2022-03-25 南京风启科技有限公司 Multi-stage scheduling method and system supporting multiple time periods

Similar Documents

Publication Publication Date Title
CN111522643A (en) Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium
CN111400022A (en) Resource scheduling method and device and electronic equipment
CN109857342B (en) Data reading and writing method and device, exchange chip and storage medium
KR102238034B1 (en) Techniques for Behavioral Pairing in Task Assignment System
US10209924B2 (en) Access request scheduling method and apparatus
RU2641250C2 (en) Device and method of queue management
JP2020502647A (en) Heterogeneous event queue
CN112099975A (en) Message processing method and system, and storage medium
KR20180103589A (en) Scheduling method and scheduler for switching
CN105677744A (en) Method and apparatus for increasing service quality in file system
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN109905331B (en) Queue scheduling method and device, communication equipment and storage medium
CN108595315B (en) Log collection method, device and equipment
CN117234691A (en) Task scheduling method and device
CN111190541B (en) Flow control method of storage system and computer readable storage medium
US11221971B2 (en) QoS-class based servicing of requests for a shared resource
CN112579271A (en) Real-time task scheduling method, module, terminal and storage medium for non-real-time operating system
CN115695330B (en) Scheduling system, method, terminal and storage medium for shreds in embedded system
CN115202842A (en) Task scheduling method and device
CN116107635A (en) Command distributor, command distribution method, scheduler, chip, board card and device
CN112532531B (en) Message scheduling method and device
CN107911317B (en) Message scheduling method and device
CN109426562B (en) priority weighted round robin scheduler
CN114401235B (en) Method, system, medium, equipment and application for processing heavy load in queue management
CN115454889B (en) Storage access scheduling method, system and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination