Research on the Fiber-to-the-Room Network Traffic Prediction Method Based on Crested Porcupine Optimizer Optimization (2024)

1. Introduction

With the rapid evolution of the Internet—coupled with the widespread adoption of online education, e-commerce, high-definition video streaming, and similar applications—there has been a significant surge in demand for network bandwidth and enhanced quality of service. This surge has, in turn, accelerated the expansion of fiber-optic broadband networks. However, extensive research has found that although optical fiber has reached the home, it currently terminates only at the household information box. As a result, operators attempting to provide high-bandwidth services cannot fully exploit the capacity of the fiber, producing the status quo in which "gigabit to the home is easy, but a hundred megabits to the room is difficult." In response to these challenges, the telecom industry has proposed the concept of "fiber-to-the-room (FTTR)" connectivity [1], which aims to ensure high-speed network access within residential premises through the convergence of FTTR and Wi-Fi to achieve seamless network roaming.

To facilitate seamless roaming, it is imperative to allocate network resources judiciously, ensuring minimal conflict or loss, thereby maximizing bandwidth utilization and delivering stable and reliable network connectivity services to users. Accurate analyses and predictions of network traffic can serve as effective mechanisms for optimizing resource utilization. By predicting network traffic accurately, operators can dynamically allocate resources, enhance resource utilization efficiency, and mitigate wastage. Currently, network traffic prediction methods are primarily studied in terms of three categories: traditional analysis, machine learning, and deep learning.

The traditional analysis methods mainly include the classical Auto Regressive (AR) [2] and Moving Average (MA) [3] models, which have been applied to learning the linear characteristics of network traffic; in addition, these models were later combined into the Auto Regressive Moving Average (ARMA) [4] model, which unifies AR and MA to obtain more accurate predictions than either alone. Subsequently, the Autoregressive Integrated Moving Average (ARIMA) [5] model was developed, building upon the foundations laid by ARMA. These traditional methods, owing to their simplicity, have found widespread application in network traffic prediction.

However, they are limited to forecasting smooth time series, whereas real network traffic often exhibits non-smooth patterns. In response to the limitations of traditional methods, scholars have proposed machine learning methods, including K-Nearest Neighbor (KNN) [6], Extreme Gradient Boosting (XGBoost) [7], Support Vector Regression (SVR) [8], Extreme Learning Machine (ELM) [9], etc. While these methods offer improved capabilities for predicting non-stationary sequences, their performance diminishes on larger datasets, and they rely heavily on manual feature engineering, which is time-intensive. In contrast, deep learning approaches mitigate this manual dependency and have gained traction in network traffic prediction. Recurrent Neural Network (RNN) [10], long short-term memory (LSTM) [11], Gated Recurrent Unit (GRU) [12], etc., are commonly employed deep learning models. These models exhibit enhanced capability in capturing intricate temporal dependencies within network traffic data, thus offering superior prediction performance compared to traditional and machine learning methods. Additionally, deep learning techniques demonstrate scalability and adaptability to diverse network environments, making them promising candidates for advancing the field of network traffic prediction.

Additionally, existing network models still exhibit certain limitations. The hyperparameter settings of these models are often derived from prior research and may be somewhat subjective. Optimal hyperparameters can enhance network topology and improve generalization capabilities. Commonly utilized optimization algorithms for these models include Gradient-Based Optimizer (GBO), Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), etc. M. Abdel-Basset et al. [13] introduced the Crested Porcupine Optimizer (CPO) and conducted a comparative analysis against GBO, GWO, WOA, Equilibrium Optimization (EO), etc. This comparison utilized three standard CEC test functions and six real-world engineering problems. The results indicated that CPO outperforms other algorithms across various performance metrics, such as convergence speed and accuracy, in the majority of cases.

However, existing traffic prediction models, such as the LSTM model proposed by Zhuo et al. [14] and the TCN model adopted by Zhang et al. [15], mainly focus on the forward data characteristics of network traffic. These models ignore the implicit information embedded in reverse data and therefore cannot comprehensively capture the dynamic characteristics of network traffic. This, in turn, reduces the prediction accuracy of traffic models, which ultimately fails to meet the performance requirements of FTTR networks. Furthermore, existing research predominantly focuses on single-step prediction, which only anticipates traffic conditions for the next time step. Conversely, multi-step prediction not only offers a more comprehensive perspective for FTTR network resource allocation, thus enabling better decision making, but it also reduces the frequency of scheduling operations by forecasting traffic across multiple future polling cycles. Consequently, the number of REPORT request frames and GATE authorization frames exchanged during bandwidth allocation is minimized, facilitating the efficient utilization of the uplink channel and enhancing overall resource utilization.

In addition, since FTTR was proposed only recently, academic research related to it is still in the exploration and development stage, and studies at this stage have proposed that traffic prediction can be applied to FTTR networks as a strategy. For example, Liu [16] estimated the size of TCP service traffic through traffic prediction, determined whether the network would become congested, and then determined appropriate treatments in advance according to the judgment results. Wu et al. [17] intelligently identified service types such as games, downloads, and videos in the network traffic by embedding an eAI module to ensure that high-priority services could be transmitted preferentially. Zhang et al. [18] studied the link layer, predicting the size of physical-layer data unit messages and reporting the predicted message size to the main gateway in advance to reduce the end-to-end transmission delay of FTTR messages. Although the above studies suggest that traffic prediction methods can be used in FTTR, they did not expand on which prediction models can be used, so further research is needed.

In this paper, building upon traditional single-step prediction approaches, we enhance the existing prediction model by introducing a novel traffic prediction model. Our proposed model integrates CPO with BiTCN and BiLSTM, and it is also complemented with a self-attention (SA) mechanism. BiTCN is adept at capturing the local features within time-series data, while BiLSTM excels at capturing long-term dependencies. The incorporation of the SA mechanism enables the model to selectively focus on the key features within the time series data that are relevant to the prediction target. Experimental results demonstrate the superior performance of the proposed BiTCN-BiLSTM-SA model across multiple datasets. The key innovations of this paper are as follows:

(1) We propose a hybrid model, BiTCN-BiLSTM-SA, to tackle the issue of limited information extraction capacity in unidirectional networks. BiTCN is employed to capture the implicit relationships between traffic input and output data, which are then fed into BiLSTM for prediction. Additionally, the SA mechanism is integrated to amplify the impact of the crucial information on BiLSTM output, thereby enhancing the model’s prediction accuracy.

(2) Traditional TCN is used to perform forward convolutional computations on the input sequences for traffic prediction, thus overlooking the potential influence of backward information on prediction outcomes. This study addresses this limitation by introducing BiTCN, which integrates both forward and backward information, thus enhancing the extraction of the features from network traffic sequences.

(3) The application of CPO to network traffic prediction solves the challenges of the BiTCN-BiLSTM-SA model with its high randomness and difficulty in selecting hyperparameters, and it also improves the accuracy of traffic prediction.

(4) Building upon single-step prediction, multi-step prediction is introduced to enhance the scheduling efficiency of the control frames in bandwidth allocation and to optimize the utilization of network resources.

2. FTTR Architecture Description

FTTR adopts the full network grouping method, as shown in Figure 1, which mainly contains the FTTR master gateway, FTTR slave gateway, and supporting optical components such as splitters. The functions of each component are described below.

The FTTR master gateway is a new integrated device, located between the OLT and the slave gateways, that the operator is responsible for maintaining. It connects upward to the external network via optical fiber to provide FTTH fiber to the home, and downward it provides optical ports that connect the slave gateways in each room via P2MP to form an FTTR network, as well as user-side interfaces, such as Ethernet and Wi-Fi, for communicating with terminal equipment. Compared with FTTH, network deployment is more flexible, as the master gateway can be continuously expanded downward through the slave gateways, which connect to user equipment by wired or wireless means and to the FTTR master gateway upward by optical fiber. Slave gateways can set different service priorities according to the specific services they carry, and they can be created with exclusive functions, such as dedicated gaming slave gateways, HD video slave gateways, and teleconferencing slave gateways. Usually, FTTR slave gateways are equipped with AP functions to provide Wi-Fi access points for wireless devices. Splitters sit between the FTTR master and slave gateways and connect the master gateway to multiple slave gateway devices, splitting one optical fiber into multiple branches to distribute downlink data and aggregate uplink data (similar to the role of a switch).

FTTR networks can connect up to 256 terminal devices, which is about eight times the maximum number of connections of a traditional network, and they can support a variety of whole-house smart terminal networking. However, an increase in the number of terminals also brings some new problems.

(1) Increased Traffic Volatility: The proliferation of terminals implies a potential increase in network traffic volatility as more devices simultaneously send and receive data.

(2) Complexity in Traffic Patterns: Varied traffic demands and usage patterns among different terminals render traffic patterns more complex and harder to predict.

(3) Decline in Prediction Accuracy: With the expansion of network scale, traditional traffic prediction models may struggle to accurately capture the traffic variations of all terminals, thus leading to decreased prediction accuracy and further exacerbating uneven bandwidth allocation within the network.

Hence, in the context of FTTR, it is imperative to address the aforementioned issues by employing deep learning to construct traffic prediction models, thereby enhancing prediction accuracy and real-time performance. Simultaneously, the introduction of optimization algorithms for intelligent parameter tuning can reduce manual intervention, thus facilitating the intelligent management of FTTR networks.

3. Materials and Methods

In this section, we offer an in-depth overview of the structure of the proposed CPO-BiTCN-BiLSTM-SA traffic prediction model and elucidate the functionalities of its constituent components.

3.1. The Structure of the Proposed Model

The architecture of CPO-BiTCN-BiLSTM-SA comprises two main components (as illustrated in Figure 2): the model building section and algorithm optimization section. It can be specifically described as detailed below.

(1) BiTCN is introduced to address the limitation of traditional TCNs, which typically extract only forward information in network traffic. BiTCN is detailed in Section 3.2.

(2) The BiLSTM network is introduced to further augment the model’s capability to learn from long-term network traffic sequences and extract bidirectional information. BiLSTM is elaborated upon in Section 3.3.

(3) The SA layer is introduced to assign weights to the traffic sequences output from the BiLSTM layer, thus enhancing the network’s focus on important features. The SA mechanism is discussed in Section 3.4.

(4) The aim in introducing CPO is to obtain optimal hyperparameters for the BiTCN-BiLSTM-SA model, thereby reducing the need for manual parameter tuning. Details regarding the utilization of CPO are provided in Section 3.5.

(5) The performance of the CPO-BiTCN-BiLSTM-SA network traffic prediction model is compared with seven benchmark models to illustrate its advantages. Specific experimental results are presented in Section 4.

3.2. Bidirectional Temporal Convolutional Networks

TCN comprises three primary components: causal convolution, dilated convolution, and residual connections [19]. These components can amalgamate the strengths of CNN and RNN. Nevertheless, traditional TCN solely conducts forward convolution operations on input sequences, thereby capturing only the forward data features of traffic sequences while disregarding potential hidden features in the reverse direction. Consequently, this article introduces bidirectional convolution to capture bidirectional data features [20]. Compared to TCN, BiTCN enhances the feature fusion effect on the bidirectional time-series information of network traffic by processing past and future data separately through two temporal convolutional networks, which enables the model to consider both forward and backward information in the time series, as well as capture the dependencies in the time series more comprehensively. The architecture of the bidirectional dilated causal convolution is depicted in Figure 3.

The causal property of causal convolution dictates that an input traffic sequence (x_0, x_1, …, x_τ) traverses the network to produce outputs (y_0, y_1, …, y_τ), with each output at a given time step depending only on the inputs at or before that time point. To address the network's limitations in capturing long-range dependency information in traffic data, dilated convolution is introduced. This allows a larger receptive field to be acquired with fewer layers, thus facilitating the efficient extraction of deeper information from local to global scales and enhancing the model's expressive capacity. Simultaneously, this approach renders the network more lightweight and enhances computational efficiency. For one-dimensional data [x_0, x_1, …, x_τ], the output value of the hidden layer at time step t is given by Equation (1).

h(t) = \sum_{i=0}^{S-1} f(i) \cdot x_{t - d \cdot i},

where f(i) denotes the convolution kernel element at position i, S represents the size of the convolution kernel, and d denotes the dilation factor.
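For intuition, Equation (1) can be sketched directly in Python (an illustrative NumPy example, not the paper's MATLAB implementation; the kernel values and dilation factor below are arbitrary, and positions before the start of the sequence are treated as zero padding):

```python
import numpy as np

def dilated_causal_conv(x, f, d):
    """Dilated causal convolution, Eq. (1): h(t) = sum_{i=0}^{S-1} f(i) * x[t - d*i].

    Indices before the start of the sequence are treated as zero (zero
    padding), so each output depends only on current and past inputs.
    """
    S = len(f)
    h = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for i in range(S):
            j = t - d * i          # look back into the past with stride d
            if j >= 0:
                h[t] += f[i] * x[j]
    return h

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([0.5, 0.5])           # kernel of size S = 2
print(dilated_causal_conv(x, f, d=2))  # each h[t] = 0.5*x[t] + 0.5*x[t-2]
```

With d = 2 the receptive field skips every other sample, which is how stacked dilated layers cover long ranges with few parameters.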

To circumvent the problem of losing important original features during the information extraction process, residual connectivity was introduced, which can improve the stability of the model in the process of dealing with long sequential traffic data, thus accelerating the response and convergence of the deep network, as well as mitigating the issue of vanishing gradients. Furthermore, we integrated dropout and batch normalization techniques to prevent model overfitting and to expedite training. The structure of the residual connection is illustrated in Figure 4.

BiTCN addresses the limitations of traditional TCN, which solely extract unidirectional traffic information, by enhancing the extraction of traffic sequence features.

3.3. Bidirectional Long Short-Term Memory Networks

Network traffic data, as a type of time-series data, are characterized by the existence of the temporal continuity and dependence between data points, where adjacent time points can pass information to each other. However, with the deepening of neural network training, traditional RNNs encounter the issue of vanishing gradients, making it challenging to handle long sequence traffic data [21]. LSTM represents a specialized variant of RNN that is designed to tackle the challenge of vanishing gradients during training [22]. LSTM replaces the hidden layer nodes of an RNN with a memory control unit, which regulates the retention of important information while discarding irrelevant data. This mechanism enables LSTM networks to maintain information over longer time intervals, thus enhancing their efficacy in processing data with extended temporal contexts. Consequently, LSTM networks yield superior performance when confronted with datasets containing prolonged time spans.

LSTM consists of three gates: the input, forgetting, and output gates [23]. Additionally, it maintains a cell state for retaining the long-term memory of flow information. The memory cell of the LSTM is shown in Figure 5.

In the above figure, h_t denotes the hidden layer information at time t, C_{t-1} represents the cell state at time t-1, and X_t signifies the input traffic information at time t [24]. The specific calculation steps are as follows:

f_t = \mathrm{sigmoid}(W_f \cdot [h_{t-1}, x_t] + b_f),

i_t = \mathrm{sigmoid}(W_i \cdot [h_{t-1}, x_t] + b_i),

\tilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c),

c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,

o_t = \mathrm{sigmoid}(W_o \cdot [h_{t-1}, x_t] + b_o),

h_t = o_t \odot \tanh(c_t),

where f_t, i_t, c_t, and o_t are the forget gate, input gate, cell state, and output gate at time t, respectively; W_f, W_i, W_c, and W_o are the weight matrices; and b_f, b_i, b_c, and b_o are the bias terms.
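The gate equations above can be sketched as a single LSTM cell step (an illustrative NumPy example with randomly initialized weights, not the paper's MATLAB implementation; each weight matrix acts on the concatenation [h_{t-1}, x_t]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the gate equations.

    W and b hold weights/biases for the forget (f), input (i),
    candidate (c), and output (o) gates.
    """
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])        # input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde        # new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate
    h_t = o_t * np.tanh(c_t)                  # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
n_h, n_x = 3, 1                               # 3 hidden units, scalar traffic input
W = {k: rng.standard_normal((n_h, n_h + n_x)) * 0.1 for k in "fico"}
b = {k: np.zeros(n_h) for k in "fico"}
h, c = np.zeros(n_h), np.zeros(n_h)
for x_t in [0.2, 0.5, 0.9]:                   # a toy traffic sequence
    h, c = lstm_step(np.array([x_t]), h, c, W, b)
print(h)
```

A BiLSTM would run a second, independent cell of this form over the reversed sequence and concatenate the two hidden states.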

A unidirectional LSTM uses only the historical data preceding the current moment. However, in network traffic prediction, time-series forecasting requires not only referencing historical data but also considering future traffic information within the sequence to achieve accurate long-term traffic prediction. Hence, the BiLSTM model is proposed as an enhancement of LSTM in terms of prediction accuracy. The BiLSTM model comprises two independent LSTM models, which take the input traffic sequences in forward and reverse order to extract features, subsequently merging the output vectors of the two LSTMs to form the final output. Experimental results have demonstrated the efficacy and performance superiority of this model for traffic feature extraction compared to a single LSTM structure [25]. The architecture of the model is depicted in Figure 6.

The BiLSTM enhances the efficacy of traffic feature extraction and prediction performance by incorporating both historical and future traffic information.

3.4. Self-Attention Mechanism

The self-attention mechanism, which is inspired by human cognitive processes, is a deep learning technique designed to prioritize critical information while reducing focus on less relevant data, thereby alleviating the information overload caused by excessive network parameters [26]. In this study, we integrated the SA mechanism with BiLSTM to investigate its efficacy in extracting global traffic features at various positions before and after BiLSTM, as well as in uncovering deep feature correlations. First, the input traffic data I is multiplied by the corresponding weight matrices to obtain the query vector Q, key vector K, and value vector V, where Q = W_q · I, K = W_k · I, and V = W_v · I. Subsequently, the correlation A between Q and K is computed, followed by softmax normalization to yield A′, which denotes the attention weight of each query vector for each item of input data:

A = \frac{QK^T}{\sqrt{d_k}},

A' = \mathrm{softmax}(A) = \frac{\exp(A)}{\sum_{k=1}^{n} \exp(A)}.

Finally, the value vector V is multiplied by A′ to obtain the final output O, i.e., O = A′ · V. The complete SA mechanism is formulated as follows:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V.

By amplifying the critical traffic features extracted by the preceding BiLSTM layer, the SA mechanism contributes to the heightened precision of the traffic model predictions.
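The Q/K/V computation and the scaled dot-product attention can be sketched as follows (an illustrative NumPy example; the sequence length, feature dimension, and weight matrices are arbitrary, and rows of the input correspond to time steps, so the projections are written as I·W rather than W·I in this layout):

```python
import numpy as np

def self_attention(I, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    I: (n, d_model) input sequence (e.g. BiLSTM outputs, one row per step).
    Q = I Wq, K = I Wk, V = I Wv; O = softmax(Q K^T / sqrt(d_k)) V.
    """
    Q, K, V = I @ Wq, I @ Wk, I @ Wv
    d_k = Q.shape[-1]
    A = Q @ K.T / np.sqrt(d_k)                # raw correlation scores
    A = A - A.max(axis=-1, keepdims=True)     # shift for numerical stability
    A_prime = np.exp(A) / np.exp(A).sum(axis=-1, keepdims=True)  # row softmax
    return A_prime @ V                        # weighted sum of value vectors

rng = np.random.default_rng(1)
I = rng.standard_normal((5, 4))               # 5 time steps, 4 features
Wq, Wk, Wv = [rng.standard_normal((4, 4)) for _ in range(3)]
O = self_attention(I, Wq, Wk, Wv)
print(O.shape)  # (5, 4): one attended vector per time step
```

Each output row is a convex combination of all value vectors, with weights given by the corresponding row of A′.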

3.5. Crested Porcupine Optimizer

CPO is a novel meta-heuristic algorithm that combines a cyclic population reduction (CPR) strategy with the four defense mechanisms of the crested porcupine (CP), thereby improving convergence speed and population diversity. Like other population-based meta-heuristic algorithms, CPO initiates the search process by initializing a set of individuals as follows:

X_i = L + r \times (R - L), \quad i = 1, 2, \ldots, n,

where n denotes the population size; X_i represents the ith candidate solution within the search space; L and R are the lower and upper bounds of the search range, respectively; and r is a random number ranging between [0, 1]. The initialized population can be expressed as follows:

X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix} =
\begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,m} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,m}
\end{bmatrix},

where x_{i,j} denotes the jth position of the ith solution, n denotes the number of candidate solutions, and m represents the given problem dimension.

CPO adopts the CPR technique to accelerate the convergence rate while preserving various forms of diversity, and only CPs subjected to a threat activate their defense mechanisms.

Therefore, certain CPs are removed from the population during the optimization process to expedite convergence and are subsequently reintegrated into the population to enhance diversity and avert local minima. The mathematical model for the cyclic reduction in population size is given in Equation (13), where T is the number of determined cycles, t denotes the current function evaluation, T_max is the maximum number of function evaluations, and N_min is the minimum size of the newly generated population.

N = N_{min} + (N - N_{min}) \times \left(1 - \frac{t \bmod \left(T_{max}/T\right)}{T_{max}/T}\right).
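Equation (13) is simple enough to verify numerically (an illustrative Python sketch; the population sizes and cycle counts below are arbitrary example values):

```python
def population_size(N_init, N_min, t, T, T_max):
    """Cyclic population reduction, Eq. (13).

    The population shrinks linearly from N_init to N_min within each of
    the T cycles, then is restored, which re-injects diversity.
    """
    cycle = T_max / T                      # length of one reduction cycle
    return int(N_min + (N_init - N_min) * (1 - (t % cycle) / cycle))

# Example: 50 individuals, floor of 20, T = 2 cycles over 1000 evaluations.
sizes = [population_size(50, 20, t, T=2, T_max=1000) for t in (0, 250, 499, 500, 750)]
print(sizes)  # [50, 35, 20, 50, 35]
```

Note the jump back to the full size at t = 500, the start of the second cycle.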

As mentioned above, CP has four different defense mechanisms, which will be described in detail next.

(1) First defense mechanism

When the CP detects a predator, it responds by agitating its feathers. Subsequently, the predator has two options: it can either advance toward the CP, thereby promoting exploration of the vicinity between the predator and the CP to expedite convergence; or it can retreat from the CP, thereby facilitating exploration of more distant areas to identify unexplored territories. This behavior is mathematically modeled using Equation (14):

x_i^{t+1} = x_i^t + \tau_1 \times \left| 2 \times \tau_2 \times x_{CP}^t - y_i^t \right|,

where x_{CP}^t denotes the best solution at evaluation t, y_i^t denotes the position of the predator at iteration t, τ_1 denotes a normally distributed random value, and τ_2 is a random value in [0, 1].

(2) Second defense mechanism

In this mechanism, the CP makes sounds and threatens the predator. When the predator approaches the CP, the CP’s moans become louder. Mathematically modeling this behavior is achieved in Equation (15):

x_i^{t+1} = (1 - U_1) \times x_i^t + U_1 \times \left( y + \tau_3 \times \left( x_{r_1}^t - x_{r_2}^t \right) \right),

where r_1 and r_2 are two random integers between [1, N], and τ_3 is a random number between [0, 1].

(3) Third defense mechanism

In this mechanism, the CP releases a foul odor that permeates its surroundings, thus deterring predators from approaching. The behavior is modeled using Equation (16).

x_i^{t+1} = (1 - R_1) \times x_i^t + R_1 \times \left( x_{r_1}^t + S_i^t \times \left( x_{r_2}^t - x_{r_3}^t \right) \right) - \tau_3 \times \delta \times \gamma_t \times S_i^t,

where x_i^t denotes the position of the ith individual at iteration t, γ_t represents the defense factor, r_3 denotes a random integer between [1, N], δ controls the search direction, τ_3 is a random value between [0, 1], and S_i^t signifies the odor diffusion factor.

(4) Fourth defense mechanism

This mechanism is a physical retaliation. When a predator approaches the CP to initiate an attack, the CP responds with physical force. This interaction is modeled as a one-dimensional non-elastic collision, and it is mathematically represented by Equation (17).

x_i^{t+1} = x_{CP}^t + \left( \alpha (1 - \tau_4) + \tau_4 \right) \times \left( \delta \times x_{CP}^t - x_i^t \right) - \tau_5 \times \delta \times \gamma_t \times F_i^t,

where x_{CP}^t denotes the optimal solution obtained so far, τ_4 is a random value between [0, 1], α is the convergence rate factor, and F_i^t represents the average force affecting the CP.

The four defense mechanisms of CPO outlined above fall into two distinct categories: the exploration and exploitation phases. During the exploration phase, a trade-off between the first and second defense mechanisms is executed: the first is applied when τ_6 < τ_7, with τ_6 and τ_7 representing two randomly generated numbers between 0 and 1. In the exploitation phase, the third defense mechanism is employed when τ_10 < T_f; otherwise, the fourth defense mechanism is activated. T_f is a constant ranging from 0 to 1 that determines the trade-off between the third and fourth defense mechanisms. The simplified mathematical formula for updating the CP position is as follows:

x_i^{t+1} =
\begin{cases}
\begin{cases}
\text{apply Eq. (14)}, & \tau_6 < \tau_7 \\
\text{apply Eq. (15)}, & \text{else}
\end{cases}
& \tau_8 < \tau_9 \quad (\text{exploration}) \\
\begin{cases}
\text{apply Eq. (16)}, & \tau_{10} < T_f \\
\text{apply Eq. (17)}, & \text{else}
\end{cases}
& \tau_9 \le \tau_8 \quad (\text{exploitation})
\end{cases}

where τ_6, τ_7, τ_8, τ_9, and τ_10 are random numbers generated within the range 0 to 1, which jointly select the update rule that yields x_i^{t+1}.
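The selection among the four defense mechanisms can be sketched as one position update (a heavily simplified Python illustration: the auxiliary quantities U_1, R_1, δ, γ_t, S_i^t, F_i^t, α, and the predator position y are drawn at random or fixed here for brevity, whereas the original CPO paper defines each of them precisely):

```python
import numpy as np

rng = np.random.default_rng(2)

def cpo_update(x_i, x_cp, pop, t, Tf=0.8):
    """One CPO-style position update following the two-phase selection rule.

    Simplified sketch: auxiliary factors are illustrative stand-ins, not
    the exact definitions from the CPO paper.
    """
    tau = rng.random(5)                       # stands in for tau6..tau10
    r1, r2, r3 = rng.integers(0, len(pop), 3)
    if tau[2] < tau[3]:                       # exploration: tau8 < tau9
        if tau[0] < tau[1]:                   # first defense, Eq. (14)
            tau1, tau2 = rng.standard_normal(), rng.random()
            y = (x_i + pop[r1]) / 2           # assumed predator position
            return x_i + tau1 * np.abs(2 * tau2 * x_cp - y)
        U1 = rng.integers(0, 2, x_i.shape)    # second defense, Eq. (15)
        y = (x_i + pop[r1]) / 2
        return (1 - U1) * x_i + U1 * (y + rng.random() * (pop[r1] - pop[r2]))
    if tau[4] < Tf:                           # third defense, Eq. (16)
        R1 = rng.integers(0, 2, x_i.shape)
        S, delta, gamma = rng.random(), rng.choice([-1, 1]), 2 * (1 - t)
        return (1 - R1) * x_i + R1 * (pop[r1] + S * (pop[r2] - pop[r3])) \
               - rng.random() * delta * gamma * S
    alpha, tau4 = 0.2, rng.random()           # fourth defense, Eq. (17)
    delta, gamma, F = rng.choice([-1, 1]), 2 * (1 - t), rng.random(x_i.shape)
    return x_cp + (alpha * (1 - tau4) + tau4) * (delta * x_cp - x_i) \
           - rng.random() * delta * gamma * F

pop = rng.random((6, 3))                      # 6 candidates in 3 dimensions
x_new = cpo_update(pop[0], pop.min(axis=0), pop, t=0.1)
print(x_new.shape)  # (3,)
```

The branching structure mirrors Equation (18): one random draw chooses exploration versus exploitation, and a second draw chooses between the two mechanisms within each phase.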

CPO plays a crucial role in optimizing the hyperparameters of the flow prediction model that is proposed in this study, thereby minimizing the need for manual tuning.

The flowchart of CPO-BiTCN-BiLSTM-SA is shown in Figure 7. This process can be delineated into the following five steps:

(1) Initialization: We take the learning rate, the number of neurons, the regularization parameter, and the number of filters of the BiTCN-BiLSTM-SA network as the target hyperparameters for CPO optimization. After setting the value ranges of the relevant hyperparameters, the parameters such as population size, etc., are initialized.

(2) Fitness value: The fitness value of each CP is determined using the RMSE of the network model as the fitness function of the CPO.

(3) Update: The CP positions and population size are continuously updated to obtain the fitness value of each CP and to record both global and local optimal positions.

(4) Iteration: If the maximum number of iterations has been reached, the loop terminates and the optimal solution obtained during the iterations is output. Otherwise, return to Step 3.

(5) Optimization output: After obtaining the optimal hyperparameters of the BiTCN-BiLSTM-SA model using CPO, the network is re-trained and evaluated.
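The five steps above can be sketched as a generic population-based tuning loop (an illustrative Python example: a toy quadratic fitness stands in for "train BiTCN-BiLSTM-SA and return its RMSE", and a simple Gaussian move stands in for the full CPO defense mechanics; the hyperparameter bounds follow Section 4.1):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hyperparameter search space: learning rate, hidden units, filters, L2.
lower = np.array([1e-4, 10, 20, 1e-5])
upper = np.array([1e-2, 100, 120, 5e-3])

def fitness(params):
    """Stand-in for: train BiTCN-BiLSTM-SA with params, return RMSE.
    The 'target' values here are purely illustrative."""
    target = np.array([1e-3, 60, 70, 1e-4])
    return np.linalg.norm((params - target) / (upper - lower))

# Step (1): initialize the population inside the bounds.
N, T_max = 10, 50
pop = lower + rng.random((N, len(lower))) * (upper - lower)
best = min(pop, key=fitness).copy()

for t in range(T_max):
    for i in range(N):
        # Steps (2)-(3): evaluate fitness and update positions; a shrinking
        # Gaussian perturbation stands in for the four CPO defenses.
        cand = np.clip(pop[i] + rng.standard_normal(len(lower))
                       * 0.1 * (upper - lower) * (1 - t / T_max),
                       lower, upper)
        if fitness(cand) < fitness(pop[i]):
            pop[i] = cand
        if fitness(pop[i]) < fitness(best):
            best = pop[i].copy()

# Steps (4)-(5): after the final iteration, the best parameters would be
# used to retrain and evaluate the network.
print(best)
```

The key structural point is that the model's validation RMSE is the fitness function, so the optimizer never inspects the network internals.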

4. Experiment

4.1. Experimental Environment

The experimental setup utilized Windows 11 as the operating system, complemented with an NVIDIA GeForce RTX 2050 graphics card. MATLAB version 2023b served as the primary tool for simulating the network traffic prediction. The hyperparameters, such as the learning rate, number of filters, regularization factor, and number of hidden layer units for the BiTCN-BiLSTM-SA network, were optimized using the CPO technique. Specifically, the hyperparameters were searched as follows: the learning rate was defined within [1 × 10^−4, 0.01], the number of hidden layer units within [10, 100], the number of filters within [20, 120], and the regularization factor within [1 × 10^−5, 0.005]; the other hyperparameters were set as shown in Table 1.

4.2. Datasets

The data sources utilized for traffic prediction in this study were established based on the ON/OFF data source overlay proposed by Bell Labs [27]. The ON/OFF model indicates that data generation occurs during the ON period, while no data are generated during the OFF period [28]. In Figure 8, the ON/OFF source overlay process is illustrated.

As shown in Figure 8, suppose that there are N independent data sources x_i(t), i = 1, 2, …, N; the ON/OFF durations of each data source follow a Pareto distribution. Furthermore, x_i(t) = 1 indicates that the data source is in the ON period and generates data, whereas x_i(t) = 0 indicates that the data source is in the OFF period and generates no data. At time t, the data of the N sources are superimposed to obtain the data size of the current system.

Regarding the ON/OFF cycle duration, it has been shown that when the ON and OFF times adhere to a heavy-tailed distribution, the resulting superimposed data exhibit self-similarity properties. The Pareto distribution aligns with the characteristics of heavy-tailed distribution, with its distribution function taking the following form:

F(x) = 1 - \left( \frac{k}{x} \right)^{\alpha},

where α is a shape parameter with values in the range (1, 2], and k is a location parameter that determines the lower bound of the random numbers generated by the Pareto distribution. Each ON/OFF data source requires two sets of parameters, (k_on, α_on) and (k_off, α_off), to represent it. For the self-similar flow data generated from the superposition of the N ON/OFF sources, the self-similarity parameter H is related to α and satisfies the following equation:

H = \frac{1}{2} \left( 3 - \min \{ \alpha_1, \alpha_2, \ldots, \alpha_N \} \right).

Assuming there are N ON/OFF data-sending sources in ON/OFF traffic mode, each source generates a flow at time t denoted as X_1(t), X_2(t), …, X_N(t), where t is a discrete time. The total flow X(t) synthesized by the N sources is as follows:

X(t) = \sum_{i=1}^{N} X_i(t).

In an FTTR network, each room or area has its own ONU-AP, so the traffic generated by multiple terminals under one ONU-AP is superimposed at that access point. Since each user's traffic exhibits the self-similarity property, the superimposed traffic exhibits it as well, and the ON/OFF model is able to simulate the data sending of service sources in different states; traffic with self-similarity properties is therefore generated by adjusting the packet sending rate, packet sending interval, etc., in the ON/OFF model. In this paper, the data traffic size was set to 10 Gbps, which is in line with the traffic size of actual FTTR usage.
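A minimal version of this ON/OFF superposition can be sketched in Python (an illustrative example: Pareto durations are drawn by inverse-transform sampling of the distribution function above, the initial ON/OFF state of each source is random, and the output is the count of active sources per time slot rather than a rate in Gbps):

```python
import numpy as np

rng = np.random.default_rng(4)

def pareto(k, alpha, size):
    """Inverse-transform sampling of F(x) = 1 - (k/x)^alpha, x >= k."""
    return k / rng.random(size) ** (1.0 / alpha)

def on_off_traffic(n_sources, n_slots, k_on=1.0, a_on=1.4, k_off=1.0, a_off=1.2):
    """Superpose N ON/OFF sources with Pareto-distributed ON/OFF durations.

    Returns, per time slot, how many sources are in the ON state; scaling
    this count by a packet rate yields the aggregate (self-similar) traffic.
    """
    active = np.zeros(n_slots)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5                 # random initial state
        while t < n_slots:
            dur = int(np.ceil(pareto(k_on if on else k_off,
                                     a_on if on else a_off, 1)[0]))
            if on:
                active[t:t + dur] += 1                # source contributes data
            t += dur
            on = not on                               # toggle ON <-> OFF
    return active

x = on_off_traffic(n_sources=32, n_slots=10_000)
print(x.min(), x.max())  # counts stay within [0, 32]
```

With α_on = 1.4, Equation (20) predicts H = (3 − 1.4)/2 = 0.8, which matches the target value reported below for the simulated datasets.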

Based on the aforementioned ON/OFF model, the traffic generated by the user roaming indoors was modeled using OPNET Modeler 14.5 software, as shown in Figure 9.

As can be seen from Figure 9, this OPNET model superimposes ON/OFF data sources whose ON and OFF times satisfy pareto(1, 1.4) and pareto(1, 1.2) distributions, respectively. During the ON period, packet sizes are uniformly distributed between (64, 1518) bytes. The 32 data sources access a cache queue and are then received by a statistical node module responsible for traffic counting and data processing. To verify the self-similarity characteristics of the generated traffic, two sets of traffic data with time scales of 0.5 s and 2 s, each with a length of 10,000, were collected from the simulation system. After R/S calculation, the self-similarity coefficients H for these two sets of data were determined to be 0.7955 and 0.7984, respectively, both of which are close to the theoretical value of 0.8, thus satisfying our needs and making them suitable as the datasets for traffic prediction (Dataset A and Dataset B, respectively). Figure 10 illustrates the variations in some of the traffic rates captured by the simulation.

4.3. Evaluation Criteria

To comprehensively evaluate the prediction performance of each model, this paper primarily employed four metrics to verify the accuracy of the proposed method: RMSE, MAE, MAPE, and R². They are defined as in (22)–(25), where x_i denotes the true value of the flow, \hat{x}_i the predicted value, \bar{x} the mean of the flow data sample, and n the length of the flow time series.

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i-\hat{x}_i\right)^2},

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|x_i-\hat{x}_i\right|,

\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{\hat{x}_i-x_i}{x_i}\right|,

R^2 = 1-\frac{\sum_{i=1}^{n}\left(\hat{x}_i-x_i\right)^2}{\sum_{i=1}^{n}\left(\bar{x}-x_i\right)^2}.

For the aforementioned evaluation indices, smaller values of RMSE, MAE, and MAPE indicate more accurate predictions, while an R² approaching 1 indicates a stronger fit between the predicted and actual values [29] and thus superior performance.
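Under the definitions in (22)–(25), the four metrics can be computed directly, for example with NumPy; the following is a straightforward transcription (the function name is ours).

```python
import numpy as np

def metrics(x, x_hat):
    """RMSE, MAE, MAPE (in %), and R^2 as defined in (22)-(25)."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    rmse = np.sqrt(np.mean((x - x_hat) ** 2))
    mae = np.mean(np.abs(x - x_hat))
    mape = 100.0 * np.mean(np.abs((x_hat - x) / x))
    r2 = 1.0 - np.sum((x_hat - x) ** 2) / np.sum((x.mean() - x) ** 2)
    return rmse, mae, mape, r2

# toy usage: small errors give small RMSE/MAE/MAPE and R^2 close to 1
rmse, mae, mape, r2 = metrics([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
```

A perfect prediction yields RMSE = MAE = MAPE = 0 and R² = 1, which is a convenient unit check for the implementation.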

4.4. Experimental Results

The CPO-BiTCN-BiLSTM-SA model proposed in this paper uses the CPO algorithm to optimize the BiTCN-BiLSTM-SA network and obtain the best network parameters. Figure 11 shows the optimal fitness change curve of CPO-BiTCN-BiLSTM-SA on Dataset A. It can be seen that the CPO algorithm reaches the optimal solution at the 9th, 6th, and 6th iterations for the 1-step, 2-step, and 3-step flow predictions, respectively. On this basis, the best parameters of the BiTCN-BiLSTM-SA network on Dataset A were obtained, as shown in Table 2.
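CPO’s four defense strategies [13] are beyond a short sketch, but the outer hyperparameter-search loop it performs follows the usual population-based pattern. The sketch below is a generic stand-in, with a random perturbation toward the current best candidate replacing CPO’s sight, sound, odour, and physical-attack updates; all names, the step size, and the toy fitness are our assumptions, not the authors’ implementation.

```python
import random

def optimize(fitness, bounds, pop_size=5, iters=15, seed=0):
    """Generic population-based hyperparameter search (a stand-in for CPO).

    `bounds` maps each hyperparameter name to a (low, high) interval;
    `fitness` returns the validation error of a model trained with a
    candidate setting (lower is better).
    """
    rng = random.Random(seed)
    pop = [{k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
           for _ in range(pop_size)]
    for _ in range(iters):
        best = min(pop, key=fitness)
        for ind in pop:
            # perturb each individual toward the current best candidate,
            # clamped back into the search bounds
            cand = {k: min(max(v + rng.gauss(0.0, 0.3) * (best[k] - v),
                               bounds[k][0]), bounds[k][1])
                    for k, v in ind.items()}
            if fitness(cand) < fitness(ind):   # greedy acceptance
                ind.update(cand)
    return min(pop, key=fitness)

# toy fitness standing in for the validation RMSE of the trained network
target = lambda p: (p["lr"] - 1e-3) ** 2 + ((p["neurons"] - 25) / 100) ** 2
best = optimize(target, {"lr": (1e-4, 1e-2), "neurons": (1, 100)})
```

The population size of 5 and 15 iterations mirror the CPO settings in Table 1; in the paper, `fitness` corresponds to training BiTCN-BiLSTM-SA with the candidate neuron count, learning rate, regularization coefficient, and filter count and scoring it on validation data.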

To assess the performance of the individual prediction models, experiments were conducted comparing CPO-BiTCN-BiLSTM-SA against eight other models, including LSTM, TCN, and BiLSTM. To ensure the reliability of the experiments, each model underwent thirty trials, with the average value taken as the performance metric. The results for both datasets are summarized in Table 3.

As depicted in Table 3, Figure 12, and Figure 13, the accuracy of multi-step prediction was found to be lower than that of single-step prediction, and it diminished progressively as the number of prediction steps increased. This decline can be attributed to the fact that the value output at each step of the multi-step prediction is used as the input for the next step, causing errors to accumulate over successive steps and resulting in reduced accuracy.
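The error-accumulation mechanism described above arises from recursive (iterated) forecasting, which can be sketched as follows; `model` stands in for any trained single-step predictor.

```python
def recursive_forecast(model, history, n_steps):
    """Iterated multi-step forecasting: each prediction is fed back as
    input, so an error made at step k contaminates steps k+1, k+2, ..."""
    window = list(history)
    preds = []
    for _ in range(n_steps):
        y_hat = model(window)
        preds.append(y_hat)
        window = window[1:] + [y_hat]   # slide the window over the prediction
    return preds

naive = lambda w: w[-1]                 # persistence baseline as a toy model
print(recursive_forecast(naive, [1, 2, 3], 3))  # -> [3, 3, 3]
```

Because the input window at step k contains k − 1 predicted (and hence imperfect) values, the effective noise in the input grows with the horizon, which is exactly the degradation visible from 1-step to 3-step results in Table 3.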

In order to show the performance enhancement of the proposed traffic model more clearly, XGBoost was selected for plotting and comparison; its performance on both datasets is shown in Figure 12, Figure 13 and Figure 14 (where each color denotes the predicted or actual values for the two datasets). Combining the information in Table 3 with Figure 12, Figure 13 and Figure 14, it becomes evident that CPO-BiTCN-BiLSTM-SA outperforms the other eight models. Moreover, Table 3 shows that the BiTCN-BiLSTM-SA model exhibits superior prediction performance compared to the BiTCN-BiLSTM model. Taking Dataset A as an example, the RMSE decreased by 2.52% (one step), 6.35% (two steps), and 3.00% (three steps); the MAE decreased by 5.92%, 4.10%, and 2.27%; and the MAPE decreased by 8.92%, 6.10%, and 4.09%, while the R² was elevated by 1.49%, 0.60%, and 0.21%, respectively. These results indicate that the SA mechanism effectively enhances performance by prioritizing the crucial information during training. Furthermore, contrasting the BiTCN-BiLSTM model with the TCN-LSTM model makes it evident that the bidirectional structure excels at extracting profound insights compared to the unidirectional structure. Specifically, on Dataset A, the RMSE, MAE, and MAPE decreased by 6.10%, 14.96%, and 6.35%, respectively, for one step; by 8.04%, 5.15%, and 13.50% for two steps; and by 15.35%, 5.90%, and 2.20% for three steps, while the R² increased by 2.17%, 1.10%, and 2.05%, respectively. These findings underscore the efficacy of bidirectional structures in extracting comprehensive information for improved prediction outcomes.

Figure 15 and Figure 16 depict the evaluation metrics of each model for Dataset A and Dataset B. It is evident that the BiTCN-BiLSTM model is superior to the TCN-LSTM model, primarily due to its bidirectional structure and its facilitation of bidirectional feature extraction. The dilated causal convolutional structure of BiTCN enables the model to capture information over a broader time range owing to its larger receptive field. Meanwhile, the BiLSTM integrates forward and backward LSTMs, enhancing the model’s capacity to retain long-term information and grasp the dynamics of the data from both temporal directions. Moreover, the SA layer, positioned after the BiTCN-BiLSTM layer, leverages the self-attention mechanism to capture the global dependencies in time series data. This enables the model to focus on the critical segments of the series, thus fostering enhanced feature interactions and facilitating the learning of complex and abstract features. Utilizing CPO for hyperparameter optimization in the BiTCN-BiLSTM-SA model yields superior results across all evaluation criteria compared to manually defined hyperparameter configurations. For instance, the CPO-BiTCN-BiLSTM-SA model reduces the RMSE, MAE, and MAPE by 11.71%, 4.13%, and 3.71%, respectively, while enhancing the R² by 0.82% over the BiTCN-BiLSTM-SA model for single-step prediction on Dataset A. This underscores the effectiveness of CPO in enhancing the model prediction accuracy across various evaluation metrics.
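The role of the SA layer can be illustrated with a minimal scaled dot-product self-attention over time steps. Identity query/key/value projections are assumed here for brevity; the actual layer learns those projection weights.

```python
import numpy as np

def self_attention(h):
    """Scaled dot-product self-attention over the time axis of `h` (T, d)."""
    _, d = h.shape
    scores = h @ h.T / np.sqrt(d)                    # time-step affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # softmax over time steps
    return w @ h                                     # re-weighted features

# six time steps of four-dimensional BiTCN-BiLSTM-style hidden states
out = self_attention(np.random.default_rng(0).standard_normal((6, 4)))
```

Each output row is a convex combination of all time steps’ hidden states, which is how the layer dynamically re-weights the sequence to emphasize its critical segments.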

5. Conclusions

In addressing the need for high-performance networks to facilitate seamless roaming in FTTR, this paper considers improving resource allocation from the perspective of traffic prediction. Acknowledging the limitations of conventional traffic prediction methods, which often overlook backward information, we propose the CPO-BiTCN-BiLSTM-SA prediction model. This model leverages the BiTCN-BiLSTM architecture to comprehensively extract traffic sequence features through dilated causal convolution and bidirectional time series processing. Additionally, we introduced an SA mechanism to dynamically adjust the weights of each time step, thereby capturing key information within the sequence and enhancing prediction accuracy. Furthermore, the proposed model undergoes optimization via CPO to attain optimal hyperparameter settings tailored to its structure, enabling both single-step and multi-step flow prediction. The method was compared with the other models five times, and the following conclusions were obtained:

(1) In multi-step prediction scenarios, the accumulation of prediction errors correlates with an increasing number of prediction steps, consequently diminishing the prediction accuracy of the model.

(2) The bidirectional structure inherent in the BiTCN-BiLSTM model can fully extract the forward and backward information of the flow sequence, which improves the prediction accuracy.

(3) Integration of the SA mechanism into the BiTCN-BiLSTM model facilitates the acquisition of weighted information, thus aiding in the capture of crucial model insights.

(4) In comparison with the traditional XGBoost model, the proposed model has an average reduction of 29.50%, 25.43%, and 25.00% in the RMSE, MAE, and MAPE metrics, respectively, with a 6.70% improvement in the R2.

The study presented herein demonstrates the efficacy of the CPO-BiTCN-BiLSTM-SA prediction model in significantly enhancing traffic prediction performance. Furthermore, the integration of multi-step traffic prediction offers a solution to the resource allocation challenges encountered in FTTR networks. In subsequent FTTR research, single-step prediction can be used to predict traffic newly arriving at an ONU-AP during the transmission waiting period, mitigating the excessive queuing delay of those packets (which would otherwise increase the overall delay of the FTTR network). Multi-step prediction can forecast the traffic requests of multiple future polling cycles, reducing both the channel resources occupied by control-frame transmission and the delay introduced by frequent dynamic bandwidth allocation calculations, thereby improving network bandwidth utilization. Therefore, to reduce the impact of the traffic prediction model’s performance on subsequent FTTR resource allocation, the prediction accuracy of the traffic model must be continuously improved, which is also the research significance of this paper.

Author Contributions

Conceptualization, B.C. and J.Z.; methodology, J.Z.; validation, J.Z. and Y.H.; formal analysis, B.C.; investigation, J.Z.; writing—original draft preparation, J.Z. and B.C.; writing—review and editing, J.Z. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (grant number 2021YFB2900800) and the Science and Technology Commission of Shanghai Municipality (grant number 22511100902).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. ITU-T Recommendations. GSTP-FTTR—Use Cases and Requirements of Fibre-to-the-Room (FTTR); ITU: Geneva, Switzerland, 2021. [Google Scholar]
  2. Hu, Y.; Li, J.; Zhang, Z.; Jing, W.; Gao, C. Parametric modeling of hypersonic ballistic data based on time varying auto-regressive model. Sci. China Technol. Sci. 2020, 63, 1396–1405. [Google Scholar] [CrossRef]
  3. Prilliman, M.; Stein, J.S.; Riley, D.; Tamizhmani, G. Transient Weighted Moving-Average Model of Photovoltaic Module Back-Surface Temperature. IEEE J. Photovolt. 2020, 10, 1053–1060. [Google Scholar] [CrossRef]
  4. Moon, J.; Hossain, B.; Chon, K.H. AR and ARMA model order selection for time-series modeling with ImageNet classification. Signal Process. 2021, 183, 108026. [Google Scholar] [CrossRef]
  5. Zhang, X.; Zhou, Q.; Weng, S.; Zhang, H. ARIMA Model-Based Fire Rescue Prediction. Sci. Program. 2021, 2021, 3212138. [Google Scholar] [CrossRef]
  6. Uddin, S.; Haque, I.; Lu, H.; Moni, M.A.; Gide, E. Comparative performance analysis of K-nearest neighbour (KNN) algorithm and its different variants for disease prediction. Sci. Rep. 2022, 12, 6256. [Google Scholar] [CrossRef] [PubMed]
  7. Li, S. Traffic Congestion Analysis and Prediction Method based on XGBoost and LightGBM. In Proceedings of the 2022 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), Hengyang, China, 26–27 March 2022; pp. 344–347. [Google Scholar] [CrossRef]
  8. Ma, J.; Wang, Y.; Niu, X.; Jiang, S.; Liu, Z. A comparative study of mutual information-based input variable selection strategies for the displacement prediction of seepage-driven landslides using optimized support vector regression. Stoch. Environ. Res. Risk Assess. 2022, 36, 3109–3129. [Google Scholar] [CrossRef]
  9. Kim, M. The generalized extreme learning machines: Tuning hyperparameters and limiting approach for the Moore-Penrose generalized inverse. Neural Netw. Off. J. Int. Neural Netw. Soc. 2021, 144, 591–602. [Google Scholar] [CrossRef]
  10. Deng, T.; Wan, M.; Shi, K.; Zhu, L.; Wang, X.; Jiang, X. Short term prediction of wireless traffic based on tensor decomposition and recurrent neural network. SN Appl. Sci. 2021, 3, 779. [Google Scholar] [CrossRef]
  11. Misbha, D.S. Detection of Attacks using Attention-based Conv-LSTM and Bi-LSTM in Industrial Internet of Things. In Proceedings of the 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 13–15 December 2022; pp. 402–407. [Google Scholar]
  12. Deng, Y.; Zhang, Y.; Lv, H.; Yang, Y.; Wang, Y. Prediction of freeway self-driving traffic flow based on bidirectional GRU recurrent neural network. In Proceedings of the 2022 International Conference on Culture-Oriented Science and Technology (CoST), Lanzhou, China, 18–21 August 2022; pp. 60–63. [Google Scholar] [CrossRef]
  13. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2023, 284, 111257. [Google Scholar] [CrossRef]
  14. Zhuo, Q.; Li, Q.; Yan, H.; Qi, Y. Long short-term memory neural network for network traffic prediction. In Proceedings of the 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, China, 24–26 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  15. Zhang, F.; Lu, H.; Guo, F.; Gu, Z. Traffic Prediction Based VNF Migration with Temporal Convolutional Network. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar]
  16. Liu, M. Research on TCP Service Optimization in WIFI Home Network; Huazhong University of Science and Technology: Wuhan, China, 2021. [Google Scholar]
  17. Wu, X.; Zeng, Y.; Liu, J.; Chanclou, P. Real-Time Demonstration of Fiber-to-the-Room for >1 Gb/s Home Networking with Guaranteed QoS and Fast Roaming. In Proceedings of the 2021 Asia Communications and Photonics Conference (ACP), Shanghai, China, 24–27 October 2021; pp. 1–3. [Google Scholar]
  18. Zhang, D.; Zhu, J.; Liu, X.; Wu, X.; Li, J.; Zeng, Y.; Si, X.; Li, H. Fiber-to-the-room: A key technology for F5G and beyond. J. Opt. Commun. Netw. 2023, 15, D1–D9. [Google Scholar] [CrossRef]
  19. Hewage, P.; Behera, A.; Trovati, M.; Pereira, E.; Ghahremani, M.; Palmieri, F.; Liu, Y. Temporal convolutional neural (TCN) network for an effective weather forecasting using time-series data from the local weather station. Soft Comput. 2020, 24, 16453–16482. [Google Scholar] [CrossRef]
  20. Zhang, D.; Chen, B.; Zhu, H.; Goh, H.; Dong, Y.; Wu, T. Short-term wind power prediction based on two-layer decomposition and BiTCN-BiLSTM-attention model. Energy 2023, 285, 128762. [Google Scholar] [CrossRef]
  21. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef] [PubMed]
  22. Liu, Q.; Yu, M.; Bai, M. A study on a recommendation algorithm based on spectral clustering and GRU. iScience 2023, 27, 108660. [Google Scholar] [CrossRef] [PubMed]
  23. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. The Performance of LSTM and BiLSTM in Forecasting Time Series. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3285–3292. [Google Scholar] [CrossRef]
  24. Hu, S.; Wang, Y.; Cai, W.; Yu, Y.; Chen, C.; Yang, J.; Zhao, Y.; Gao, Y. A Combined Method for Short-Term Load Forecasting Considering the Characteristics of Components of Seasonal and Trend Decomposition Using Local Regression. Appl. Sci. 2024, 14, 2286. [Google Scholar] [CrossRef]
  25. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  26. Hao, X.; Liu, Y.; Pei, L.; Li, W.; Du, Y. Atmospheric Temperature Prediction Based on a BiLSTM-Attention Model. Symmetry 2022, 14, 2470. [Google Scholar] [CrossRef]
  27. Willinger, W.; Taqqu, M.S.; Sherman, R.; Wilson, D.V. Self-similarity through high-variability: Statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Trans. Netw. 1997, 5, 71–86. [Google Scholar] [CrossRef]
  28. Zhao, Y.; Zhang, B.; Li, C.; Chen, C. ON/OFF Traffic Shaping in the Internet: Motivation, Challenges, and Solutions. IEEE Netw. 2017, 31, 48–57. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Ma, T.; Li, T.; Wang, Y. Short-Term Load Forecasting Based on DBO-LSTM Model. In Proceedings of the 2023 3rd International Conference on Energy Engineering and Power Systems (EEPS), Dali, China, 28–30 July 2023; pp. 972–977. [Google Scholar] [CrossRef]

Figure 1. FTTR scene diagram.

Figure 2. The whole process of the CPO-BiTCN-BiLSTM-SA model.

Figure 3. Bidirectional dilated causal convolutional network structure.

Figure 4. Structure of the BiTCN residual block.

Figure 5. LSTM network diagram.

Figure 6. BiLSTM structure.

Figure 7. CPO-BiTCN-BiLSTM-SA flowchart.

Figure 8. The ON/OFF source superposition process.

Figure 9. The node model.

Figure 10. Simulated traffic segment.

Figure 11. Optimal fitness curve for CPO-BiTCN-BiLSTM-SA in Dataset A.

Figure 12. Plot of the single-step prediction results for Dataset A.

Figure 13. Plot of the two-step prediction results for Dataset A.

Figure 14. Plot of the single-step prediction results for Dataset B.

Figure 15. Evaluation metrics for the eight models on Dataset A.

Figure 16. Evaluation metrics for the eight models on Dataset B.

Table 1. Model hyperparameter settings.

Hyperparameters | Value
Batch size | 32
Optimizer | Adam
Number of hidden layer nodes | 12
CPO population size | 5
CPO iterations | 15

Table 2. BiTCN-BiLSTM-SA network optimum parameters in Dataset A.

Parameters | 1-Step | 2-Step | 3-Step
Optimum number of neurons | 25 | 89 | 3
Optimal initial learning rate | 5.07×10⁻⁴ | 1.85×10⁻³ | 2.33×10⁻³
Optimum regularization coefficient | 4.09×10⁻⁵ | 1.05×10⁻⁴ | 2.27×10⁻³
Optimum number of filters | 59 | 22 | 36

Table 3. Model predictions.

Datasets | Models | RMSE (×10²) 1-Step | RMSE (×10²) 2-Step | RMSE (×10²) 3-Step | MAE (×10²) 1-Step | MAE (×10²) 2-Step | MAE (×10²) 3-Step
A | XGBoost | 0.920 ± 0.021 | 1.41 ± 0.062 | 1.736 ± 0.053 | 0.723 ± 0.023 | 0.8046 ± 0.023 | 1.168 ± 0.025
A | LSTM | 0.711 ± 0.011 | 1.377 ± 0.024 | 1.627 ± 0.034 | 0.67 ± 0.025 | 0.777 ± 0.017 | 1.159 ± 0.056
A | TCN | 0.654 ± 0.032 | 1.325 ± 0.025 | 1.599 ± 0.043 | 0.658 ± 0.034 | 0.757 ± 0.022 | 1.132 ± 0.027
A | TCN-LSTM | 0.63 ± 0.026 | 1.208 ± 0.034 | 1.551 ± 0.051 | 0.609 ± 0.023 | 0.717 ± 0.011 | 1.105 ± 0.021
A | BiLSTM | 0.612 ± 0.023 | 1.117 ± 0.031 | 1.505 ± 0.035 | 0.579 ± 0.022 | 0.707 ± 0.013 | 1.053 ± 0.036
A | BiTCN | 0.605 ± 0.015 | 1.073 ± 0.026 | 1.503 ± 0.031 | 0.565 ± 0.022 | 0.705 ± 0.025 | 1.007 ± 0.024
A | BiTCN-BiLSTM | 0.591 ± 0.013 | 1.025 ± 0.022 | 1.452 ± 0.029 | 0.56 ± 0.013 | 0.68 ± 0.006 | 0.956 ± 0.004
A | BiTCN-BiLSTM-SA | 0.575 ± 0.003 | 0.963 ± 0.010 | 1.409 ± 0.013 | 0.527 ± 0.004 | 0.652 ± 0.009 | 0.935 ± 0.008
A | CPO-BiTCN-BiLSTM-SA | 0.57 ± 0.001 | 0.93 ± 0.003 | 1.374 ± 0.009 | 0.505 ± 0.004 | 0.629 ± 0.002 | 0.906 ± 0.005
B | XGBoost | 0.376 ± 0.005 | 0.7 ± 0.010 | 0.831 ± 0.006 | 0.292 ± 0.009 | 0.493 ± 0.015 | 0.652 ± 0.007
B | LSTM | 0.359 ± 0.001 | 0.665 ± 0.009 | 0.813 ± 0.002 | 0.261 ± 0.001 | 0.465 ± 0.01 | 0.632 ± 0.001
B | TCN | 0.351 ± 0.010 | 0.654 ± 0.011 | 0.801 ± 0.011 | 0.258 ± 0.009 | 0.46 ± 0.005 | 0.636 ± 0.003
B | TCN-LSTM | 0.334 ± 0.013 | 0.624 ± 0.006 | 0.783 ± 0.003 | 0.237 ± 0.010 | 0.445 ± 0.004 | 0.615 ± 0.005
B | BiLSTM | 0.316 ± 0.011 | 0.604 ± 0.010 | 0.762 ± 0.003 | 0.229 ± 0.005 | 0.437 ± 0.003 | 0.603 ± 0.002
B | BiTCN | 0.29 ± 0.010 | 0.578 ± 0.006 | 0.753 ± 0.001 | 0.223 ± 0.002 | 0.431 ± 0.001 | 0.595 ± 0.003
B | BiTCN-BiLSTM | 0.256 ± 0.004 | 0.551 ± 0.008 | 0.751 ± 0.010 | 0.202 ± 0.006 | 0.42 ± 0.003 | 0.578 ± 0.009
B | BiTCN-BiLSTM-SA | 0.231 ± 0.005 | 0.521 ± 0.005 | 0.732 ± 0.008 | 0.182 ± 0.011 | 0.395 ± 0.005 | 0.574 ± 0.003
B | CPO-BiTCN-BiLSTM-SA | 0.215 ± 0.003 | 0.503 ± 0.005 | 0.728 ± 0.001 | 0.169 ± 0.010 | 0.385 ± 0.006 | 0.556 ± 0.002

Datasets | Models | MAPE (×10²) 1-Step | MAPE (×10²) 2-Step | MAPE (×10²) 3-Step | R² (×10²) 1-Step | R² (×10²) 2-Step | R² (×10²) 3-Step
A | XGBoost | 0.585 ± 0.009 | 0.805 ± 0.011 | 1.0626 ± 0.010 | 92.886 ± 0.102 | 90.059 ± 0.035 | 87.775 ± 0.068
A | LSTM | 0.56 ± 0.005 | 0.738 ± 0.005 | 0.954 ± 0.006 | 93.622 ± 0.011 | 91.623 ± 0.021 | 89.868 ± 0.028
A | TCN | 0.555 ± 0.001 | 0.724 ± 0.007 | 0.947 ± 0.010 | 94.266 ± 0.034 | 93.957 ± 0.035 | 90.894 ± 0.023
A | TCN-LSTM | 0.525 ± 0.001 | 0.697 ± 0.005 | 0.935 ± 0.007 | 95.253 ± 0.022 | 94.59 ± 0.010 | 91.763 ± 0.016
A | BiLSTM | 0.524 ± 0.009 | 0.685 ± 0.003 | 0.925 ± 0.006 | 96.153 ± 0.010 | 94.832 ± 0.016 | 92.379 ± 0.023
A | BiTCN | 0.507 ± 0.010 | 0.673 ± 0.012 | 0.919 ± 0.0011 | 96.695 ± 0.021 | 95.443 ± 0.012 | 93.639 ± 0.042
A | BiTCN-BiLSTM | 0.445 ± 0.005 | 0.655 ± 0.003 | 0.916 ± 0.010 | 97.322 ± 0.032 | 95.631 ± 0.021 | 93.646 ± 0.011
A | BiTCN-BiLSTM-SA | 0.405 ± 0.003 | 0.615 ± 0.010 | 0.875 ± 0.009 | 98.775 ± 0.023 | 96.208 ± 0.018 | 93.845 ± 0.021
A | CPO-BiTCN-BiLSTM-SA | 0.393 ± 0.001 | 0.599 ± 0.004 | 0.867 ± 0.003 | 99.58 ± 0.011 | 96.819 ± 0.014 | 94.195 ± 0.003
B | XGBoost | 0.25 ± 0.004 | 0.388 ± 0.004 | 0.515 ± 0.006 | 95.945 ± 0.010 | 91.479 ± 0.012 | 88.095 ± 0.033
B | LSTM | 0.211 ± 0.02 | 0.357 ± 0.005 | 0.496 ± 0.003 | 96.871 ± 0.024 | 94.406 ± 0.009 | 89.78 ± 0.010
B | TCN | 0.204 ± 0.006 | 0.35 ± 0.003 | 0.495 ± 0.005 | 97.041 ± 0.032 | 94.645 ± 0.039 | 91.299 ± 0.019
B | TCN-LSTM | 0.195 ± 0.002 | 0.334 ± 0.003 | 0.489 ± 0.010 | 98.059 ± 0.011 | 95.066 ± 0.033 | 92.482 ± 0.021
B | BiLSTM | 0.174 ± 0.012 | 0.313 ± 0.007 | 0.487 ± 0.002 | 98.361 ± 0.021 | 95.723 ± 0.023 | 92.997 ± 0.054
B | BiTCN | 0.179 ± 0.007 | 0.311 ± 0.006 | 0.482 ± 0.005 | 98.454 ± 0.027 | 96.407 ± 0.012 | 93.263 ± 0.036
B | BiTCN-BiLSTM | 0.154 ± 0.010 | 0.327 ± 0.001 | 0.472 ± 0.003 | 98.741 ± 0.023 | 96.612 ± 0.010 | 94.105 ± 0.013
B | BiTCN-BiLSTM-SA | 0.148 ± 0.005 | 0.321 ± 0.004 | 0.468 ± 0.005 | 98.838 ± 0.033 | 96.825 ± 0.026 | 94.227 ± 0.022
B | CPO-BiTCN-BiLSTM-SA | 0.14 ± 0.002 | 0.316 ± 0.003 | 0.462 ± 0.001 | 99.566 ± 0.011 | 97.516 ± 0.025 | 94.994 ± 0.029

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).