In this paper, we conduct a measurement study of operational 5G networks deployed across different frequency bands (mmWave and sub-6 GHz) and server locations (mobile edge and Internet cloud). Specifically, we assess 5G performance in both uplink and downlink across multiple operators' networks. We then carry out extensive comparisons of transport-layer protocols, covering ten different congestion control algorithms, in full-fledged 5G networks, including an edge computing environment. Finally, we evaluate representative mobile applications over the 5G network with and without edge servers. Our comprehensive measurements yield several insights that affect the experience of 5G users: (i) With a 5G edge server, existing TCP congestion control algorithms can achieve throughput of up to 1.8 Gbps with only a single flow. (ii) The maximum TCP receive buffer size set by off-the-shelf 5G phones can limit 5G throughput, a bottleneck not observed in 4G LTE-A networks. (iii) Despite significant latency gains for download-centric applications, the 5G edge service provides limited benefits to CPU-intensive tasks or those that consume significant uplink bandwidth. To our knowledge, this is the first measurement-driven understanding of 5G edge computing 'in the wild,' answering the question of how edge computing would perform in real 5G networks.
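To see why a capped receive buffer (insight ii) can throttle even a multi-gigabit 5G link, consider the bandwidth-delay product (BDP): TCP can keep at most one receive window in flight per round trip, so the buffer must hold at least one BDP of data to saturate the link. The sketch below is a back-of-the-envelope calculation, not a result from the paper; the 20 ms RTT and 3 MB buffer cap are illustrative assumptions, while the 1.8 Gbps figure is the peak single-flow throughput reported above.

```python
# Minimal sketch: why a phone-imposed TCP receive buffer cap limits throughput.
# The RTT and buffer cap below are assumed values for illustration only.

LINK_RATE_BPS = 1.8e9   # peak single-flow 5G edge throughput from the study
RTT_S = 0.020           # assumed round-trip time to an edge server

# Buffer needed to keep the link full: one bandwidth-delay product.
bdp_bytes = LINK_RATE_BPS / 8 * RTT_S
print(f"Required buffer (BDP): {bdp_bytes / 1e6:.1f} MB")  # ~4.5 MB

# If the phone caps the receive buffer below the BDP, the flow becomes
# window-limited regardless of the congestion control algorithm in use.
RCV_BUF_CAP_BYTES = 3e6  # hypothetical per-phone cap
max_tput_bps = RCV_BUF_CAP_BYTES * 8 / RTT_S
print(f"Window-limited throughput: {max_tput_bps / 1e9:.2f} Gbps")  # ~1.2 Gbps
```

Under these assumed numbers, a 3 MB cap holds the flow to roughly 1.2 Gbps, well below the 1.8 Gbps the link can carry, which matches the qualitative claim in insight (ii).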
Lim, H., Lee, J., Lee, J., Sathyanarayana, S. D., Kim, J., Nguyen, A., … Ha, S. (2024). An Empirical Study of 5G: Effect of Edge on Transport Protocol and Application Performance. IEEE Transactions on Mobile Computing, 23(4), 3172–3186. https://doi.org/10.1109/TMC.2023.3274708