Independent Researcher.
Received on 05 February 2021; revised on 19 March 2021; accepted on 22 March 2021
While containerization and Kubernetes have made cloud-native application deployment nearly ubiquitous, Java-based applications face unique challenges in the containerized world. Kubernetes provides strong scalability, portability, and resource efficiency; however, Java's memory management and garbage collection processes often become a performance bottleneck. In this study, we investigate how best to optimize containerized Java-based applications in Kubernetes environments.
Baseline and optimized configurations were compared using a controlled experimental method. The key optimization strategies were JVM tuning, resource allocation policies, and the use of Kubernetes-native tools such as horizontal pod autoscaling. Performance metrics (response time, throughput, and resource utilization) were benchmarked under various traffic conditions. The results showed significant improvements: throughput increased by 30%, CPU and memory utilization dropped by 15% and 18%, respectively, and response time decreased by 25%.
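As an illustration of the kinds of settings these optimization strategies involve, the sketch below combines container-aware JVM tuning flags with Kubernetes resource requests/limits and a horizontal pod autoscaler. This is a minimal, hypothetical configuration: the deployment name, image, resource values, and scaling thresholds are placeholders, not the study's actual setup.

```yaml
# Hypothetical Deployment fragment: resource allocation + container-aware JVM tuning
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-service                      # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-service
  template:
    metadata:
      labels:
        app: java-service
    spec:
      containers:
      - name: app
        image: example/java-service:1.0   # placeholder image
        env:
        - name: JAVA_TOOL_OPTIONS
          # Size the heap relative to the container's memory limit and use G1 GC
          value: "-XX:MaxRAMPercentage=75.0 -XX:+UseG1GC"
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
---
# Horizontal pod autoscaler scaling on average CPU utilization
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: java-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Tying the JVM heap to the container's memory limit via `MaxRAMPercentage` avoids the classic mismatch between a fixed `-Xmx` and the pod's cgroup limit, while the autoscaler handles traffic variation at the pod level rather than inside the JVM.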
The key takeaway for enterprises is that, when it comes to scaling and resource handling, Kubernetes outperforms traditional VM-based deployments, and the findings offer actionable insights for adopting it. Future research could study advanced scaling techniques and production environments larger than those examined here.
Keywords: Containerized Java applications; Kubernetes ecosystems; JVM optimization; cloud-native performance; resource efficiency; horizontal pod autoscaling
Nagaraj Parvatha. Containerized solutions for high performance java-based applications in Kubernetes ecosystems. International Journal of Science and Research Archive, 2021, 02(01), 186-194. Article DOI: https://doi.org/10.30574/ijsra.2021.2.1.0042