
Case Studies in Thermal Engineering 63 (2024) 105386

Contents lists available at ScienceDirect

Case Studies in Thermal Engineering


journal homepage: www.elsevier.com/locate/csite

Experimental study of the performance of liquid cooling tank used for single-phase immersion cooling data center

Xueqiang Li a,b, Shentong Guo a, Haiwang Sun b, Shen Yin a, Shengchun Liu a,c,*, Zhiming Wang a,b

a Key Laboratory of Refrigeration Technology of Tianjin, International Centre in Fundamental and Engineering Thermophysics, Tianjin University of Commerce, Tianjin, 300134, PR China
b Tianjin Tier Technology Co., Ltd., Tianjin, 300450, PR China
c Key Laboratory of Efficient Utilization of Low and Medium Grade Energy, Tianjin University, Tianjin, 300350, PR China

ARTICLE INFO

Keywords: Liquid cooling tank; Temperature uniformity; Single-phase immersion cooling; Data center

ABSTRACT

High-efficiency cooling methods for data centers (DCs), such as single-phase immersion cooling systems, have received increasing attention. In practical applications, several servers are placed in a liquid cooling tank, where the coolant should be evenly distributed to every server. The performance of the liquid cooling tank is therefore crucial to the operation of the DC, especially for the temperature uniformity among GPUs. In this study, a liquid cooling tank was designed, and the impacts of inlet temperature, coolant flowrate, and server load ratio on its performance were experimentally investigated. Results showed that decreasing the coolant inlet temperature and the GPU load ratio, and increasing the flowrate, all help to decrease the average GPU temperature. The decrease in dynamic viscosity and the increase in thermal conductivity at higher temperatures are also helpful to the heat exchange. When the inlet temperature increased from 20 °C to 50 °C, the outlet temperature increased by 30 °C; however, the average GPU temperature increased by only 21 °C. The designed liquid cooling tank showed good temperature uniformity: the temperature deviation was within 5 °C and the square deviation was in the range of 1.0–1.5.

* Corresponding author: Tianjin Key Laboratory of Refrigeration Technology, International Centre in Fundamental and Engineering Thermophysics, Tianjin University of Commerce, 300134, Tianjin, PR China. E-mail address: [email protected] (S. Liu).

https://doi.org/10.1016/j.csite.2024.105386
Received 26 April 2024; Received in revised form 11 September 2024; Accepted 28 October 2024; Available online 30 October 2024.
2214-157X/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

The swift evolution of technologies like big data and cloud computing has spurred a noticeable migration towards high-density, integrated data centers (DCs). However, this trend exacerbates energy challenges [1,2]. Globally, DCs have emerged as formidable consumers of energy [3]: according to an IEA report, they accounted for about 1.15 % of global electricity consumption in 2021 [4]. Notably, the energy consumed by cooling systems constitutes a substantial portion, ranging from 30 % to 50 % of total electricity usage in air-cooled data centers [5]. Enhancing the cooling efficiency of DCs therefore grows increasingly imperative.
Regardless of the type of cooling system, maintaining temperature uniformity for high heat flux components, such as CPUs and GPUs, is vitally important. A lack of temperature uniformity can lead to hot spots, resulting in server shutdowns or, even worse, server burnouts [6]. Many efforts have been made to improve temperature uniformity, although these are currently based mainly on air-cooled data centers [7]. Poor temperature uniformity in a data center primarily stems from three aspects: non-uniform airflow distribution, unbalanced airflow supply and demand, and airflow loss [8].


Non-uniform airflow distribution can be addressed by modifying the layout of the Computer Room Air Conditioners (CRACs) and optimizing the height of the plenum. For instance, VanGilder et al. [9] compared different CRAC layouts and found that placing two CRAC units opposite each other yielded the best performance. Beitelmal [10] determined that maintaining the height of the underfloor plenum within the range of 0.76–0.91 m resulted in superior uniformity and minimized hotspot temperatures. The issue of unbalanced airflow supply and demand can be mitigated by increasing the total volume of air supply and by managing the obstructions in the plenum. For example, Macedo et al. [11] and Alkharabsheh et al. [12] noted that increasing the total air supply volume enhanced the uniformity of airflow distribution; in their research, they observed that CPU temperatures changed by up to 10 °C as the air flowrate increased. Alissa et al. [13] suggested that obstructions should be placed in the "safe path" area, which can reduce the decay of airflow pressure by 16 % and lower the temperature by 12.7 % compared to placing them in the "critical path" area. Airflow loss can be reduced by limiting bypass loss and leakage loss. For instance, Zhou et al. [14] arranged rack baffles and server terminal baffles to decrease bypass loss, resulting in a reduction of hotspot temperature by 0.7 °C. Makwana et al. [15] placed sealing grommets under the racks, which led to a 15.2 % increase in supply airflow.
Enhancing temperature uniformity among high heat flux components is also crucial in single-phase immersion cooling systems. Cheng et al. [16] found that a faster coolant flow speed resulted in a more uniform temperature distribution around the heat sink. In practical systems, several servers are placed in one liquid cooling tank to dissipate heat [17]. However, current research in this area mainly concerns the coolants and performance optimization; Table 1 summarizes the current literature on single-phase immersion cooling systems. For the coolants, the impacts of thermophysical properties and of nanofluids are discussed. For example, Hnayno et al. [18] found that an increase in coolant viscosity from 4.6 to 9.8 mPa·s led to a decrease in cooling performance of about 6 %. Rafati et al. [19] found that an alumina nanofluid displayed superior thermal performance, with temperatures 5.5 °C lower than those
achieved using a pure base fluid under identical operating conditions. For performance optimization, different structures are discussed and evaluated. For example, Muneeshwaran et al. [20] found that the thermal resistance and temperature of the T configuration were reduced by 12.6 % and 0.5–2.8 °C, respectively, compared to the Z configuration. Li et al. [21] discussed the use of heat sinks in single-phase immersion cooling systems. They suggested optimal structures, including fin number, fin thickness, fin height, and substrate thickness, which led to a 19 % reduction in heat sink mass with only a 4.5 % increase in temperature. Kuncoro et al. [22] used the Taguchi method to identify key parameters affecting immersion cooling performance. Their results indicated that cooling fans impacted CPU temperature by 71.3 % and energy consumption by 75.0 %. Even though some works, as shown in Table 1, are conducted with several servers in the tank, the temperature uniformity among the CPUs or GPUs is ignored.

Table 1
Current studies on the performance of single-phase immersion cooling data centers.

Hnayno et al. [18]. Research content: coolant. Coolants: S5X; SmartCoolant; ThermaSafe R. Heat load: server 620 W. Performance indicator: average temperature. Parameters/variables: flowrate 0.35–0.55 LPM; coolant inlet temperature 30–45 °C. Remarks: the dynamic viscosity of the coolant increased from 4.6 to 9.8 mPa·s, and the average temperature of the IT equipment increased by 6 %.

Chen et al. [2]. Research content: coolant. Coolants: three different coolants. Heat load: CPU 270 W × 2; GPU 170 W × 2; server 1421.5 W. Performance indicators: Nusselt number; pressure drop. Parameters/variables: flowrate 5.1 L/s; coolant inlet temperature 20 °C. Remarks: the average Nusselt number and pressure drop of a single-phase immersion liquid cooling tank using electronic fluorinated liquids with low dynamic viscosity increased by 57.3 % and decreased by 59 %, respectively.

Wang et al. [23]. Research content: coolant. Coolants: mineral oil; synthetic oil; silicone oil. Heat load: CPU 200 W × 2; server 860 W. Performance indicators: Nusselt number; pressure drop. Parameters/variables: flowrate 67.36 g/s; coolant inlet temperature 30 °C. Remarks: compared to mineral oil and synthetic oil, the system using silicone oil, which has a higher dynamic viscosity, exhibits the largest reduction of 24.4 % in average Nusselt number, while flow resistance losses increase 7.4-fold.

Luo et al. [24]. Research content: coolant. Coolant: mineral-oil-based SiC nanofluids. Heat load: CPU 60 W; server 60 W. Performance indicator: convective heat transfer coefficient. Parameters/variables: flow velocity 0.3 m/s; coolant inlet temperature 25 °C. Remarks: under the same working conditions, the convective heat transfer coefficient of the SiC nanofluid is about 30 % higher than that of pure mineral oil.

Huang et al. [25]. Research content: performance optimization, server level. Coolant: electronic fluorinated liquids. Heat load: CPU 270 W × 2; server 1331.9 W. Performance indicators: PUE; coolant average temperature. Parameters/variables: flowrate 3.7–67.82 LPM; coolant inlet temperature 20–40 °C. Remarks: compared to buoyancy-driven SPIC systems, the average coolant temperature and PUE of pump-driven single-phase immersion cooling systems are reduced by 55.5 % and 11.6 %, respectively.

Eiland et al. [26]. Research content: performance optimization, server level. Coolant: mineral oil. Heat load: CPU 95 W × 2; server 190 W. Performance indicators: pPUE; thermal resistance. Parameters/variables: flowrate 0.5–2.5 LPM; coolant inlet temperature 30–50 °C. Remarks: compared with baseline tests using traditional air cooling, the technology shows a 34.4 % reduction in system thermal resistance; the cooling loop achieved partial power usage effectiveness (pPUEcooling) values as low as 1.03.

Cheng et al. [16]. Research content: performance optimization, server level. Coolant: Novec 7100. Heat load: CPU 95 W; server 95 W. Performance indicator: CPU temperature. Parameters/variables: flow velocity 0.4–1.2 m/s. Remarks: (1) aluminum and copper heat sinks are compatible with the working fluid; (2) a higher flowrate produced a more uniform temperature distribution.

Muneeshwaran et al. [20]. Research content: performance optimization, server level. Coolant: FC-40. Heat load: CPU 200–600 W; server 200–600 W. Performance indicators: thermal resistance; temperature. Parameters/variables: flowrate 1–3 LPM; coolant inlet temperature 15–35 °C. Remarks: the thermal resistance and temperature of the T configuration are reduced by 12.6 % and 0.5–2.8 °C, respectively, compared to the Z configuration.

Li et al. [21]. Research content: performance optimization, server level. Coolant: EC-110. Heat load: CPU 120 W × 2; server 270 W. Performance indicators: CPU maximum temperature; power. Parameters/variables: flowrate 0.1–0.5 m³/h; coolant inlet temperature 34 °C. Remarks: after optimization, the maximum temperature rise of the server is reduced from 45.6 °C to 35.2 °C, and power consumption is reduced from 1.2 kW to 0.9 kW.

Zhang et al. [27]. Research content: performance optimization, server level. Coolants: Noah@3000A; PAO-4. Heat load: CPU 300 W; server 300 W. Performance indicator: thermal resistances. Parameters/variables: flowrate 1–3 LPM; coolant inlet temperature 35 °C. Remarks: immersing the cold plate in liquid enhances convective heat transfer, reducing thermal resistances by 7.2 % and 9.4 % for Noah@3000A and PAO-4 at low flow rates.

Shrigondekar et al. [28]. Research content: performance optimization, server level. Coolants: FC-40; PAO-6. Heat load: CPU 200–600 W; server 200–600 W. Performance indicators: server temperature; thermal resistances. Parameters/variables: flowrate 1–3 LPM; coolant inlet temperature 15–35 °C. Remarks: the thermal resistance and temperature of the server rise monotonically with increasing bypass flow; without bypass, the cooling performance of the system is improved.

Feng et al. [29]. Research content: performance optimization, server level. Coolant: HFE-7100. Heat load: CPU 20–40 W/cm². Performance indicator: chip temperature difference. Parameters/variables: flowrate 0.3–2.8 LPM; coolant inlet temperature 20 °C. Remarks: an embedded gradient-distributed microneedle fin array was designed and fabricated, reducing the surface temperature difference by about 30 %.

Gandhi et al. [30]. Research content: performance optimization, server level. Coolant: EC-100. Heat load: CPU 95 W × 2; server 500 W. Performance indicator: CPU temperature. Parameters/variables: flowrate 0.5–1 GPM; coolant inlet temperature 15–25 °C. Remarks: an oil inlet temperature of 45 °C and a volume flowrate of 1 GPM keep the CPU temperature around 70 °C and enable a more even temperature distribution inside the server.

Bansode et al. [31]. Research content: performance optimization, server level. Coolant: Opticool 872,552. Heat load: CPU 100–110 W; server 100–110 W. Performance indicator: average server temperature. Parameters/variables: coolant inlet temperature 30–60 °C; environmental temperature 25–55 °C. Remarks: the average server temperature increased drastically with the environmental temperature, while the power consumption of the server remained unchanged.

Huang et al. [32]. Research content: performance optimization, tank level. Coolant: electronic fluorinated liquids. Heat load: CPU 350 W × 2; server 850 W × 2. Performance indicators: chip temperature; thermal resistance; PUE. Parameters/variables: flowrate 2–8 LPM; coolant inlet temperature 18 °C. Remarks: the anti-gravity flow scheme reduced the chip housing temperature and thermal resistance by 33.8 % and 55.6 %, respectively, while reducing the PUE by 1.4 %.

Matsuoka et al. [33]. Research content: performance optimization, tank level. Coolants: silicone oil 20; silicone oil 50; soybean oil; FC3283; FC43. Heat load: CPU 140 W × 2; server 440 W × 32. Performance indicator: PUE. Parameters/variables: flow velocity 0.005–1 m/s; coolant inlet temperature 15–35 °C. Remarks: this cooling technology reduces the system PUE below 1.04 and improves space utilization and system stability.
In the open literature, the temperature difference among CPUs or GPUs varies significantly under different operating conditions, from as little as 1.5 °C to as much as 17 °C [30,34]. For practical applications, the temperature uniformity needs to be clarified, given that a liquid cooling tank holds many high-power-density CPUs or GPUs and that poor temperature uniformity can lead to server shutdown or even burnout; in addition, the flow distribution of the coolant in the tank largely affects the temperature uniformity. To bridge this knowledge gap, a liquid cooling tank is designed and its performance experimentally investigated, with 16 high-power-density servers arranged inside (700–2800 W per server). The impacts of inlet flowrate, inlet temperature, and server load ratio on the average GPU temperature, the outlet temperature, and the deviation and square deviation of the GPU temperatures are explored. The conclusions obtained in this study provide guidance on temperature uniformity characteristics and on the design of liquid cooling tanks.

2. Design of the liquid cooling tank and system description

2.1. Design of the liquid cooling tank

To design the liquid cooling tank, the performance of a single server is first investigated experimentally to obtain the variation of temperature and pressure with coolant flowrate. A simplified CFD model of the single server is then established and validated. Finally, the structure of the tank, as well as the system, is designed and manufactured.

(1) Experimental study for the single server

A high-power-density server, equipped with eight GPUs (model: RTX 4090 24 GB) and two CPUs (model: Intel Platinum 8352V), is used, as depicted in Fig. 1(a). The total power of one server is as high as 2800 W. To improve the distribution of coolant flow, baffles have been integrated into the design. Additionally, pin-fin heat sinks on the CPUs and GPUs are used to facilitate more effective heat dissipation [21]. The detailed specifications of the server's components are listed in Table 2. An experiment is first conducted to explore the operational characteristics of this server. During the experiment, the inlet temperature and the load ratio are set to 25 °C and 100 %, respectively, and the coolant flowrate ranges from 0.13 to 2.19 m³/h. The average temperature of the eight GPUs and the pressure loss within the server are monitored and used to evaluate the server performance.

Fig. 1. Studied server structure and the simplified process.

Table 2
Detailed parameters for the server.

Item | Type | Description | Number
Server | – | Size: 340 × 177 × 683 mm; maximum heat load 2800 W | 1
GPU | RTX 4090 24 GB | Size: 26.5 × 24 × 0.6 mm; maximum heat load 350 W | 8
CPU | Intel Platinum 8352V | Size: 40 × 35 × 9 mm | 2
GPU heat sink | Base plate | Size: 200 × 122 × 5 mm | 8
GPU heat sink | Fins | Size: 200 × 28 × 0.4 mm | 107
Memory bank | Samsung DDR4 ECC | Size: 140 × 30 × 2 mm | 16
System drive | Gen3 Samsung PM983 | Size: 100.2 × 69.85 × 6.8 mm | 1
Digital hard drive | Gen4 Samsung PM9A3 | Size: 22 × 110 × 4 mm | 3
Power supply | CRPS 2400 W | Size: 185 × 73.5 × 39 mm | 4

(2) Server simplification

Since a server contains many components, it is impractical to account for all of them when designing the liquid cooling tank. In this study, 16 servers are to be arranged in the liquid cooling tank, so server simplification is necessary. Based on the experimental data, the relationships between the flowrate and both the average temperature of the eight GPUs and the pressure loss within the server are fitted as empirical equations, as shown in Eqn (1). The server is then simplified as an equivalent model in Computational Fluid Dynamics (CFD) simulations, as shown in Fig. 1(b). This model allows temperature variations and pressure losses within the system to be simulated under different flowrates. The simulation results have been validated against the experimental data, as illustrated in Fig. 1(c). The validated model enables the streamlined design of liquid cooling tanks by providing a more manageable representation of the server's thermal behavior.

T = 0.0697q² − 1.3246q + 61.257
ΔP = 0.03q² + 0.538q + 0.14    (1)

where q is the coolant flowrate (m³/h), T is the average temperature of the eight GPUs (°C), and ΔP is the pressure loss within the server.
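
As an illustration of how such an equivalent server model can be built, the sketch below (assuming NumPy; the sample flowrates and the even split of the tank flowrate across the 16 servers are our own illustrative assumptions, not values reported in the paper) fits second-order polynomials of the form of Eqn (1) to measured (q, T, ΔP) data and evaluates them at a design point:

```python
import numpy as np

# Illustrative single-server samples: the paper reports only the fitted
# coefficients of Eqn (1), so these (q, T, dP) points are generated from
# that fit over the tested flowrate range (0.13-2.19 m^3/h).
q = np.array([0.13, 0.5, 1.0, 1.5, 2.19])   # coolant flowrate, m^3/h
T = 0.0697 * q**2 - 1.3246 * q + 61.257     # average GPU temperature, degC
dP = 0.03 * q**2 + 0.538 * q + 0.14         # pressure loss within the server

# Quadratic fits T(q) and dP(q), recovering the coefficients of Eqn (1).
coef_T = np.polyfit(q, T, 2)   # ~ [0.0697, -1.3246, 61.257]
coef_P = np.polyfit(q, dP, 2)  # ~ [0.03, 0.538, 0.14]

# Evaluate the equivalent server model at a design point, assuming the
# tank flowrate (10 m^3/h) splits evenly across the 16 servers.
q_server = 10.0 / 16
print(np.polyval(coef_T, q_server))  # predicted average GPU temperature
print(np.polyval(coef_P, q_server))  # predicted per-server pressure loss
```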
(3) Design of liquid cooling tank

The liquid cooling tank's primary function is to ensure that the coolant is distributed evenly across all servers within the cooling system. In this study, 16 servers are organized into two rows in the liquid cooling tank. Using the simplified server model, the impacts of different parameters on the flow distribution in the liquid cooling tank are analyzed, including the manifold, the branch pipes, the porous panel, the separation chamber, and the flowrate, ensuring that the coolant effectively reaches all parts of each server. A more detailed description of the analysis process can be found in our previous study [35]. Based on this analysis, the detailed parameters of the tank are given in Table 3. In addition, the designed tank and the numbering of the servers and GPUs are shown in Fig. 2. Since there are 128 GPUs in the liquid cooling tank, the variation of the GPU temperatures is employed to assess the performance of the liquid cooling tank.

Table 3
Detailed parameters for the liquid cooling tank.

Item | Type | Size (mm) | Number
Tank | Length × width × height | 1494 × 926 × 850 | 1
Porous panel | Length × width × height | 1488 × 802 × 2 | –
Porous panel | Holes | R15 | 128
Delivery pipe | Manifold | R26 × 1344 | 2
Delivery pipe | Branch pipes | R16 × 708 | 2
Delivery pipe | Inlet | R54 | 1
Delivery pipe | Outlets | R12 | 216

Fig. 2. Schematic of liquid cooling tank.

2.2. System description

Based on the designed liquid cooling tank, an experimental system is established, comprising the liquid cooling tank, 16 high-power-density servers, a coolant pump, a flowmeter, and a cooling unit, as shown in Fig. 3. During operation, the pump drives the coolant into the liquid cooling tank, where it is evenly distributed to each server through the delivery pipe and porous panel. The heat produced in the servers is thus removed continuously by the coolant circulation, and the heat carried by the high-temperature coolant is rejected to the ambient in the coolant cooling unit. The detailed parameters of each component, together with the measurement uncertainties, are listed in Table 4. Thermocouples are arranged at the inlet and outlet pipes to measure the coolant temperature. The GPU temperatures are read out on the computer, since digital temperature sensors are embedded on the GPUs. During the experiment, the impacts of the coolant flowrate, the coolant inlet temperature, and the server load ratio on the average GPU temperature are investigated.

Fig. 3. Schematic of the experimental system.

Table 4
Detailed parameters for the experimental system.

Item | Type | Range | Uncertainty
Thermocouples | TT-K-30 | −60 to 300 °C | ±0.1 °C
Temperature for GPUs [36] | Digital sensor | −40 to 125 °C | ±1 °C
Flowmeter | KL-LUXB | 0–46 m³/h | ±0.01 m³/h
Cooling unit | COF-40AP | −30 to 100 °C | ±0.1 °C
Coolant pump | TYPE40-30 | 5–32 m³/h | –
Data acquisition | Yokogawa-GP10 | – | –
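
Given these instrument uncertainties, the coolant-side heat load deduced from the flowrate and the inlet-outlet temperature difference carries only a small propagated error. A minimal sketch of the estimate (the root-sum-square propagation is our own assumption; property uncertainties are neglected):

```python
import math

# Propagate the Table 4 instrument uncertainties into the coolant-side
# heat load Q = rho * q * cp * dT.
q, dq = 10.0, 0.01        # flowrate and flowmeter uncertainty, m^3/h
dT = 11.0                 # typical inlet-outlet temperature difference, degC
ddT = math.sqrt(2) * 0.1  # from +/-0.1 degC on each of the two thermocouples

rel_q = dq / q
rel_dT = ddT / dT
rel_Q = math.hypot(rel_q, rel_dT)  # root-sum-square of relative errors
print(f"relative uncertainty of Q: {rel_Q:.2%}")  # ~1.3 %
```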
The coolant used in this study is "BINGYI 797", produced by Tianjin Tier Technology Co., Ltd. Its thermophysical properties are listed in Table 5.

Table 5
Thermophysical properties of the coolant.

Temperature (°C) | Density (kg/m³) | Heat capacity (J/(kg·°C)) | Dynamic viscosity (Pa·s) | Thermal conductivity (W/(m·°C))
20 | 794 | 1965 | 0.0117 | 0.34
40 | 786 | 2051 | 0.0061 | 0.36
60 | 775 | 2154 | 0.0036 | 0.38
80 | 762 | 2253 | 0.0024 | 0.42

2.3. Key performance indicators

To assess the performance of the liquid cooling tank in the experiment, the average GPU temperature, the outlet temperature of the liquid cooling tank, and the deviation and square deviation of the GPU temperatures are used.
The average GPU temperature (T̄) is the mean temperature of the 128 GPUs and is obtained directly from the computer.
The outlet temperature is obtained through the data acquisition system using thermocouples.
The temperature deviation is the difference between the maximum and the minimum GPU temperature. It indicates the spread in temperature across the different GPUs in the system and is calculated as follows:

Td = max(Ti, i = 1, 2, …, 128) − min(Ti, i = 1, 2, …, 128)    (2)

The square deviation of the GPU temperatures assesses the temperature uniformity in the liquid cooling tank. It is a measure of how much the GPU temperatures vary from the average value. A relatively low deviation and square deviation suggest that the
cooling system maintains uniform temperature across all GPUs, which is desirable for consistent performance and reliability. The
square deviation can be calculated as follows:

σS.D = (1/N) · Σi=1..N (Ti − T̄)²  (N = 128)    (3)

where N is the number of GPUs, Ti is the temperature of GPU i, and T̄ is the mean of all GPU temperatures.
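
For concreteness, a minimal sketch (assuming NumPy; the simulated readings are made up for illustration) of how the three indicators follow from Eqns (2) and (3):

```python
import numpy as np

def tank_kpis(gpu_temps):
    """Return mean GPU temperature, temperature deviation Td (Eqn (2)),
    and square deviation (Eqn (3), a population variance over N GPUs)."""
    t = np.asarray(gpu_temps, dtype=float)
    t_mean = t.mean()
    t_dev = t.max() - t.min()            # Eqn (2)
    sq_dev = np.mean((t - t_mean) ** 2)  # Eqn (3)
    return t_mean, t_dev, sq_dev

# Illustrative readings for the 128 GPUs; the experiments report
# deviations within ~5 degC and square deviations of about 1.0-1.5.
rng = np.random.default_rng(0)
temps = 57.0 + rng.normal(0.0, 1.2, size=128)
print(tank_kpis(temps))
```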

3. Results and discussion

3.1. Results of the system operation

Fig. 4 illustrates the temperature variation of the liquid cooling system during operation. The average GPU temperature increased rapidly at the beginning of operation, reflecting the heat generated by the GPUs. Meanwhile, the outlet temperature changed only slightly within the first 150 s, owing to the coolant circulation time. After this initial increase, the average GPU temperature stabilized at around 57 °C after 200 s. This plateau indicated that the heat removed by the coolant balanced the heat generated by the GPUs, reaching a steady-state condition. The outlet temperature, however, increased more slowly than the GPU temperature and stabilized at 36 °C at around 1000 s. This slower increase was likely due to the thermal mass of the coolant and the system's ability to continuously remove heat. The temperature difference between the inlet and outlet (around 11 °C) reflected the amount of heat absorbed by the coolant as it passed through the system. Regarding temperature uniformity, the maximum temperature deviation occurred during the start-up process (within 200 s), owing to the different heat loads of the GPUs during
this stage. Once all the GPUs reached the maximum heat load, the temperature deviation was within 4.8 °C, with a square deviation of 1.44, indicating that the liquid cooling tank provided good temperature uniformity.

Fig. 4. Temperature variation during the operation (Inlet temperature = 25 °C, flowrate = 10 m³/h, load ratio = 100 %).
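
As a plausibility check on this reading (a sketch under stated assumptions: coolant density and heat capacity linearly interpolated from Table 5 at roughly 30 °C, and all 16 servers at their 2800 W maximum), the heat absorbed by the coolant implied by the 11 °C inlet-outlet difference can be compared with the installed load:

```python
# Energy-balance check: heat absorbed by the coolant vs. installed load.
rho = 790.0      # kg/m^3, density interpolated from Table 5 at ~30 degC
cp = 2008.0      # J/(kg*degC), heat capacity interpolated at ~30 degC
q = 10.0 / 3600  # m^3/s, tank coolant flowrate (10 m^3/h)
dT = 11.0        # degC, measured inlet-outlet temperature difference

Q_coolant = rho * q * cp * dT  # heat carried away by the coolant, W
Q_servers = 16 * 2800          # maximum installed server heat load, W
print(f"coolant: {Q_coolant / 1e3:.1f} kW, servers: {Q_servers / 1e3:.1f} kW")
# ~48.5 kW vs 44.8 kW: the same order of magnitude, consistent with a
# closed balance given measurement and property uncertainties.
```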

3.2. Impact of different parameters on tank performance

Fig. 5 shows the impact of inlet temperature on the performance of the liquid cooling tank. The outlet temperature increased linearly with the inlet temperature, but the difference between them remained around 11 °C, indicating that the heat was still removed effectively. The variation of the average GPU temperature was different, however: when the inlet temperature increased from 20 °C to 50 °C, the outlet temperature increased by 30 °C, whereas the average GPU temperature increased by only 21 °C. This phenomenon was mainly due to the variation of the thermophysical properties of the coolant. From Table 5, the dynamic viscosity decreased by 79.5 % and the thermal conductivity increased by 23.5 % as the temperature increased from 20 °C to 80 °C, which benefited both the coolant flow and the heat exchange. Moreover, the average GPU temperature was only 73.5 °C even when the inlet temperature reached 50 °C, indicating that such a cooling system can take full advantage of free cooling, with correspondingly large energy savings. Regarding temperature uniformity, the temperature deviation was within 4.8 °C and the square deviation was around 1.0–1.5, indicating that the tank provided a good flow distribution and that the inlet temperature had only a slight impact on the temperature uniformity.

Fig. 5. Impact of inlet temperature on the performance of liquid cooling tank (Flowrate = 10 m³/h, load ratio = 100 %).
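
The quoted property changes follow directly from Table 5; a minimal sketch (with our own helper names, and linear interpolation between the tabulated points as an assumption):

```python
import numpy as np

# Coolant properties from Table 5.
temp = np.array([20.0, 40.0, 60.0, 80.0])        # degC
mu = np.array([0.0117, 0.0061, 0.0036, 0.0024])  # dynamic viscosity, Pa*s
k = np.array([0.34, 0.36, 0.38, 0.42])           # thermal conductivity, W/(m*degC)

# Relative changes from 20 degC to 80 degC, as cited in the text.
print(f"viscosity decrease: {(mu[0] - mu[-1]) / mu[0]:.1%}")  # 79.5%
print(f"conductivity increase: {(k[-1] - k[0]) / k[0]:.1%}")  # 23.5%

# Properties at an intermediate coolant temperature (interpolation assumption).
print(np.interp(50.0, temp, mu), np.interp(50.0, temp, k))
```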
Fig. 6 shows the impact of coolant flowrate on the performance of the liquid cooling tank. With increasing coolant flowrate, the heat produced in the servers is removed more quickly, and therefore both the average GPU temperature and the outlet temperature decreased. It should be noted that when the flowrate increased from 5 m³/h to 10 m³/h, the average GPU temperature dropped sharply from 59.8 °C to 54.9 °C, a decrease of 4.9 °C, indicating that increasing the flowrate is an effective way to decrease the GPU temperature. However, once the flowrate was large enough, the further reduction of GPU temperature was modest: in this study, the average GPU temperature decreased by only 2.2 °C when the flowrate increased from 10 m³/h to 32 m³/h. Similar results can be found in Luo's work [37]. Moreover, a larger coolant flowrate implies higher pump energy consumption. A large flowrate did, however, enhance the temperature uniformity: when the flowrate was 5 m³/h, the temperature deviation was as high as 6.3 °C, with a square deviation of 1.58, while these values decreased to 4.1 °C and 1.15, respectively, when the flowrate increased to 32 m³/h. Therefore, considering the average GPU temperature, a flowrate of 10–15 m³/h is recommended for such a system.

Fig. 6. Impact of coolant flowrate on the performance of liquid cooling tank (Inlet temperature = 25 °C, load ratio = 100 %).
Fig. 7 shows the impact of the GPU load ratio on the performance of the liquid cooling tank. Both the average GPU temperature and the outlet temperature increased linearly with the server load ratio. The variation of the average GPU temperature was more pronounced, because the heat produced differed significantly between load ratios. For example, the average GPU temperature increased by 21.0 °C when the load ratio increased from 25 % to 100 %, while the outlet temperature increased by only 7.9 °C, mainly because the coolant flowrate was unchanged. Regarding temperature uniformity, the temperature deviation and the square deviation were within 5 °C and 1.5, respectively, indicating that the load ratio had only a slight impact on the temperature uniformity.

Fig. 7. Impact of load ratio on the performance of liquid cooling tank (Inlet temperature = 25 °C, flowrate = 10 m³/h).

3.3. Characteristics of the temperature distribution

Fig. 8 shows the temperature distribution of the 128 GPUs. Overall, the temperature deviations under all conditions were within 5 °C, indicating that the liquid cooling tank achieved a good flow distribution. However, hot spots can still be found on some GPUs, and they fall into two categories. The first occurs under all operating conditions, e.g., Server-2-VI and Server-15-VI; this can be attributed to locally insufficient flowrate and imperfections in the tank design. The second occurs only in some cases, e.g., Server-7-VI, Server-14-II, and Server-9-III; these may be due to overclocking of the GPU and the limited accuracy of the digital temperature sensors in the GPU cores. Both types of hot spot should be carefully considered when designing a liquid cooling tank.

Fig. 8. Temperature distribution in the liquid cooling tank for different conditions.

3.4. Discussion

This study experimentally investigated the performance of a liquid cooling tank used in a data center, with particular attention to the temperature uniformity of the GPUs. Several aspects should be addressed in future work: (1) energy consumption analysis and optimization: the system includes a coolant pump, a water pump, and fans in the cooling unit; to maintain safe server operation and obtain the lowest PUE, the energy consumption should be analyzed and optimized for different zones and different control strategies. (2) Since the digital temperature sensors in the GPU cores are used to obtain the temperatures, more accurate temperature measurement methods should be explored while ensuring the safe operation of the server and the data center.

4. Conclusions

The performance of a liquid cooling tank used in a single-phase immersion cooling data center has been experimentally investigated. The impacts of the inlet temperature, the coolant flowrate, and the server load ratio on the performance of the liquid cooling tank are discussed, with the average GPU temperature, the outlet temperature, and the deviation and square deviation of the GPU temperatures employed as key performance indicators. The main conclusions are as follows:

(1) Decreasing the coolant inlet temperature and the load ratio, and increasing the flowrate, all help to decrease the average GPU temperature.
(2) The decrease of dynamic viscosity and the increase of thermal conductivity at higher coolant temperatures are also helpful to the heat exchange. When the inlet temperature increased from 20 °C to 50 °C, the outlet temperature increased by 30 °C; however, the average GPU temperature increased by only 21 °C.
(3) The designed liquid cooling tank showed good temperature uniformity, with the temperature deviation within 5 °C and the square deviation in the range of 1.0–1.5.

CRediT authorship contribution statement

Xueqiang Li: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Data curation. Shentong Guo:
Writing – review & editing, Data curation, Investigation. Haiwang Sun: Writing – review & editing, Investigation, Conceptualization.
Shen Yin: Writing – review & editing. Shengchun Liu: Methodology, Conceptualization, Supervision. Zhiming Wang: Investigation,
Data curation.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to
influence the work reported in this paper.

Acknowledgement

This work is supported by the Tianjin Science and Technology Commissioner Program (24YDTPJC00650), the National Natural Science Foundation of China (52476085), the Tianjin Natural Science Foundation (23JCZDJC00250), and the Science and Technology Program of Tianjin (No. 2021ZD031). The financial support is sincerely appreciated.

Data availability

Data will be made available on request.

References

[1] H. Zhang, S. Shao, H. Xu, H. Zou, C. Tian, Free cooling of data centers: a review, Renew. Sustain. Energy Rev. 35 (2014) 171–182, https://doi.org/10.1016/j.
rser.2014.04.017.
[2] X. Chen, Y. Huang, S. Xu, C. Bao, Y. Zhong, Y. Chen, C. Zhang, Thermal performance evaluation of electronic fluorinated liquids for single-phase immersion
liquid cooling, Int. J. Heat Mass Tran. 220 (2024) 124951, https://doi.org/10.1016/j.ijheatmasstransfer.2023.124951.
[3] Y. Lin, T.H. Wang, C.C. Wang, Pool boiling performance for Galden® HT 55 subject to smooth, pin fin, and sintered pin fin heat sink, Int. J. Therm. Sci. 202 (2024) 109090, https://doi.org/10.1016/j.ijthermalsci.2024.109090.
[4] G. Kamiya, Data Centres and Data Transmission Networks, IEA, Paris, 2022. https://www.iea.org/reports/data-centres-and-data-transmission-networks.
[5] K. Haghshenas, S. Taheri, M. Goudarzi, Infrastructure aware heterogeneous-workloads scheduling for data center energy cost minimization, IEEE Trans. Cloud
Comput. 10 (2022) 972–983, https://doi.org/10.1109/TCC.2020.2977040.


[6] T.F. Yang, Y.C. Chen, B.L. Chen, C.H. Li, W.M. Yan, Numerical study of fluid flow and temperature distributions in a data center, Case Stud. Therm. Eng. 28
(2021) 101405, https://doi.org/10.1016/j.csite.2021.101405.
[7] M. Kuzay, A. Dogan, S. Yilmaz, O. Herkiloglu, A.S. Atalay, A. Cemberci, C. Yilmaz, E. Demirel, Retrofitting of an air-cooled data center for energy efficiency,
Case Stud. Therm. Eng. 36 (2022) 102228, https://doi.org/10.1016/j.csite.2022.102228.
[8] R. Zhao, Y. Du, X. Yang, Z. Zhou, W. Wang, X. Yang, A critical review on the thermal management of data center for local hotspot elimination, Energy Build. 297
(2023) 113486, https://doi.org/10.1016/j.enbuild.2023.113486.
[9] J.W. VanGilder, Z.R. Sheffer, X.S. Zhang, C.T. O’Kane, The effect of under-floor obstructions on data center perforated tile airflow. International Electronic
Packaging Technical Conference and Exhibition, 2012, pp. 505–510, https://doi.org/10.1115/IPACK2011-52127.
[10] A.H. Beitelmal, Numerical investigation of data center raised-floor plenum, ASME Int. Mech. Eng. Congress Exposition (2015), https://doi.org/10.1115/
IMECE2015-50884. V08AT10A046.
[11] D. Macedo, R. Godina, P. Dinis Gaspar, P.D. da Silva, M. Trigueiros Covas, A parametric numerical study of the airflow and thermal performance in a real data
center for improving sustainability, Appl. Sci. 9 (2019) 3850, https://doi.org/10.3390/app9183850.
[12] S.A. Alkharabsheh, B.G. Sammakia, S. Shrivastava, R. Schmidt, A numerical study for contained cold aisle data center using CRAC and server calibrated fan
curves, ASME Int. Mech. Eng. Congress Exposition (2013), https://doi.org/10.1115/IMECE2013-65145. V010T11A085.
[13] H. Alissa, S. Alkharabsheh, S. Bhopte, B. Sammakia, Numerical investigation of underfloor obstructions in open-contained data center with fan curves.
Fourteenth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), 2014, pp. 771–777, https://doi.org/
10.1109/ITHERM.2014.6892359.
[14] X. Zhou, X. Yuan, X. Xu, J. Liu, R. Kosonen, C. Liu, Research on the thermal performance of rack-level composite baffle diversion system for data centre, Energy
Efficiency 13 (2020) 1245–1262, https://doi.org/10.1007/s12053-020-09881-5.
[15] Y.U. Makwana, A.R. Calder, S.K. Shrivastava, Benefits of properly sealing a cold aisle containment system. Fourteenth Intersociety Conference on Thermal and
Thermomechanical Phenomena in Electronic Systems (ITherm), 2014, pp. 793–797, https://doi.org/10.1109/ITHERM.2014.6892362.
[16] C.C. Cheng, P.C. Chang, H.C. Li, F.I. Hsu, Design of a single-phase immersion cooling system through experimental and numerical analysis, Int. J. Heat Mass
Tran. 160 (2020) 120203.
[17] B.B. Kanbur, C. Wu, S. Fan, F. Duan, System-level experimental investigations of the direct immersion cooling data center units with thermodynamic and
thermoeconomic assessments, Energy 217 (2021) 119373, https://doi.org/10.1016/j.energy.2020.119373.
[18] M. Hnayno, A. Chehade, H. Klaba, G. Polidori, C. Maalouf, Experimental investigation of a data-centre cooling system using a new single-phase immersion/
liquid technique, Case Stud. Therm. Eng. 45 (2023) 102925, https://doi.org/10.1016/j.csite.2023.102925.
[19] M. Rafati, A.A. Hamidi, M. Shariati Niaser, Application of nanofluids in computer cooling systems (heat transfer performance of nanofluids), Appl. Therm. Eng.
45–46 (2012) 9–14, https://doi.org/10.1016/j.applthermaleng.2012.03.028.
[20] M. Muneeshwaran, Y.C. Lin, C.C. Wang, Performance analysis of single-phase immersion cooling system of data center using FC-40 dielectric fluid, Int.
Commun. Heat Mass Tran. 145 (2023) 106843, https://doi.org/10.1016/j.icheatmasstransfer.2023.106843.
[21] X. Li, Z. Xu, S. Liu, X. Zhang, H. Sun, Server performance optimization for single-phase immersion cooling data center, Appl. Therm. Eng. 224 (2023) 120080,
https://doi.org/10.1016/j.applthermaleng.2023.120080.
[22] I.W. Kuncoro, N.A. Pambudi, M.K. Biddinika, C.W. Budiyanto, Optimization of immersion cooling performance using the Taguchi method, Case Stud. Therm. Eng. 21 (2020) 100729, https://doi.org/10.1016/j.csite.2020.100729.
[23] H. Wang, X. Yuan, K. Zhang, X. Lang, H. Chen, H. Yu, S. Li, Performance evaluation and optimization of data center servers using single-phase immersion
cooling, Int. J. Heat Mass Tran. 221 (2024) 125057, https://doi.org/10.1016/j.ijheatmasstransfer.2023.125057.
[24] Q. Luo, C. Wang, H. Wen, L. Liu, Research and optimization of thermophysical properties of SiC oil-based nanofluids for data center immersion cooling, Int. Commun. Heat Mass Tran. 131 (2022) 105863, https://doi.org/10.1016/j.icheatmasstransfer.2021.105863.
[25] Y. Huang, J. Ge, Y. Chen, C. Zhang, Natural and forced convection heat transfer characteristics of single-phase immersion cooling systems for data centers, Int. J.
Heat Mass Tran. 207 (2023) 124023, https://doi.org/10.1016/j.ijheatmasstransfer.2023.124023.
[26] R. Eiland, J. Edward Fernandes, M. Vallejo, A. Siddarth, D. Agonafer, V. Mulay, Thermal performance and efficiency of a mineral oil immersed server over
varied environmental operating conditions, J. Electron. Packag. 139 (2017) 041005, https://doi.org/10.1115/1.4037526.
[27] Y.D. Zhang, Y.C. Lin, C.C. Wang, Investigation of the single-phase immersion cold plate amid PAO-4 and Noah@3000A – an experimental approach and its
numerical verification, Int. Commun. Heat Mass Tran. 155 (2024) 107509, https://doi.org/10.1016/j.icheatmasstransfer.2024.107509.
[28] H. Shrigondekar, Y.C. Lin, C.C. Wang, Investigations on performance of single-phase immersion cooling system, Int. J. Heat Mass Tran. 206 (2023) 123961,
https://doi.org/10.1016/j.ijheatmasstransfer.2023.123961.
[29] S. Feng, Y. Yan, H. Li, L. Zhang, S. Yang, Thermal management of 3D chip with non-uniform hotspots by integrated gradient distribution annular-cavity micro-
pin fins, Appl. Therm. Eng. 182 (2021) 116132, https://doi.org/10.1016/j.applthermaleng.2020.116132.
[30] D. Gandhi, U. Chowdhury, T. Chauhan, P. Bansode, S. Saini, J.M. Shah, D. Agonafer, Computational analysis for thermal optimization of server for single phase
immersion cooling. ASME 2019 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems,
2019, https://doi.org/10.1115/IPACK2019-6587. V001T02A013.
[31] P.V. Bansode, J.M. Shah, G. Gupta, D. Agonafer, H. Patel, D. Roe, R. Tufty, Measurement of the thermal performance of a single-phase immersion cooled server
at elevated temperatures for prolonged time. International Electronic Packaging Technical Conference and Exhibition, 2018, https://doi.org/10.1115/
IPACK2018-8432. V001T02A010.
[32] Y. Huang, B. Liu, S. Xu, C. Bao, Y. Zhong, C. Zhang, Experimental study on the immersion liquid cooling performance of high-power data center servers, Energy
297 (2024) 131195, https://doi.org/10.1016/j.energy.2024.131195.
[33] M. Matsuoka, K. Matsuda, H. Kubo, Liquid immersion cooling technology with natural convection in data center. IEEE 6th International Conference on Cloud
Networking (CloudNet), 2017, pp. 1–7, https://doi.org/10.1109/CloudNet.2017.8071539.
[34] A. Chhetri, D. Kashyap, A. Mali, C. Agarwal, C. Ponraj, N. Gobinath, Numerical simulation of the single-phase immersion cooling process using a dielectric fluid
in a data server, Mater. Today: Proc. 51 (2022) 1532–1538, https://doi.org/10.1016/j.matpr.2021.10.325.
[35] S.C. Liu, Z.M. Xu, Z.M. Wang, X.Q. Li, H.W. Sun, X.Y. Zhang, H.R. Zhang, Optimization and comprehensive evaluation of liquid cooling tank for single-phase
immersion cooling data center, Appl. Therm. Eng. 245 (2024) 122864, https://doi.org/10.1016/j.applthermaleng.2024.122864.
[36] D.C. Price, M.A. Clark, B.R. Barsdell, R. Babich, L.J. Greenhill, Optimizing performance-per-watt on GPUs in high performance computing, Comput. Sci. Res.
Dev. 31 (2016) 185–193, https://doi.org/10.1007/s00450-015-0300-5.
[37] Q. Luo, C. Wang, C. Wu, Study on heat transfer performance of immersion system based on SiC/white mineral oil composite nanofluids, Int. J. Therm. Sci. 187
(2023) 108203, https://doi.org/10.1016/j.ijthermalsci.2023.108203.
