
Applied Mathematics Letters 163 (2025) 109414


Regular article

A quadrature formula on triangular domains via an interpolation-regression approach

Francesco Dell’Accio (a), Francisco Marcellán (b), Federico Nudo (c),∗

(a) Department of Mathematics and Computer Science, University of Calabria, Rende (CS), Italy
(b) Departamento de Matemáticas, Universidad Carlos III de Madrid, Spain
(c) Department of Mathematics ‘‘Tullio Levi-Civita’’, University of Padova, Italy

∗ Corresponding author.

ARTICLE INFO

Keywords: Weighted quadrature formula; Mock-Waldron points; Triangular domains; Constrained least squares

Article history: Received 28 October 2024; Received in revised form 28 November 2024; Accepted 29 November 2024; Available online 6 December 2024

https://doi.org/10.1016/j.aml.2024.109414
© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

ABSTRACT

In this paper, we present a quadrature formula on triangular domains based on a set of simplex points. This formula is defined via the constrained mock-Waldron least squares approximation. Numerical experiments validate the effectiveness of the proposed method.

1. Constrained mock-Waldron least squares approximation

Let $T \subset \mathbb{R}^2$ be the triangle with vertices $\boldsymbol{v}_1, \boldsymbol{v}_2, \boldsymbol{v}_3$ and non-zero signed area. Let $f$ be an unknown function defined on $T$, and assume that we only know its evaluations on the set of simplex points [1] of degree $n$,
$$S_n := \left\{ \boldsymbol{x}_\alpha := \sum_{i=1}^{3} \frac{\alpha_i}{n}\, \boldsymbol{v}_i \;:\; |\alpha| = n \right\}, \qquad \alpha := (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{N}_0^3, \tag{1}$$
where $|\alpha| := \alpha_1 + \alpha_2 + \alpha_3$. We denote by $\mathbb{P}_n(\mathbb{R}^2)$ the space of all bivariate polynomials of total degree at most $n$ with real coefficients. Notably, there exists a unique polynomial $p \in \mathbb{P}_n(\mathbb{R}^2)$ such that
$$p(\boldsymbol{x}_\alpha) = f(\boldsymbol{x}_\alpha), \qquad \boldsymbol{x}_\alpha \in S_n. \tag{2}$$

Explicit formulas for the Lagrange polynomial basis in barycentric coordinates $\boldsymbol{\lambda} = (\lambda_1, \lambda_2, \lambda_3) \in \mathbb{R}^3$, with $\lambda_1 + \lambda_2 + \lambda_3 = 1$, are given in [2]. In particular, we define
$$\ell_\alpha(\boldsymbol{\lambda}) = C_\alpha \prod_{i=1}^{3} \prod_{j=0}^{\alpha_i - 1} \left( \lambda_i - \frac{j}{n} \right), \qquad C_\alpha = \prod_{i=1}^{3} \prod_{j=0}^{\alpha_i - 1} \left( \frac{\alpha_i - j}{n} \right)^{-1},$$
so that $\ell_\alpha$ takes the value $1$ at the simplex point $\boldsymbol{x}_\alpha$ and vanishes at all other points of $S_n$.

Thus, the polynomial interpolant $p$ on the set $S_n$ can be expressed in Lagrange form as [1,3]
$$p(\boldsymbol{\lambda}) = \sum_{|\alpha| = n} f(\boldsymbol{x}_\alpha)\, \ell_\alpha(\boldsymbol{\lambda}). \tag{3}$$
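To make the construction concrete, the short Python sketch below enumerates the simplex points (1) and evaluates the Lagrange form (3); it assumes, purely as an example, the unit triangle with vertices (0,0), (1,0), (0,1), and all function names are ours rather than the paper's.

```python
import numpy as np

def simplex_points(n, verts):
    """Simplex points of degree n (eq. (1)): x_alpha = sum_i (alpha_i / n) v_i, |alpha| = n."""
    verts = np.asarray(verts, dtype=float)                       # rows: v1, v2, v3
    alphas = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    pts = np.array([np.array(a, dtype=float) / n @ verts for a in alphas])
    return alphas, pts

def lagrange_basis(alpha, n, lam):
    """Cardinal polynomial ell_alpha of degree n evaluated at barycentric coordinates lam."""
    val = 1.0
    for a_i, l_i in zip(alpha, lam):
        for j in range(a_i):
            val *= (l_i - j / n) / ((a_i - j) / n)               # constant C_alpha folded in
    return val

if __name__ == "__main__":
    verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]                 # example triangle (assumption)
    n = 4
    alphas, pts = simplex_points(n, verts)
    f = lambda p: np.exp(p[0] + p[1])                            # sample function
    lam = np.array([0.2, 0.3, 0.5])                              # a barycentric point of T
    x = lam @ np.asarray(verts)
    p_val = sum(f(pt) * lagrange_basis(a, n, lam) for a, pt in zip(alphas, pts))
    print(abs(p_val - f(x)))                                     # interpolation error at x
```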

Approximating the function 𝑓 by its polynomial interpolation at all points of the set 𝑆𝑛 may lead to unsatisfactory results, analogous
to the well-known issues associated with polynomial interpolation at equispaced nodes. In this context, a new interpolation-
regression method for triangular domains was introduced in [4]. This method generalizes the constrained mock-Chebyshev least
squares approximation [5,6] defined on the domains [−1, 1]𝑑 , 𝑑 ∈ N. The key idea of this method, in the univariate case, is to


interpolate the function 𝑓 on a proper subset of the equispaced nodes, selecting only those nodes that are close to the Chebyshev–
Lobatto nodes of suitable order 𝑚, also known as mock-Chebyshev nodes of degree 𝑚 [7]. The remaining nodes are then used to
improve the accuracy of the approximation through a simultaneous regression. In the approach proposed in [4], the Chebyshev–
Lobatto nodes in the interval [−1, 1] are replaced by the so-called Waldron points [1]. These points are defined by introducing an
increasing function 𝑊 ∶ [0, 1] → [0, 1] such that:

• $W(0) = 0$, $W(1) = 1$;
• for any $\xi_i \ge 0$, $i = 1, 2, 3$, with $\sum_{i=1}^{3} \xi_i = 1$, it holds that $\sum_{i=1}^{3} W(\xi_i) \le 1$.
An example of such a function, as proved in [1], is
$$W(x) = \frac{1 - \cos(\pi x)}{2}. \tag{4}$$
The Waldron points of degree $m$ for the triangle $T$ relative to the function $W$ are defined as
$$P_m^{\mathrm{W}} := \left\{ \boldsymbol{x}_\gamma^{\mathrm{W}} = \sum_{j=1}^{3} W_j\, \boldsymbol{v}_j \;:\; |\gamma| = m \right\}, \qquad \gamma = (\gamma_1, \gamma_2, \gamma_3) \in \mathbb{N}_0^3, \tag{5}$$
where
$$W_j := W\!\left(\frac{\gamma_j}{m}\right) + \frac{1}{3}\left(1 - \sum_{i=1}^{3} W\!\left(\frac{\gamma_i}{m}\right)\right), \qquad j = 1, 2, 3.$$
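For illustration, the following sketch (with the same example triangle and our own helper names) implements the map (4) and the Waldron points (5) with the weights $W_j$ defined above.

```python
import numpy as np

def W(x):
    """Increasing map of eq. (4): W(0) = 0, W(1) = 1, and sum_i W(xi_i) <= 1 on the simplex."""
    return (1.0 - np.cos(np.pi * x)) / 2.0

def waldron_points(m, verts):
    """Waldron points of degree m (eq. (5)) for the triangle with vertex rows `verts`."""
    verts = np.asarray(verts, dtype=float)
    pts = []
    for g1 in range(m + 1):
        for g2 in range(m + 1 - g1):
            gamma = np.array([g1, g2, m - g1 - g2])
            w = W(gamma / m)
            w += (1.0 - w.sum()) / 3.0             # weights W_j; by construction they sum to 1
            pts.append(w @ verts)
    return np.array(pts)

if __name__ == "__main__":
    verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # example triangle (assumption)
    print(waldron_points(9, verts).shape)           # (55, 2): binom(9 + 2, 2) points, as in Fig. 1
```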
Denoting by $\#(\cdot)$ the cardinality operator, we notice that
$$\dim\!\left(\mathbb{P}_n(\mathbb{R}^2)\right) = \#(S_n) = \binom{n+2}{2}, \qquad \dim\!\left(\mathbb{P}_m(\mathbb{R}^2)\right) = \#\!\left(P_m^{\mathrm{W}}\right) = \binom{m+2}{2},$$
allowing us to rewrite
$$S_n = \{\boldsymbol{x}_1, \dots, \boldsymbol{x}_N\}, \qquad P_m^{\mathrm{W}} = \{\boldsymbol{x}_1^{\mathrm{W}}, \dots, \boldsymbol{x}_M^{\mathrm{W}}\}, \tag{6}$$
where
$$N = N(n) := \binom{n+2}{2}, \tag{7}$$
$$M = M(m) := \binom{m+2}{2}. \tag{8}$$
With this notation, we can write formula (3) as
$$p(\boldsymbol{\lambda}) = \sum_{i=1}^{N} f(\boldsymbol{x}_i)\, \ell_i(\boldsymbol{\lambda}). \tag{9}$$
In order to define the constrained mock-Waldron least squares approximation, we need to find a suitable $m < n$ such that we can uniquely identify $M = M(m)$ distinct points of $S_n$ that are close to those of $P_m^{\mathrm{W}}$, see Fig. 1. To this aim, we observe that the Waldron points associated with the weight function (4) exhibit the same density as the Chebyshev–Lobatto points on each side of the triangle [1,4]. Therefore, we adopt the same value proposed in [5,7], that is $m = \left\lfloor \pi \sqrt{n/2} \right\rfloor$.
We denote the set of mock-Waldron points by
$$\tilde{P}_m^{\mathrm{W}} = \{\tilde{\boldsymbol{x}}_1^{\mathrm{W}}, \dots, \tilde{\boldsymbol{x}}_M^{\mathrm{W}}\} \subset S_n, \qquad \#\!\left(\tilde{P}_m^{\mathrm{W}}\right) = M = \binom{m+2}{2}, \tag{10}$$
where
$$\left\|\tilde{\boldsymbol{x}}_i^{\mathrm{W}} - \boldsymbol{x}_i^{\mathrm{W}}\right\|_2 = \min_{j=1,\dots,N} \left\|\boldsymbol{x}_j - \boldsymbol{x}_i^{\mathrm{W}}\right\|_2, \qquad i = 1, \dots, M,$$
and $\|\cdot\|_2$ is the standard Euclidean norm in $\mathbb{R}^2$. Then, in analogy to [5], we set
$$q := \left\lfloor \pi \sqrt{\frac{n}{12}} \right\rfloor, \qquad r := m + q + 1, \tag{11}$$
and
$$R = R(r) := \binom{r+2}{2}. \tag{12}$$
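A minimal sketch of the node selection, regenerating the simplex and Waldron points in compact form: it chooses $m$, $q$, $r$ as in (11) and the discussion above, and pairs each Waldron point with its nearest simplex point in the Euclidean norm, eq. (10). The example triangle and all names are ours.

```python
import numpy as np

def lattice(deg):
    """All multi-indices (a1, a2, a3) of non-negative integers with a1 + a2 + a3 = deg."""
    return np.array([(a, b, deg - a - b)
                     for a in range(deg + 1) for b in range(deg + 1 - a)], dtype=float)

W = lambda x: (1.0 - np.cos(np.pi * x)) / 2.0              # eq. (4)
verts = np.array([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])     # example triangle (assumption)

n = 20
m = int(np.pi * np.sqrt(n / 2))                            # m = 9 for n = 20, as in Fig. 1
q = int(np.pi * np.sqrt(n / 12))                           # regression surplus, eq. (11)
r = m + q + 1
N, M, R = [(d + 2) * (d + 1) // 2 for d in (n, m, r)]      # eqs. (7), (8), (12)

S = lattice(n) / n @ verts                                 # simplex points, eq. (1)
Wg = W(lattice(m) / m)
Wg += (1.0 - Wg.sum(axis=1, keepdims=True)) / 3.0
PW = Wg @ verts                                            # Waldron points, eq. (5)

# mock-Waldron points: for each Waldron point, the nearest simplex point, eq. (10)
dist = np.linalg.norm(S[None, :, :] - PW[:, None, :], axis=2)
mock_idx = dist.argmin(axis=1)
print(n, m, q, r, N, M, R)
print("distinct mock-Waldron points:", len(set(mock_idx)), "of", M)
```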
We consider a basis of the polynomial space $\mathbb{P}_r(\mathbb{R}^2)$ given by
$$\{e_1, \dots, e_R\}, \tag{13}$$
such that
$$\mathrm{span}\{e_1, \dots, e_M\} = \mathbb{P}_m(\mathbb{R}^2). \tag{14}$$


Fig. 1. Plot of the simplex points of degree 𝑛 = 20 (o), Waldron points of degree 𝑚 = 9 (+) and the relative mock-Waldron points (*).

For simplicity, we assume that $S_n$ has been rearranged so that $\boldsymbol{x}_i = \tilde{\boldsymbol{x}}_i^{\mathrm{W}}$, $i = 1, \dots, M$. Under this assumption, the constrained mock-Waldron least squares operator is defined as [4]
$$\hat{P}_{r,n}^{\mathrm{W}} : f \in C(T) \;\longmapsto\; \hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{x}) = \sum_{i=1}^{R} \hat{a}_i\, e_i(\boldsymbol{x}) \in \mathbb{P}_r(\mathbb{R}^2), \qquad \boldsymbol{x} \in T, \tag{15}$$
where $\hat{\boldsymbol{a}} = [\hat{a}_1, \dots, \hat{a}_R]^T$ is the solution of the KKT linear system (named in honor of Karush, Kuhn and Tucker) [8]
$$\begin{bmatrix} 2V^T V & C^T \\ C & 0 \end{bmatrix} \begin{bmatrix} \hat{\boldsymbol{a}} \\ \hat{\boldsymbol{z}} \end{bmatrix} = \begin{bmatrix} 2V^T \boldsymbol{b} \\ \boldsymbol{d} \end{bmatrix}, \tag{16}$$
with
$$V := \left[e_j(\boldsymbol{x}_i)\right]_{i=1,\dots,N;\; j=1,\dots,R}, \qquad C := \left[e_j(\boldsymbol{x}_i)\right]_{i=1,\dots,M;\; j=1,\dots,R},$$
$$\boldsymbol{b} := [f(\boldsymbol{x}_1), \dots, f(\boldsymbol{x}_N)]^T, \qquad \boldsymbol{d} := [f(\boldsymbol{x}_1), \dots, f(\boldsymbol{x}_M)]^T,$$
and $\hat{\boldsymbol{z}}$ is the vector of Lagrange multipliers.
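The operator (15)–(16) is an equality-constrained least squares fit and can be prototyped in a few lines. The sketch below assembles $V$, $C$, $\boldsymbol{b}$, $\boldsymbol{d}$ for a small example and solves the KKT system (16) with a dense solver. Purely for illustration it uses products of Chebyshev polynomials ordered by total degree, so that the first $M$ basis elements span $\mathbb{P}_m(\mathbb{R}^2)$ as required by (14) (the paper's experiments use the Koornwinder–Dubiner basis instead), and it rebuilds the nodes in the compact form of the previous sketch. All helper names and the example triangle are ours.

```python
import numpy as np

def lattice(deg):
    return np.array([(a, b, deg - a - b)
                     for a in range(deg + 1) for b in range(deg + 1 - a)], dtype=float)

def cheb_basis(pts, r):
    """Graded basis of P_r: products T_i(2x - 1) T_{k-i}(2y - 1), ordered by total degree k,
    so that the first binom(m + 2, 2) columns span P_m for every m <= r (cf. eq. (14))."""
    T = lambda k, t: np.cos(k * np.arccos(np.clip(t, -1.0, 1.0)))
    x, y = 2 * pts[:, 0] - 1, 2 * pts[:, 1] - 1
    return np.column_stack([T(i, x) * T(k - i, y) for k in range(r + 1) for i in range(k + 1)])

W = lambda x: (1.0 - np.cos(np.pi * x)) / 2.0
verts = np.array([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])       # example triangle (assumption)

n = 12
m = int(np.pi * np.sqrt(n / 2))                              # = 7
r = m + int(np.pi * np.sqrt(n / 12)) + 1                     # eq. (11)

S = lattice(n) / n @ verts                                   # simplex points
Wg = W(lattice(m) / m)
Wg += (1.0 - Wg.sum(axis=1, keepdims=True)) / 3.0
PW = Wg @ verts                                              # Waldron points
mock = np.linalg.norm(S[None] - PW[:, None], axis=2).argmin(axis=1)
mock = np.unique(mock)                                       # defensive: indices should already be distinct

f = lambda p: 1.0 / (1.0 + 8.0 * (p[:, 0] ** 2 + p[:, 1] ** 2))   # test function f1 of Section 3
V = cheb_basis(S, r)                                         # N x R collocation matrix
C = V[mock]                                                  # M x R constraint matrix
b, d = f(S), f(S[mock])

R_, M_ = V.shape[1], C.shape[0]
KKT = np.block([[2.0 * V.T @ V, C.T], [C, np.zeros((M_, M_))]])   # eq. (16)
rhs = np.concatenate([2.0 * V.T @ b, d])
a_hat = np.linalg.solve(KKT, rhs)[:R_]                        # coefficients of P_hat[f] in the basis

print("constraint residual:", np.max(np.abs(C @ a_hat - d)))  # interpolation at mock-Waldron points
```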

2. Quadrature formula on triangular domains via the constrained mock-Waldron least squares approximation

Let $f$ be a continuous function defined on $T$, and let $\omega \in L^1(T)$ be a weight function. Our goal is to introduce a quadrature formula to approximate the integral
$$I[f] := \int_T f(\boldsymbol{x})\, \omega(\boldsymbol{x})\, d\boldsymbol{x}, \tag{17}$$
assuming that $f$ is known only at the simplex points $S_n$. To this end, we follow the approach used in [9,10] to construct stable quadrature formulas for domains of the form $[-1,1]^d$, $d \in \mathbb{N}$. Specifically, let
$$\{\boldsymbol{\xi}_1, \dots, \boldsymbol{\xi}_K\} \qquad \text{and} \qquad \{\omega_1, \dots, \omega_K\}, \qquad K \in \mathbb{N},$$
be a set of Gaussian quadrature nodes and weights, respectively, on the triangle $T$ [11–14]. Then, we approximate the integral (17) as follows

$$\int_T f(\boldsymbol{x})\, \omega(\boldsymbol{x})\, d\boldsymbol{x} \;\approx\; \int_T \hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{x})\, \omega(\boldsymbol{x})\, d\boldsymbol{x} \;=\; \sum_{j=1}^{K} \omega_j\, \hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{\xi}_j) + R_K[f] \;=\; \hat{Q}_{r,n,K}^{\mathrm{W}}[f] + R_K[f], \tag{18}$$
where $R_K[f]$ denotes the error of the Gaussian rule applied to $\hat{P}_{r,n}^{\mathrm{W}}[f]\,\omega$ and
$$\hat{Q}_{r,n,K}^{\mathrm{W}}[f] := \sum_{j=1}^{K} \omega_j\, \hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{\xi}_j). \tag{19}$$
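For a runnable illustration, one also needs quadrature nodes and weights on $T$. The sketch below builds a simple positive rule by mapping a tensor-product Gauss–Legendre rule from the unit square onto the triangle through the Duffy (collapsed-coordinate) transformation; this is only a convenient stand-in for the symmetric Gaussian rules of [11–14], and the function name is ours.

```python
import numpy as np

def duffy_gauss_rule(k):
    """Nodes and weights on the unit triangle {x, y >= 0, x + y <= 1}: a k x k tensor
    Gauss-Legendre rule on [0, 1]^2 mapped by (u, v) -> (u (1 - v), v), Jacobian (1 - v)."""
    t, w = np.polynomial.legendre.leggauss(k)       # rule on [-1, 1]
    t, w = (t + 1.0) / 2.0, w / 2.0                 # rescale to [0, 1]
    U, Vv = np.meshgrid(t, t, indexing="ij")
    nodes = np.column_stack([(U * (1.0 - Vv)).ravel(), Vv.ravel()])
    weights = (np.outer(w, w) * (1.0 - Vv)).ravel()
    return nodes, weights

if __name__ == "__main__":
    xi, om = duffy_gauss_rule(40)                    # K = 1600 nodes, all weights positive
    print(om.sum())                                  # ~ 0.5, the area of T
    print(xi[:, 0] @ om)                             # ~ 1/6, the integral of x over T
    # given the values of P_hat[f] at the nodes, e.g. cheb_basis(xi, r) @ a_hat
    # from the previous sketch, eq. (19) reduces to a dot product:  Q = om @ Phat_values
```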

Although formula (19) does not contain any explicit reference to the nodes of the set $S_n$, it can be rewritten in terms of evaluations of the function $f$ at all nodes of $S_n$. In fact, the following theorem holds.

Theorem 2.1. The quadrature formula (19) can be written as
$$\hat{Q}_{r,n,K}^{\mathrm{W}}[f] = \sum_{i=1}^{N} \hat{w}_i\, f(\boldsymbol{x}_i), \qquad \text{where} \quad \hat{w}_i = \sum_{j=1}^{K} \omega_j\, \hat{P}_{r,n}^{\mathrm{W}}[\ell_i](\boldsymbol{\xi}_j), \qquad i = 1, \dots, N. \tag{20}$$


Proof. By construction, the operator $\hat{P}_{r,n}^{\mathrm{W}}$ depends only on the evaluations of the function $f$ at the simplex points $S_n$ (in particular, it is not injective). Therefore, using (9) and the linearity of the operator, we get
$$\hat{P}_{r,n}^{\mathrm{W}}[f] = \hat{P}_{r,n}^{\mathrm{W}}\!\left[\sum_{i=1}^{N} f(\boldsymbol{x}_i)\, \ell_i\right] = \sum_{i=1}^{N} f(\boldsymbol{x}_i)\, \hat{P}_{r,n}^{\mathrm{W}}[\ell_i]. \tag{21}$$
Substituting (21) into (19), we have
$$\hat{Q}_{r,n,K}^{\mathrm{W}}[f] = \sum_{j=1}^{K} \omega_j\, \hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{\xi}_j) = \sum_{j=1}^{K} \omega_j \left( \sum_{i=1}^{N} f(\boldsymbol{x}_i)\, \hat{P}_{r,n}^{\mathrm{W}}[\ell_i](\boldsymbol{\xi}_j) \right),$$
and by changing the order of summation, we obtain
$$\hat{Q}_{r,n,K}^{\mathrm{W}}[f] = \sum_{i=1}^{N} \left( \sum_{j=1}^{K} \omega_j\, \hat{P}_{r,n}^{\mathrm{W}}[\ell_i](\boldsymbol{\xi}_j) \right) f(\boldsymbol{x}_i).$$
The result follows. ■

Remark 2.2. There are several numerical methods for solving constrained least squares problems, which can be used to compute the coefficient vector $\hat{\boldsymbol{a}}$ of the operator (15), see [15]. One of them is the direct elimination method [16,17]. This method is based on partitioning the vector $\hat{\boldsymbol{a}}$ as
$$\hat{\boldsymbol{a}} = \left[\hat{\boldsymbol{a}}_1^T, \hat{\boldsymbol{a}}_2^T\right]^T, \qquad \hat{\boldsymbol{a}}_1 \in \mathbb{R}^M, \quad \hat{\boldsymbol{a}}_2 \in \mathbb{R}^{R-M}, \tag{22}$$
where $M$ and $R$ are defined in (8) and (12), respectively. It can be shown that
$$\hat{\boldsymbol{a}}_1 = R_{11}^{-1}\left(Q_C^T \boldsymbol{d} - R_{12}\, \hat{\boldsymbol{a}}_2\right), \qquad C = Q_C\, [R_{11}, R_{12}], \qquad R_{11} \in \mathbb{R}^{M \times M}, \tag{23}$$
where $Q_C$ is orthogonal and $R_{11}$ is a nonsingular upper triangular matrix. To compute the vector $\hat{\boldsymbol{a}}_2$, the matrix $V$ is partitioned as
$$V = [V_1, V_2], \qquad V_1 \in \mathbb{R}^{N \times M}, \quad V_2 \in \mathbb{R}^{N \times (R-M)}.$$
Then, the vector $\hat{\boldsymbol{a}}_2$ can be written as
$$\hat{\boldsymbol{a}}_2 = (A_1^T A_1)^{-1} A_1^T\, \boldsymbol{b}_1, \tag{24}$$
where
$$A_1 := V_2 - V_1 R_{11}^{-1} R_{12}, \qquad \boldsymbol{b}_1 := \boldsymbol{b} - V_1 R_{11}^{-1} Q_C^T \boldsymbol{d}. \tag{25}$$
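Remark 2.2 translates directly into a short routine: a QR factorization $C = Q_C [R_{11}, R_{12}]$ eliminates $\hat{\boldsymbol{a}}_1$, and $\hat{\boldsymbol{a}}_2$ solves the reduced least squares problem (24). A minimal sketch with our own names follows; here (24) is solved with a stable least squares routine rather than the normal equations, which is mathematically equivalent.

```python
import numpy as np

def direct_elimination(V, C, b, d):
    """Solve min ||V a - b||_2 subject to C a = d by direct elimination, as in Remark 2.2."""
    M = C.shape[0]
    Qc, Rc = np.linalg.qr(C)                      # C = Qc [R11, R12], Qc (M x M) orthogonal
    R11, R12 = Rc[:, :M], Rc[:, M:]
    V1, V2 = V[:, :M], V[:, M:]
    S = np.linalg.solve(R11, R12)                 # R11^{-1} R12
    t = np.linalg.solve(R11, Qc.T @ d)            # R11^{-1} Qc^T d
    A1 = V2 - V1 @ S                              # eq. (25)
    b1 = b - V1 @ t
    a2 = np.linalg.lstsq(A1, b1, rcond=None)[0]   # eq. (24)
    a1 = t - S @ a2                               # eq. (23)
    return np.concatenate([a1, a2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)                # small synthetic check
    N, M, R = 40, 6, 15
    V, C = rng.standard_normal((N, R)), rng.standard_normal((M, R))
    b, d = rng.standard_normal(N), rng.standard_normal(M)
    a = direct_elimination(V, C, b, d)
    print(np.max(np.abs(C @ a - d)))              # constraints hold to rounding accuracy
```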

By following the same strategy as in [18], in the next theorem we compute a bound for the norm of the operator $\hat{P}_{r,n}^{\mathrm{W}}$ defined in (15). Equipping $C(T)$ and $\mathbb{P}_r(\mathbb{R}^2)$ with the sup-norm
$$\|f\|_\infty = \max_{\boldsymbol{x} \in T} |f(\boldsymbol{x})|,$$
we seek an upper bound for
$$\left\|\hat{P}_{r,n}^{\mathrm{W}}\right\|_\infty := \sup_{f \in C(T),\ \|f\|_\infty \le 1} \left\|\hat{P}_{r,n}^{\mathrm{W}}[f]\right\|_\infty. \tag{26}$$

Theorem 2.3. The norm of the constrained mock-Waldron least squares operator satisfies
$$\left\|\hat{P}_{r,n}^{\mathrm{W}}\right\|_\infty \le L\left(\mathcal{K}_1 + \mathcal{K}_2\right), \tag{27}$$
where
$$L := \max_{i=1,\dots,R} \|e_i\|_\infty, \qquad \mathcal{K}_1 = \mathcal{K}_1(N, M) := \left\|R_{11}^{-1}\right\|_1 \left( \left\|Q_C^T\right\|_1 M + \left\|R_{12}\right\|_1\, \mathcal{K}_2 \right), \tag{28}$$
$$\mathcal{K}_2 = \mathcal{K}_2(N, M) := \left\|(A_1^T A_1)^{-1} A_1^T\right\|_1 \left( N + \left\|V_1 R_{11}^{-1} Q_C^T\right\|_1 M \right), \tag{29}$$
and $A_1$ is defined in (25).

Proof. Let $f \in C(T)$ be such that $\|f\|_\infty \le 1$. By applying the triangle inequality, we get
$$\left|\hat{P}_{r,n}^{\mathrm{W}}[f](\boldsymbol{x})\right| = \left|\sum_{i=1}^{R} \hat{a}_i(f)\, e_i(\boldsymbol{x})\right| \le \sum_{i=1}^{R} |\hat{a}_i(f)|\, |e_i(\boldsymbol{x})|, \qquad \boldsymbol{x} \in T, \tag{30}$$
which leads to
$$\left\|\hat{P}_{r,n}^{\mathrm{W}}[f]\right\|_\infty \le L \sum_{i=1}^{R} |\hat{a}_i(f)| = L\, \|\hat{\boldsymbol{a}}(f)\|_1. \tag{31}$$
We need to bound the discrete 1-norm of the vector $\hat{\boldsymbol{a}} = \hat{\boldsymbol{a}}(f)$. To this end, we use the decomposition (22) and we have
$$\|\hat{\boldsymbol{a}}\|_1 = \left\| \begin{bmatrix} \hat{\boldsymbol{a}}_1 \\ \hat{\boldsymbol{a}}_2 \end{bmatrix} \right\|_1 = \|\hat{\boldsymbol{a}}_1\|_1 + \|\hat{\boldsymbol{a}}_2\|_1.$$
From (23) and (24), we obtain
$$\|\hat{\boldsymbol{a}}_1\|_1 \le \left\|R_{11}^{-1}\right\|_1 \left( \left\|Q_C^T\right\|_1 \|\boldsymbol{d}\|_1 + \left\|R_{12}\right\|_1 \|\hat{\boldsymbol{a}}_2\|_1 \right) \tag{32}$$
and
$$\|\hat{\boldsymbol{a}}_2\|_1 \le \left\|(A_1^T A_1)^{-1} A_1^T\right\|_1 \|\boldsymbol{b}_1\|_1. \tag{33}$$
By hypothesis, we get
$$\|\boldsymbol{b}\|_1 = \sum_{i=1}^{N} |f(\boldsymbol{x}_i)| \le N, \qquad \|\boldsymbol{d}\|_1 = \sum_{i=1}^{M} |f(\boldsymbol{x}_i)| \le M. \tag{34}$$
Finally, substituting the definition of $\boldsymbol{b}_1$ from (25) into (33) and taking into account (34), the result follows. ■
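The constants appearing in (27)–(29) can be evaluated numerically from the same matrices used in Remark 2.2; the quantity $L$ must be estimated, for instance by sampling the basis on a fine grid of $T$. A minimal sketch under these assumptions (the names K1, K2 mirror the constants of (28)–(29); all identifiers are ours):

```python
import numpy as np

def operator_norm_bound(V, C, basis_on_grid):
    """Numerical value of the right-hand side of (27): L * (K1 + K2), with K1, K2 as in
    (28)-(29); `basis_on_grid` holds each basis function e_i (one per column) evaluated
    on a dense set of points of T, from which L = max_i ||e_i||_inf is estimated."""
    N, R = V.shape
    M = C.shape[0]
    Qc, Rc = np.linalg.qr(C)
    R11, R12 = Rc[:, :M], Rc[:, M:]
    V1 = V[:, :M]
    A1 = V[:, M:] - V1 @ np.linalg.solve(R11, R12)            # eq. (25)
    one = lambda B: np.linalg.norm(B, 1)                      # induced matrix 1-norm
    pinvA1 = np.linalg.pinv(A1)                               # (A1^T A1)^{-1} A1^T for full column rank
    K2 = one(pinvA1) * (N + one(V1 @ np.linalg.solve(R11, Qc.T)) * M)    # eq. (29)
    K1 = one(np.linalg.inv(R11)) * (one(Qc.T) * M + one(R12) * K2)       # eq. (28)
    L = np.max(np.abs(basis_on_grid))
    return L * (K1 + K2)
```

Fed with the matrices $V$ and $C$ from the operator sketch above and with the basis sampled on a fine grid of $T$, this gives a computable value for the right-hand side of (27), which is also the quantity entering (35) below.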
In the following, we prove an upper bound for the error produced by the operator (15) in the uniform norm.

Theorem 2.4. Let $f \in C(T)$. Then
$$\left\|f - \hat{P}_{r,n}^{\mathrm{W}}[f]\right\|_\infty \le \left(1 + L\left(\mathcal{K}_1 + \mathcal{K}_2\right)\right) E_r(f), \tag{35}$$
where $E_r(f)$ is the error of best uniform approximation of $f$ on $T$ by polynomials of $\mathbb{P}_r(\mathbb{R}^2)$.

Proof. Let $p_r^{\star}[f] \in \mathbb{P}_r(\mathbb{R}^2)$ be the polynomial of best uniform approximation to $f$ on $T$. Since $\hat{P}_{r,n}^{\mathrm{W}}$ is linear and reproduces every polynomial of $\mathbb{P}_r(\mathbb{R}^2)$, we get
$$\left\|f - \hat{P}_{r,n}^{\mathrm{W}}[f]\right\|_\infty = \left\|f - p_r^{\star}[f] + p_r^{\star}[f] - \hat{P}_{r,n}^{\mathrm{W}}[f]\right\|_\infty = \left\|\left(f - p_r^{\star}[f]\right) - \hat{P}_{r,n}^{\mathrm{W}}\!\left[f - p_r^{\star}[f]\right]\right\|_\infty \le \left(1 + \left\|\hat{P}_{r,n}^{\mathrm{W}}\right\|_\infty\right) \left\|f - p_r^{\star}[f]\right\|_\infty.$$
From Theorem 2.3, the result follows. ■
3. Numerical experiments

In this section, we consider the test functions
$$f_1(x, y) = \frac{1}{1 + 8(x^2 + y^2)}, \qquad f_2(x, y) = \frac{1}{1 + 25(x^2 + y^2)}, \qquad f_3(x, y) = \cos(20\pi(x + y))\, e^{x+y},$$
$$f_4(x, y) = \cos(10\pi(x + y))\, \sin(10\pi(x + y)), \qquad f_5(x, y) = \frac{1}{\left(1 + 100\left(x - \tfrac{1}{3}\right)^2\right)\left(1 + 100\left(y - \tfrac{1}{3}\right)^2\right)}.$$
The function $f_5(x, y)$ was used in [19] to generalize the Runge function to a triangle.
We evaluate the accuracy of the quadrature formula (19) for approximating the integrals
$$I[f_i] = \int_T f_i(x, y)\, dx\, dy, \qquad i = 1, 2, 3, 4, 5, \tag{36}$$
where 𝑇 is the triangle with vertices (0, 0), (1, 0), (0, 1). More precisely, we compute the errors
$$e_{n,K}[f_i] = \left| I[f_i] - \hat{Q}_{r,n,K}^{\mathrm{W}}[f_i] \right|, \qquad i = 1, 2, 3, 4, 5,$$
for simplex points of degree $n = 50, 100, 150, 200$ and $K = 10000$ quadrature points.
For these experiments, the approximation operator $\hat{P}_{r,n}^{\mathrm{W}}[f]$ is expressed in terms of the Koornwinder–Dubiner polynomial basis, which is orthogonal on the triangle $T$ [20–22]. Software to compute the Koornwinder–Dubiner basis is available at https://www.math.unipd.it/~alvise/sets.html.
The results are presented in Table 1.

Table 1
Approximation error produced by approximating the integrals (36) through the quadrature formula $\hat{Q}_{r,n,K}^{\mathrm{W}}$ on $K = 10000$ Gaussian quadrature nodes.

Degree of simplex points    n = 50      n = 100     n = 150     n = 200
f1                          7.94e−10    1.77e−13    3.83e−13    1.33e−14
f2                          5.43e−08    8.25e−11    3.45e−11    4.07e−14
f3                          6.38e−01    7.15e−02    3.59e−03    6.23e−06
f4                          1.62e−01    2.54e−02    1.50e−02    4.75e−06
f5                          1.50e−03    1.74e−04    4.91e−05    1.33e−05

From this table, we observe that the error decreases as the number of simplex points increases. This trend aligns with the behavior of the approximation error produced by the operator $\hat{P}_{r,n}^{\mathrm{W}}$, see [4].

4. Conclusions and future works

In this work, we introduced a quadrature formula for triangular domains based on the constrained mock-Waldron least squares
approximation. Numerical experiments demonstrated the effectiveness of the proposed method, highlighting its accuracy across
various test cases, with notable success for oscillatory functions. An open question remains regarding the stability of the proposed
quadrature formula, which is inherently linked to the convergence properties of the constrained mock-Waldron least squares
approximation. Future investigations could focus on theoretical analyses to derive rigorous stability bounds, complemented by
numerical experiments to assess their practical implications. Addressing these aspects would not only provide deeper insights

into the behavior of the method but also pave the way for extending its applicability to more complex and irregular domains.
Additionally, a promising direction for future work is the identification of a suitable polynomial basis for which numerical estimates of the bound (27) grow only linearly, consistently with the behavior of the standard constrained mock-Chebyshev least squares operator in the univariate case.

Acknowledgments

The authors are grateful to the anonymous reviewers for carefully reading the manuscript and for their precise and helpful suggestions, which helped to improve the work. This research has been achieved as part of RITA ‘‘Research ITalian network on Approximation’’ and as part of the UMI group ‘‘Teoria dell’Approssimazione e Applicazioni’’. The research was supported by the GNCS-INdAM 2024 project ‘‘Metodi kernel e polinomiali per l’approssimazione e l’integrazione: teoria e software applicativo’’. Project funded by the European Union – NextGenerationEU under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.1 - Call PRIN 2022 No. 104 of February 2, 2022 of the Italian Ministry of University and Research; Project 2022FHCNY3 (subject area: PE - Physical Sciences and Engineering) ‘‘Computational mEthods for Medical Imaging (CEMI)’’. The work of F. Marcellán has been supported by the research project [PID2021-122154NB-I00] ‘‘Ortogonalidad y Aproximación con Aplicaciones en Machine Learning y Teoría de la Probabilidad’’, funded by MICIU/AEI/10.13039/501100011033 and by ‘‘ERDF A Way of making Europe’’.

Data availability

No data was used for the research described in the article.

References

[1] L. Bos, S. Ma’u, S. Waldron, On Waldron interpolation on a Simplex in R𝑑 , 2023, arXiv preprint arXiv:2306.08392.
[2] L. Bos, Bounding the Lebesgue function for Lagrange interpolation in a simplex, J. Approx. Theory 38 (1983) 43–59.
[3] K. Kobayashi, T. Tsuchiya, A priori error estimates for Lagrange interpolation on triangles, Appl. Math. 60 (2015) 485–499.
[4] S. De Marchi, F. Dell’Accio, F. Nudo, A mixed interpolation-regression approximation operator on the triangle, Dolomites Res. Notes Approx. 17 (2024)
33–44.
[5] S. De Marchi, F. Dell’Accio, M. Mazza, On the constrained mock-Chebyshev least-squares, J. Comput. Appl. Math. 280 (2015) 94–109.
[6] F. Dell’Accio, F. Di Tommaso, F. Nudo, Generalizations of the constrained mock-Chebyshev least squares in two variables: Tensor product vs total degree
polynomial interpolation, Appl. Math. Lett. (2021) 107732.
[7] J.P. Boyd, F. Xu, Divergence (Runge phenomenon) for least-squares polynomial approximation on an equispaced grid and mock–Chebyshev subset
interpolation, Appl. Math. Comput. 210 (2009) 158–168.
[8] S. Boyd, L. Vandenberghe, Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares, Cambridge University press, Cambridge, 2018.
[9] F. Dell’Accio, F. Di Tommaso, F. Nudo, Constrained mock-Chebyshev least squares quadrature, Appl. Math. Lett. 134 (2022) 108328.
[10] F. Dell’Accio, F. Di Tommaso, E. Francomano, F. Nudo, An adaptive algorithm for determining the optimal degree of regression in constrained
mock-Chebyshev least squares quadrature, Dolomites Res. Notes Approx. 15 (2022) 35–44.
[11] P.C. Hammer, O. Marlowe, A.H. Stroud, Numerical integration over simplexes and cones, Math. Tables Other Aids Comput. 10 (1956) 130–137.
[12] G.R. Cowper, Gaussian quadrature formulas for triangles, Internat. J. Numer. Methods Engrg. 7 (1973) 405–408.
[13] H.T. Rathod, B. Venkatesudu, K.V. Nagaraja, Gauss Legendre quadrature formulas over a tetrahedron, Numer. Methods Partial Differ. Equ. 22 (2006)
197–219.
[14] F. Hussain, M. Karim, R. Ahamad, Appropriate Gaussian quadrature formulae for triangles, Int. J. Appl. Comput. Math. 4 (2012) 24–38.
[15] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[16] Q. Liu, M. Wei, On direct elimination methods for solving the equality constrained least squares problem, Linear Multilinear Algebra 58 (2010) 173–184.
[17] E. Galligani, L. Zanni, On the stability of the direct elimination method for equality constrained least squares problems, Computing 64 (2000) 263–277.
[18] F. Dell’Accio, D. Mezzanotte, F. Nudo, D. Occorsio, Constrained mock-Chebyshev least squares approximation on quasi-uniform grids, 2025.
[19] M. Blyth, H. Luo, C. Pozrikidis, A comparison of interpolation grids over the triangle or the tetrahedron, J. Engrg. Math. 56 (2006) 263–272.
[20] M. Dubiner, Spectral methods on triangles and other domains, J. Sci. Comput. 6 (1991) 345–390.
[21] R. Pasquetti, F. Rapetti, Spectral element methods on triangles and quadrilaterals: comparisons and applications, J. Comput. Phys. 198 (2004) 349–362.
[22] F. Rapetti, A. Sommariva, M. Vianello, On the generation of symmetric Lebesgue-like points in the triangle, J. Comput. Appl. Math. 236 (2012) 4925–4932.
