1 Introduction

A non-fungible token (NFT) is a type of crypto asset built on blockchain technology. One of the most important features of these tokens is that they are immutable and unique: each token carries its own distinct digital content. This provides several benefits for increasing the efficiency of transactions carried out in the metaverse (Behl et al. 2024). These tokens make it possible to customize digital assets on metaverse platforms, which significantly reduces uncertainty in the process. Investors who have more confidence in the process pay more attention to this area, so investments can increase. Non-fungible tokens also secure digital property rights within the metaverse, which further strengthens investors' confidence. Moreover, these tokens can be used to determine the value of digital assets on metaverse platforms (Lee et al. 2024), which supports higher trading volume. Similarly, non-fungible tokens can be backed by smart contracts, which also contributes to increasing transaction volume on metaverse platforms.

The development of non-fungible tokens is crucial to the effectiveness of metaverse platforms. First, user convenience must be ensured to increase the use of these tokens. Users want to use them easily and quickly, so interfaces must be designed in a user-friendly manner, and it would be appropriate to provide training on using these tokens effectively (Davies et al. 2024). Ensuring the security of the process is also very important. A secure platform reduces users' anxiety about using these tokens, and investors who feel safer will make more transactions, which increases the effectiveness of the metaverse platform. In this context, measures are needed to ensure account confidentiality and payment security. Finally, sufficient technological infrastructure must be provided to increase the use of non-fungible tokens on metaverse platforms (Brickman et al. 2024). With advanced technologies, the blockchain system can be integrated into the metaverse platform more successfully, which can also significantly increase the effectiveness of the process.

To increase the effectiveness of non-fungible tokens on metaverse platforms, improvements must be made regarding these issues. However, such improvements also increase costs, which emerges as the most important barrier in the improvement process. Making too many improvements at once would put businesses in a difficult financial situation. To manage this problem effectively, improvements should first target the most important issues, so the factors that matter most in this process must be identified. However, only a limited number of studies in the literature focus on this issue. Most studies have evaluated the importance of indicators for increasing the effectiveness of NFTs, but a priority analysis has not been conducted to identify the most critical determinants. This constitutes an important research gap: a new study is needed to determine the most important factors for a more successful integration of non-fungible tokens into this platform.

Accordingly, this study aims to determine the appropriate identity management choices for non-fungible tokens in the metaverse. The proposed novel fuzzy decision-making model has three stages. The first stage prioritizes the experts with an artificial intelligence-based decision-making methodology. Secondly, the criteria for managing non-fungible tokens are weighted using a quantum picture fuzzy rough sets-based M-SWARA methodology. Finally, the identity management choices for non-fungible tokens in the metaverse are ranked with quantum picture fuzzy rough sets-oriented VIKOR. The main motivation for this study is the strong need for a new fuzzy decision-making model, because most existing models in the literature do not differentiate among experts (Ecer et al. 2024; Mikhaylov et al. 2024). Weighting the opinions of all experts with the same coefficient reduces the effectiveness of the analysis results, since experts' qualifications can differ based on their demographic characteristics. Hence, a new model that accounts for this situation is needed to reach more appropriate results. With this new model, in which the weights of experts are calculated, more accurate analysis results can be obtained, and more accurate strategy suggestions can be offered to increase the effectiveness of NFT investments.

The main contributions of this study are as follows. (i) An artificial intelligence methodology is integrated into the fuzzy decision-making model to differentiate the experts. This makes it possible to create clusters of experts, so that the opinions of experts outside a given cluster may be excluded from the scope. This allows more realistic analysis results to be reached and strongly increases the methodological originality. (ii) The M-SWARA methodology is preferred for weighting the determinants because it can consider the causal directions among these items. The classical SWARA approach cannot create an impact-relation map of the criteria; M-SWARA was developed in previous studies by making such improvements, so the causal directions of the determinants can be identified. The factors influencing the management of non-fungible tokens may affect each other, so identifying the most essential items requires considering the causal relationships among them. For this reason, M-SWARA is the most appropriate methodology in this framework.

The remainder of this study is organized as follows. The next section presents the literature review. The third section details the proposed methodology, and the fourth section explains the analysis results. The final sections include the discussion and conclusion.

2 Literature review

NFTs, digital representations of items stored on the blockchain that cannot be exchanged with each other, achieve uniqueness by utilizing smart contracts (Bonnet and Teuteberg 2023a, b). The development and expansion of NFT applications are directly linked to blockchain and metaverse innovations. In this context, structural innovations in blockchain technologies revolve around data collection, storage, sharing, interoperability, and privacy protection (Huynh-The et al. 2023). For example, Wu et al. (2023b) developed a waste material passport as an NFT, which allows cross-border trade of construction waste materials using blockchain infrastructure. With this proposal, they aimed to give waste materials a digital and unique identity, ensure information transparency, increase trade efficiency, and secure transaction records. Rani et al. (2023), on the other hand, developed an innovative blockchain-based NFT payment system for academic payments, introducing an online payment model that increases security by eliminating the lack of transparency and traceability in traditional payment methods. Ko et al. (2023) emphasize ongoing research on innovative NFT technologies, especially fractional NFTs and non-transferable, soulbound tokens.

Security is another issue that will increase the effectiveness of blockchain and non-fungible tokens. With the increase in their use and value, concern that NFTs can be used to launder illegal revenues has also grown (Mosna and Soana 2023). Indeed, Rahman and Jin (2023) suggested establishing tax controls to develop the Chinese NFT market and eliminate tax irregularities, emphasizing that this would ensure both the legality and the security of transactions. Schlatt et al. (2023) noted that research on the cybersecurity of blockchain applications focuses on the technical infrastructure, but that it is also essential to investigate the socio-technical aspects. On the other hand, since blockchain-based NFT applications provide security, they can be integrated into production processes in many different sectors. For example, Alkhader et al. (2023), investigating the use of blockchain and NFTs in digital production systems, found that these applications resist security attacks. Furthermore, Delgado-von-Eitzen et al. (2024) presented a proposal for publishing and verifying academic information in compliance with the General Data Protection Regulation by taking advantage of NFTs' unique and secure structure.

Increasing the infrastructure efficiency of non-fungible tokens is not enough on its own; innovation in this field proceeds in parallel with the strategic establishment of business models and the achievement of profitability. Thanks to NFTs, new business models can be developed focusing on digital ownership, digital assets that can be moved and combined, and decentralized communities (Li and Chen 2023). At this point, as stated by Chohan and Paschen (2023), marketing professionals need guidance in persuading consumers to increase their interest in NFTs. Belk et al. (2022) contributed to the field by theorizing new forms of ownership so that NFT and Web 3.0 applications can become widespread, especially in the art world, and artists can earn money. Similarly, Mancuso et al. (2023) presented a framework for companies to create added value by supporting their investments in metaverse and Web 3.0 technologies with new business models. On the other hand, Lee et al. (2024), in their research on the effect of NFT use on customer loyalty, evaluated the effectiveness of the opportunities provided by the platforms. They emphasized that customer loyalty increases thanks to the opportunities provided by NFT platforms and that it is essential to design a user-centered experience.

Monetization and business strategy are key to increasing the effectiveness of NFT investments. Monetization strategies comprise the plans and practices that determine how a product or service will generate revenue (Bonnet and Teuteberg 2023a, b). In this context, licensing of NFTs plays a very important role (Li and Chen 2023), since income can be generated by granting usage rights to other businesses (Chohan and Paschen 2023). Bordel Sánchez et al. (2024) stated that developing a subscription system, in which customers are charged a certain amount for NFT use, is another monetization strategy to consider. On the other hand, according to Paul and Malik (2024), a creative business model can also give NFTs a competitive advantage, which significantly contributes to increasing the efficiency of investments. Deep and Verma (2024) identified that effective monetization and business strategies increase investors' confidence in NFTs, drawing more investors to these products. Another important issue in this process is the implementation of correct marketing strategies: effective advertising campaigns can inform a wide audience about the products.

As mentioned above, another criterion that increases the effectiveness of NFTs is user experience. Ali et al. (2023) stated that these tools will be adopted more widely as the interfaces of NFT platforms are designed to be user-friendly. For instance, Lee (2023) investigated the factors affecting users' adoption of NFTs and found that usability is among the technical factors that affect users. Moreover, the research conducted by Sun (2023) revealed that ease of use is one of the leading reasons for choosing NFT platforms. However, Wu et al. (2023a), in their research on end users of NFTs, emphasized that usability is one of the biggest challenges of current applications. Finally, Meyns and Dalipi (2022) investigated end users' concerns about NFT platforms. According to their findings, users are concerned about access to NFT platforms being by special invitation, about verification processes, and about restricted account access or removal of their own NFTs. As a result, eliminating end users' access barriers and usage concerns is one of the crucial criteria for increasing NFT effectiveness.

The diversity and effectiveness of Web 3.0 applications is increasing day by day, and these technologies are changing the way different sectors do business. Blockchain-based NFT applications are among the most frequently emphasized and researched investments of recent years. It is therefore essential to understand and develop NFT technology, which also appeals to end consumers through its structure and the opportunities it offers. Developing the technical infrastructure and blockchain technology underpins NFT effectiveness; various model and system proposals for these technologies, used in sectors such as education, construction, and health, are discussed above. On the other hand, security and compliance remain issues about which users may still have concerns. Prominent topics include ensuring tax control, adopting a socio-technical perspective on cybersecurity, using NFTs as a security tool in digital production systems, and verifying personal information. Diffusion of these innovations is possible by integrating them into corporate strategies and business models; here, developing business models specific to NFT technology and adopting NFTs as an active income source come to the fore. Finally, the literature shows that user experience also plays a role in NFT effectiveness: providing convenience, addressing user concerns, and ensuring ease of use may all increase it. However, no studies have evaluated the prominent criteria for making NFTs more effective by integrating other Web 3.0 technologies; in other words, a model is needed to evaluate the prominent criteria for increasing NFT effectiveness. The literature review demonstrates that most studies on NFT investments have focused on identifying the important variables in this regard.
It is also clear that improvements to these indicators are needed to increase the effectiveness of such investments. However, making these improvements also increases companies' costs, so it is very difficult for a company to undertake many improvements at once. Hence, companies should give priority to the most significant determinants. In this way, specific investment strategies can be provided to companies so that performance improvements can be obtained without incurring high costs. A priority analysis is therefore needed in this framework, but studies in this context are limited, which can be accepted as the main literature gap on this subject. To address this gap, a model is proposed that helps identify the critical factors for increasing NFT effectiveness.

3 Methodology

The aim of the study is to determine the most appropriate identity management strategy for NFTs in the metaverse. For this purpose, the criteria for NFT management are first determined and weighted. The criteria identified in the literature review are weighted with the M-SWARA method. In the second stage of the study, the strategies are evaluated against these criteria using the VIKOR method (Sahoo et al. 2024). The two-stage method is driven by expert opinions; therefore, fuzzy numbers are used to capture the uncertainty of linguistic expressions in the analysis (Chusi et al. 2024; Aslan and Özüpak 2024). Accordingly, quantum picture fuzzy rough sets (QPFR) are integrated into both methods. The most current form of QPFR numbers employs the golden ratio, which makes it possible to handle uncertainty in the most realistic way. In addition, since the selection of experts and their equal weighting have been criticized in the literature, artificial intelligence-based expert selection is adopted. The mathematical derivations of the proposed methodology are summarized as follows.

Equations (9)–(11) introduce the proposed algorithm, combining the principles of quantum mechanics, the golden ratio, and fuzzy numbers for complex decision-making processes and scenarios. Equations (12)–(14) illustrate the essentials of conventional, intuitionistic, and picture fuzzy sets, and Eqs. (15)–(19) introduce the operational rules of picture fuzzy sets. Next, rough numbers are presented with Eqs. (20)–(25), and their extension with quantum mechanics is given by Eqs. (26)–(44). The golden cut is then adapted to the proposed technique to establish the ideal weighting between membership and non-membership degrees with Eqs. (45)–(49). Finally, Eqs. (50)–(56) introduce the proposed technique for multi-criteria decision-making. The details of the proposed methodology are defined in the following sections, and the proposed model is shown in Fig. 1.

Fig. 1

The flowchart of the hybrid model

3.1 AI-based decision-making for expert prioritization

In the decision-making process, diversity in experts' demographic information and experience has increasingly been taken into account in recent years. However, prioritizing experts amid this diversity also adds complexity to the analysis (Imran et al. 2024). The solution proposed here is the AI-based k-means clustering algorithm (Yazdi and Komasi 2024). Clusters are created based on basic demographic characteristics of the experts, such as education, salary, age, and sector. The elbow method is preferred for determining the optimal number of clusters; it is a visualization technique for finding the point at which additional clusters stop improving the model. After this, the weights of the clusters are defined, reflecting the sizes and diversity of the clusters. Finally, the weight of each expert in a cluster is calculated proportionally based on proximity to the cluster centre. This artificial intelligence-based methodology provides an innovative approach to expert prioritization; its steps are presented below.

In Step 1, the optimal number of clusters is determined using the elbow method. The elbow method, an important component of the study's methodology, serves as a guide for defining the optimal number of clusters when grouping decision makers (Liu et al. 2024). By plotting the Within-Cluster Sum of Squares (WCSS) against the number of clusters (k), this method reveals the point at which additional clusters cease to significantly improve the model, appearing as an elbow in the graph. The elbow technique thus balances the granularity of clustering against the declining rewards of further complexity (Hussain and Ullah 2024), ensuring that the selected number of clusters encapsulates the essential patterns in the demographic data and providing a robust foundation for the subsequent steps of the prioritization methodology.

In Step 2, the Within-Cluster Sum of Squares (WCSS) is computed for different values of k using Eq. (1).

$$WCSS=\sum_{j=1}^{k}\sum_{{x}_{i}\in {C}_{j}}d({x}_{i},{c}_{j}{)}^{2}$$
(1)

where \(k\) is the number of clusters, \({C}_{j}\) is the set of data points in cluster \(j\), \({x}_{i}\) is a data point, \({c}_{j}\) is the centre of cluster \(j\), and \(d({x}_{i},{c}_{j})\) is the Euclidean distance between \({x}_{i}\) and \({c}_{j}\). Plotting WCSS against k, the elbow is identified as the optimal value of k at which the reduction in WCSS slows down.
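As a brief illustration, the WCSS of Eq. (1) can be computed directly once assignments and centres are known. This is a minimal sketch, not the study's implementation; the data, labels, and centres below are hypothetical toy values standing in for expert demographics.

```python
import numpy as np

def wcss(points, labels, centers):
    """Within-Cluster Sum of Squares (Eq. 1): sum of squared
    Euclidean distances from each point to its cluster centre."""
    return sum(
        np.sum((points[labels == j] - c) ** 2)
        for j, c in enumerate(centers)
    )

# Hypothetical expert data (e.g. age, years of experience)
X = np.array([[30.0, 5.0], [32.0, 6.0], [50.0, 20.0], [52.0, 22.0]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[31.0, 5.5], [51.0, 21.0]])
print(wcss(X, labels, centers))  # 6.5
```

Repeating this computation for k = 1, 2, 3, … and plotting the results yields the elbow curve described above.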

In Step 3 of the AI-based expert prioritization, the k-means clustering algorithm is used to cluster the decision makers. With the optimal value of k, initial cluster centres \({c}_{1},{c}_{2},...,{c}_{k}\) are chosen, and each data point \({x}_{i}\) is assigned to the nearest cluster centre using the Euclidean distance in Eq. (2).

$$d({x}_{i},{c}_{j})=\sqrt{\sum_{l=1}^{n}({x}_{il}-{c}_{jl}{)}^{2}}$$
(2)

where \(n\) represents the number of dimensions of the data. The cluster assignment of each data point \({x}_{i}\) is denoted by \({a}_{i}\), where \({a}_{i}=j\) indicates that \({x}_{i}\) belongs to cluster \(j\) (Yang 2024; Kong et al. 2024). Cluster centres are then updated as the average of the data points in each cluster with Eq. (3).

$${c}_{j}=\frac{1}{\mid {C}_{j}\mid }\sum_{{x}_{i}\in {C}_{j}}{x}_{i}$$
(3)

where \({C}_{j}\) is the set of data points in cluster \(j\), and \(\mid {C}_{j}\mid\) is the number of data points in cluster \(j\). Equations (2) and (3) are applied iteratively until no data point changes its cluster assignment or a maximum number of iterations is reached.
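The iteration of Eqs. (2) and (3) can be sketched as follows. This is an illustrative Lloyd-style loop with hypothetical data and initial centres; it omits safeguards (e.g. for empty clusters) that a production implementation would need.

```python
import numpy as np

def kmeans(X, centers, max_iter=100):
    """Assign each point to the nearest centre (Eq. 2) and recompute
    centres as cluster means (Eq. 3) until assignments stabilize."""
    labels = None
    for _ in range(max_iter):
        # Eq. (2): Euclidean distance of every point to every centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # no assignment changed: converged
        labels = new_labels
        # Eq. (3): update each centre as the mean of its cluster
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
init = np.array([[0.0, 0.0], [6.0, 6.0]])
labels, centers = kmeans(X, init)
print(labels)  # [0 0 1 1]
```

The loop terminates exactly as the text describes: either when no point changes cluster or when the iteration cap is reached.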

In Step 4, the weights of the decision makers are calculated from the cluster weights. The mean standard deviation of each cluster is computed with Eqs. (4)–(6).

$${s}_{j}=\frac{1}{n}\sum_{l=1}^{n}{\sigma }_{jl}$$
(4)
$$\sigma _{{jl}} = \sqrt {\frac{1}{{\left| {C_{j} } \right|}}\sum\nolimits_{{x_{i} \in C_{j} }} {(x_{{il}} - \overline{x} _{{jl}} )^{2} } }$$
(5)
$$\overline{x} _{{jl}} = \frac{1}{{\left| {C_{j} } \right|}}\sum\nolimits_{{x_{i} \in C_{j} }} {x_{{il}} }$$
(6)

where \({s}_{j}\) is the mean standard deviation of cluster \(j\), \(n\) is the number of features (dimensions) of the data, \({\sigma }_{jl}\) is the standard deviation of feature \(l\) in cluster \(j\), and \(\overline{x}_{jl}\) is the mean of feature \(l\) in cluster \(j\). The cluster weights \({w}_{j}\) are then obtained with Eq. (7).

$${w}_{j}=\mid {C}_{j}\mid \times {s}_{j}$$
(7)

where \(\mid {C}_{j}\mid\) is the size of cluster \(j\) (Lee 2024). The weights of the decision makers are calculated with Eq. (8).

$${w}_{tj}=\frac{1}{\mid {C}_{j}\mid }\frac{{w}_{j}}{\sum_{j=1}^{k}{w}_{j}}$$
(8)

where \({w}_{tj}\) represents the weight of decision maker \(t\) in cluster \(j\), with \(t\) indexing the decision makers.
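Steps 3 and 4 together can be sketched as below. This is a hedged illustration, assuming the normalisation in Eq. (8) runs over all cluster weights and using numpy's population standard deviation; the data are hypothetical.

```python
import numpy as np

def expert_weights(X, labels, k):
    """Cluster and expert weights following Eqs. (4)-(8): a cluster's
    weight is its size times its mean per-feature standard deviation
    (capturing size and diversity); each expert in cluster j then
    shares that cluster's normalised weight equally."""
    sizes = np.array([np.sum(labels == j) for j in range(k)])
    # Eqs. (4)-(6): mean of the per-feature std devs in each cluster
    s = np.array([X[labels == j].std(axis=0).mean() for j in range(k)])
    w = sizes * s                 # Eq. (7): cluster weights
    w_norm = w / w.sum()          # normalise over all clusters
    # Eq. (8): split each cluster's weight equally among its members
    return np.array([w_norm[labels[i]] / sizes[labels[i]]
                     for i in range(len(X))])

X = np.array([[30.0, 5.0], [32.0, 6.0], [50.0, 20.0], [54.0, 24.0]])
labels = np.array([0, 0, 1, 1])
print(expert_weights(X, labels, 2))
```

By construction, the individual expert weights sum to one, so they can be used directly as coefficients when aggregating expert evaluations.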

3.2 Neuro computing with facial expressions

Facial expressions such as anger, happiness, and sadness are important elements of human communication, and for decision makers they are also an effective factor in decision making. The network of facial muscles shapes our judgments about individuals based on their emotional expressions. Neuro decision making is a recent methodology for classifying facial expressions and emotions (Göller et al. 2024); its basis lies in Darwin's research on the non-verbal display of emotions. The Facial Action Coding System (FACS), developed in the 1970s, is a classification scheme consisting of 46 different action units (Rahadian et al. 2024). The FACS algorithm is used as a reliable and valid method in areas requiring analysis of nonverbal behavior and social interaction (Westermann et al. 2024; Cakir and Arica 2024).

3.3 Modelling uncertainty with quantum picture fuzzy rough sets with golden cuts

Quantum picture fuzzy rough sets with golden cuts are used to determine the lower and upper limits of the experts' subjective evaluations. Quantum mechanics and the golden ratio are used in this boundary-determination process. Quantum mechanics is the branch of physics that investigates matter and energy at the subatomic level; the theory is formulated in terms of wave functions and quantum states (Du et al. 2024; Hou 2024). Fuzzy sets, in turn, are a mathematical set theory that accounts for uncertainty: they express the degree to which elements belong to a set with numbers between 0 and 1, allowing uncertainty to be represented in the analysis. Here, the quantum-mechanical uncertainty of a particle's state is integrated into the fuzzy number system, with the probability of an event proportional to the squared magnitude of its wave function (Kou et al. 2023; Fan et al. 2023). The algorithm combining quantum mechanics, the golden ratio, and fuzzy numbers is given by Eqs. (9)–(11).

$$Q\left(\left|u>\right.\right)=\varphi {e}^{j\theta }$$
(9)
$$\left|C>\right.=\left\{\left|{u}_{1}>\right.,\left|{u}_{2}>\right.,\dots ,\left|{u}_{n}>\right.\right\}$$
(10)
$$\sum_{\left|u>\subseteq \right|C>}\left|Q(\left|u>)\right.\right|=1$$
(11)

In Eqs. (9)–(11), \(C\) represents a collection of exhaustive events \(\left|{u}_{i}>\right.\). The squared magnitude of the wave function, \(\left|Q(\left|u>\right.)\right|={\varphi }^{2}\), gives the amplitude-based probability of occurrence of event \(\left|u>\right.\), and the value of \({\varphi }^{2}\) must lie within the range 0 to 1. Accordingly, \({\left|\varphi \right|}^{2}\) serves as the degree of belief in event \(\left|u>\right.\), with \(\theta\) representing its phase angle, which can range from 0 to 360° (Carayannis et al. 2023).
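A quick numerical check of Eqs. (9)–(11) can be sketched as follows; the amplitudes and phase angles are hypothetical values chosen only to satisfy the normalisation condition.

```python
import numpy as np

# Each event |u_i> gets an amplitude phi * e^{j*theta} (Eq. 9);
# the squared magnitudes phi^2 act as degrees of belief and must
# sum to 1 over the exhaustive event set |C> (Eqs. 10-11).
phis = np.array([0.6, 0.8])               # hypothetical amplitudes
thetas = np.deg2rad([30.0, 120.0])        # phase angles in [0, 360)
amplitudes = phis * np.exp(1j * thetas)   # Eq. (9)
beliefs = np.abs(amplitudes) ** 2         # |phi|^2 per event
print(beliefs, beliefs.sum())             # [0.36 0.64] 1.0
```

Note that the phase angles do not affect the belief degrees, only the complex amplitudes; the normalisation of Eq. (11) constrains the magnitudes alone.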

Picture fuzzy sets (PFSs) extend fuzzy sets and the intuitionistic fuzzy sets introduced in the 1980s (Rani et al. 2024). PFSs were proposed by Cuong and colleagues in the last decade to provide more comprehensive fuzzy evaluations for complex decision-making problems (Ahmad et al. 2024). For this purpose, PFSs consider positive, neutral, negative, and refusal membership degrees over a universe. Conventional fuzzy sets, shown in Eq. (12), include only the membership function.

$$A=\left\{\langle x,{\mu }_{A}(x)\rangle \left|x\right.\in X\right\}$$
(12)

where \(A\) is a fuzzy set, \(X\) is a universe of discourse, and \({\mu }_{A}:X\to \left[0,1\right]\) represents the membership degree of \(x\) in the fuzzy set \(A\). Intuitionistic fuzzy sets, shown in Eq. (13), are formulated with both membership and non-membership functions.

$$A=\left\{\langle x,{\mu }_{A}\left(x\right),{v}_{A}\left(x\right)\rangle \left|x\right.\in X\right\}$$
(13)

where \({v}_{A}\) is the non-membership function of the fuzzy set, with \(0\le {\mu }_{A}\left(x\right)+{v}_{A}\left(x\right)\le 1, \forall x\in X\). Picture fuzzy sets, shown in Eq. (14), are defined by adding further membership functions on the universe \(X\), as follows.

$$A=\left\{\langle x,{\mu }_{A}\left(x\right),{n}_{A}\left(x\right),{v}_{A}\left(x\right),{h}_{A}\left(x\right)\rangle \left|x\right.\in X\right\}$$
(14)

where \({n}_{A}\) represents the neutral and \({h}_{A}\) the refusal degree of membership of \(x\) in \(A\), with \({\mu }_{A}\left(x\right)+{n}_{A}\left(x\right)+{v}_{A}\left(x\right)+{h}_{A}\left(x\right)=1, \forall x\in X\). Picture fuzzy sets thus accommodate models with several types of expert opinion: the membership value \({\mu }_{A}\) for 'yes', the neutral value \({n}_{A}\) for 'abstain', the non-membership value \({v}_{A}\) for 'no', and the refusal value \({h}_{A}\) for 'ignoring'. In this way, results that are more consistent with the real world can be obtained (Kahraman 2024). Some operations on picture fuzzy sets \(A\) and \(B\) are shown in Eqs. (15)–(19).

$$A\subseteq B\, \text{if }{\mu }_{A}\left(x\right)\le {\mu }_{B}\left(x\right)\, \text{and}\, {n}_{A}\left(x\right)\le {n}_{B}\left(x\right)\, \text{and}\, {v}_{A}\left(x\right)\ge {v}_{B}\left(x\right), \forall x\in X$$
(15)
$$A=B\, \text{if}\, A\subseteq B\, \text{and}\, B\subseteq A$$
(16)
$$A\cup B=\left\{\left(x, max\left({\mu }_{A}\left(x\right),{\mu }_{B}\left(x\right)\right),min\left({n}_{A}\left(x\right),{n}_{B}\left(x\right)\right),min\left({v}_{A}\left(x\right),{v}_{B}\left(x\right)\right)\right)\left|x\right.\in X\right\}$$
(17)
$$A\cap B=\left\{\left(x, min\left({\mu }_{A}\left(x\right),{\mu }_{B}\left(x\right)\right),min\left({n}_{A}\left(x\right),{n}_{B}\left(x\right)\right),max\left({v}_{A}\left(x\right),{v}_{B}\left(x\right)\right)\right)\left|x\right.\in X\right\}$$
(18)
$$coA=\overline{A }=\left\{\left(x,{v}_{A}\left(x\right),{n}_{A}\left(x\right),{\mu }_{A}\left(x\right)\right)\left|x\right.\in X\right\}$$
(19)

Rough numbers are used to reduce the subjectivity and ambiguity of assessments in decision-making analysis. A rough number comprises a lower bound, an upper bound, and a rough boundary interval. The lower approximation \((\underline{Apr}\left({C}_{i}\right))\), upper approximation \((\overline{Apr}\left({C}_{i}\right))\), and boundary region \((Bnd\left({C}_{i}\right))\) of a class \({C}_{i}\) are presented in Eqs. (20)–(22).

$$\underline{Apr}\left({C}_{i}\right)=\cup \left\{Y\in X/R(Y)\le {C}_{i}\right\}$$
(20)
$$\overline{Apr}\left({C}_{i}\right)=\cup \left\{Y\in X/R(Y)\ge {C}_{i}\right\}$$
(21)
$$Bnd\left({C}_{i}\right)=\cup \left\{Y\in X/R(Y)\ne {C}_{i}\right\}$$
(22)

where \(Y\) is an arbitrary object of the universe \(X\), and \(R\) is a set of \(N\) classes \(({C}_{1},\dots,{C}_{N})\) covering the objects in \(X\), with \(\forall Y\in X\), \({C}_{i}\in R\). The lower limit \((\underline{Lim}\left({C}_{i}\right))\), upper limit \((\overline{Lim}\left({C}_{i}\right))\), and rough number \((RN\left({C}_{i}\right))\) of \({C}_{i}\) are given by Eqs. (23)–(25).

$$\underline{Lim}\left({C}_{i}\right)=\sqrt[{N}_{L}]{\prod_{Y\in \underline{Apr}\left({C}_{i}\right)}Y}$$
(23)
$$\overline{Lim}\left({C}_{i}\right)=\sqrt[{N}_{U}]{\prod_{Y\in \overline{Apr}\left({C}_{i}\right)}Y}$$
(24)
$$RN\left({C}_{i}\right)=\lceil\underline{Lim}\left({C}_{i}\right),\overline{Lim}\left({C}_{i}\right)\rfloor$$
(25)

where \({N}_{L}\) and \({N}_{U}\) are the numbers of objects in \(\underline{Apr}\left({C}_{i}\right)\) and \(\overline{Apr}\left({C}_{i}\right)\), respectively. In this study, golden-cut fuzzy rough numbers based on quantum mechanics are proposed to obtain more comprehensive and consistent results than classical fuzzy-based modelling. An important advantage of these numbers is that they can take into account both the evaluations of different experts and the opinions of experts with specific characteristics. QPFR sets represent the degree of membership of an element by a complex number. The membership values are formulated in Eqs. (26)–(44).
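As a brief illustration of the classical rough numbers in Eqs. (20)–(25): the ratings below are hypothetical expert scores for one class, and the geometric-mean form of the limits in Eqs. (23)–(24) is assumed.

```python
import numpy as np

def rough_number(ratings, c):
    """Rough number of class c from a set of expert ratings:
    the lower approximation collects ratings <= c (Eq. 20), the
    upper collects ratings >= c (Eq. 21), and the limits are the
    geometric means of those sets (Eqs. 23-24)."""
    ratings = np.asarray(ratings, dtype=float)
    lower_apr = ratings[ratings <= c]                     # Eq. (20)
    upper_apr = ratings[ratings >= c]                     # Eq. (21)
    lower_lim = lower_apr.prod() ** (1 / len(lower_apr))  # Eq. (23)
    upper_lim = upper_apr.prod() ** (1 / len(upper_apr))  # Eq. (24)
    return lower_lim, upper_lim                           # Eq. (25)

# Four hypothetical expert ratings; rough number of the class "2"
print(rough_number([1, 2, 2, 4], 2))
```

In the QPFR extension that follows, the same lower/upper construction is applied separately to each of the four picture-fuzzy membership components.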

$$\left|{C}_{A}>\right.=\left\{\begin{array}{c}\langle u,(\lceil\underline{Lim}\left({C}_{i{\mu }_{A}}\right),\overline{Lim}\left({C}_{i{\mu }_{A}}\right)\rfloor \left(u\right),\lceil\underline{Lim}\left({C}_{i{n}_{A}}\right),\overline{Lim}\left({C}_{i{n}_{A}}\right)\rfloor\left(u\right),\\ \lceil\underline{Lim}\left({C}_{i{v}_{A}}\right),\overline{Lim}\left({C}_{i{v}_{A}}\right)\rfloor\left.\left(u\right),\lceil\underline{Lim}\left({C}_{i{h}_{A}}\right),\overline{Lim}\left({C}_{i{h}_{A}}\right)\rfloor\left(u\right))\right|u\in {2}^{\left|{C}_{A}>\right.}\end{array}\right\}$$
(26)

where \({C}_{i{\mu }_{A}}\) defines the membership degree, \({C}_{i{n}_{A}}\) the neutral degree, \({C}_{i{v}_{A}}\) the non-membership degree, and \({C}_{i{h}_{A}}\) the refusal degree; their definitions in the picture fuzzy rough numbers are given as follows.

$$\underline{Lim}\left({C}_{i{\mu }_{A}}\right)=\frac{1}{{N}_{L{\mu }_{A}}}\sum_{i=1}^{{N}_{L{\mu }_{A}}}Y,\quad Y\in \underline{Apr}\left({C}_{i{\mu }_{A}}\right)$$
(27)
$$\underline{Lim}\left({C}_{i{n}_{A}}\right)=\frac{1}{{N}_{L{n}_{A}}}\sum_{i=1}^{{N}_{L{n}_{A}}}Y,\quad Y\in \underline{Apr}\left({C}_{i{n}_{A}}\right)$$
(28)
$$\underline{Lim}\left({C}_{i{v}_{A}}\right)=\frac{1}{{N}_{L{v}_{A}}}\sum_{i=1}^{{N}_{L{v}_{A}}}Y,\quad Y\in \underline{Apr}\left({C}_{i{v}_{A}}\right)$$
(29)
$$\underline{Lim}\left({C}_{i{h}_{A}}\right)=\frac{1}{{N}_{L{h}_{A}}}\sum_{i=1}^{{N}_{L{h}_{A}}}Y,\quad Y\in \underline{Apr}\left({C}_{i{h}_{A}}\right)$$
(30)
$$\overline{Lim}\left({C}_{i{\mu }_{A}}\right)=\frac{1}{{N}_{U{\mu }_{A}}}\sum_{i=1}^{{N}_{U{\mu }_{A}}}Y,\quad Y\in \overline{Apr}\left({C}_{i{\mu }_{A}}\right)$$
(31)
$$\overline{Lim}\left({C}_{i{n}_{A}}\right)=\frac{1}{{N}_{U{n}_{A}}}\sum_{i=1}^{{N}_{U{n}_{A}}}Y,\quad Y\in \overline{Apr}\left({C}_{i{n}_{A}}\right)$$
(32)
$$\overline{Lim}\left({C}_{i{v}_{A}}\right)=\frac{1}{{N}_{U{v}_{A}}}\sum_{i=1}^{{N}_{U{v}_{A}}}Y,\quad Y\in \overline{Apr}\left({C}_{i{v}_{A}}\right)$$
(33)
$$\overline{Lim}\left({C}_{i{h}_{A}}\right)=\frac{1}{{N}_{U{h}_{A}}}\sum_{i=1}^{{N}_{U{h}_{A}}}Y,\quad Y\in \overline{Apr}\left({C}_{i{h}_{A}}\right)$$
(34)

where \({N}_{L{\mu }_{A}}\), \({N}_{L{n}_{A}}\), \({N}_{L{v}_{A}}\), and \({N}_{L{h}_{A}}\) are the numbers of elements in \(\underline{Apr}\left({C}_{i{\mu }_{A}}\right)\), \(\underline{Apr}\left({C}_{i{n}_{A}}\right)\), \(\underline{Apr}\left({C}_{i{v}_{A}}\right)\), and \(\underline{Apr}\left({C}_{i{h}_{A}}\right)\), respectively, while \({N}_{U{\mu }_{A}}\), \({N}_{U{n}_{A}}\), \({N}_{U{v}_{A}}\), and \({N}_{U{h}_{A}}\) are defined analogously for \(\overline{Apr}\left({C}_{i{\mu }_{A}}\right)\), \(\overline{Apr}\left({C}_{i{n}_{A}}\right)\), \(\overline{Apr}\left({C}_{i{v}_{A}}\right)\), and \(\overline{Apr}\left({C}_{i{h}_{A}}\right)\).
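A minimal sketch of the limit computation in Eqs. (27)–(34), assuming the approximations collect the ratings not above (lower) and not below (upper) the evaluated value, as formalized in Eqs. (35)–(42) below; the sample membership grades are illustrative.

```python
def component_limits(value, ratings):
    """Rough limits of one picture fuzzy component (membership, neutral,
    non-membership, or refusal): arithmetic means of the lower and upper
    approximations, as in Eqs. (27)-(34)."""
    lower_apr = [y for y in ratings if y <= value]  # objects with R(Y) <= C_i
    upper_apr = [y for y in ratings if y >= value]  # objects with R(Y) >= C_i
    return (sum(lower_apr) / len(lower_apr),
            sum(upper_apr) / len(upper_apr))

# Membership grades assigned by five experts (illustrative)
mu_lo, mu_hi = component_limits(0.6, [0.4, 0.5, 0.6, 0.7, 0.8])
```

The same function applies unchanged to the neutral, non-membership, and refusal grades.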

$$\underline{Apr}\left({C}_{i{\mu }_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\le {C}_{i{\mu }_{A}}\right\}$$
(35)
$$\underline{Apr}\left({C}_{i{n}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\le {C}_{i{n}_{A}}\right\}$$
(36)
$$\underline{Apr}\left({C}_{i{v}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\le {C}_{i{v}_{A}}\right\}$$
(37)
$$\underline{Apr}\left({C}_{i{h}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\le {C}_{i{h}_{A}}\right\}$$
(38)
$$\overline{Apr}\left({C}_{i{\mu }_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\ge {C}_{i{\mu }_{A}}\right\}$$
(39)
$$\overline{Apr}\left({C}_{i{n}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\ge {C}_{i{n}_{A}}\right\}$$
(40)
$$\overline{Apr}\left({C}_{i{v}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\ge {C}_{i{v}_{A}}\right\}$$
(41)
$$\overline{Apr}\left({C}_{i{h}_{A}}\right)=\cup \left\{Y\in X/\widetilde{R}(Y)\ge {C}_{i{h}_{A}}\right\}$$
(42)
(42)

where \(\widetilde{{C}_{i}}=\left({C}_{i{\mu }_{A}},{C}_{i{n}_{A}},{C}_{i{v}_{A}},{C}_{i{h}_{A}}\right)\) and \(\widetilde{R}\) is the collection \(\left\{\widetilde{{C}_{1}},\widetilde{{C}_{2}},\dots ,\widetilde{{C}_{n}}\right\}\); \({C}_{i{\mu }_{A}},{C}_{i{n}_{A}},{C}_{i{v}_{A}},{C}_{i{h}_{A}}\) are the picture fuzzy sets of class \(\widetilde{{C}_{i}}\). The general formulation of the QPFR number, with the amplitude and the phase angle of an event, is presented below.

$$C=\left[{C}_{\mu }.{e}^{j2\pi .\alpha },{C}_{n}.{e}^{j2\pi .\gamma },{C}_{v}.{e}^{j2\pi .\beta },{C}_{h}.{e}^{j2\pi .T}\right]$$
(43)
$${\varphi }^{2}=\left|{C}_{\mu }\left(\left|{u}_{i}>\right.\right)\right|$$
(44)

The amplitudes of the quantum membership, neutral, non-membership, and refusal degrees, symbolized by \({C}_{\mu }\), \({C}_{n}\), \({C}_{v}\) and \({C}_{h}\) respectively, are expressed in terms of the phase angles \(\alpha\), \(\gamma\), \(\beta\), \(T\). The amplitude of the membership value \({C}_{\mu }\) of quantum fuzzy sets is symbolized by \({\varphi }^{2}\). The utilization of the golden ratio in multi-objective optimization problems allows for a harmonious balance between two opposing objectives, such as maximizing profit while minimizing risk. The golden ratio is approximately 1.618 and is symbolized by \(G\) in the equations below. In the context of the membership and non-membership degrees, the golden ratio can be utilized to establish the ideal weighting between these two objectives, so that a compromise is achieved between the components of the fuzzy sets. The representation of the degrees by the golden ratio is expressed through the amplitudes in Eqs. (45) and (46).

$${C}_{n}=\frac{{C}_{\mu }}{G}$$
(45)
$${C}_{h}=\frac{{C}_{v}}{G}$$
(46)

Finally, the phase angles for the suggested sets are obtained. The phase angle of the membership degree in the QPFR realm is symbolized by \(\alpha\), as given in Eqs. (47)–(49).

$$\alpha =\left|{C}_{\mu }\left(\left|{u}_{i}>\right.\right)\right|$$
(47)
$$\gamma =\frac{\alpha }{G}$$
(48)
$$T=\frac{\beta }{G}$$
(49)
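The golden-cut relations in Eqs. (45)–(46) and (48)–(49) reduce to simple divisions by the golden ratio; a minimal sketch with illustrative input values:

```python
G = (1 + 5 ** 0.5) / 2  # golden ratio, approximately 1.618

def golden_cut(c_mu, c_v, alpha, beta):
    """Derive the neutral and refusal amplitudes (Eqs. 45-46) and the
    remaining phase angles (Eqs. 48-49) from the membership and
    non-membership degrees via the golden cut."""
    c_n = c_mu / G     # Eq. (45)
    c_h = c_v / G      # Eq. (46)
    gamma = alpha / G  # Eq. (48)
    T = beta / G       # Eq. (49)
    return c_n, c_h, gamma, T

# Illustrative membership/non-membership amplitudes and phase angles
c_n, c_h, gamma, T = golden_cut(0.8, 0.1, 0.8, 0.1)
```

Only the membership and non-membership degrees need to be elicited from the experts; the neutral and refusal degrees follow deterministically from the golden cut.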

\({X}_{1}\) and \({X}_{2}\) mean two universes, and \({\widetilde{A}}_{c}\) and \({\widetilde{B}}_{c}\), respectively represented by \(\left( {\begin{array}{ll} {\Big\lceil {\underline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right),}\, {\overline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underline{\alpha } _{{\tilde{A}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\alpha }_{{\tilde{A}}} }}{{2\pi }}} \right)} \Big\rfloor }},\,~\Big\lceil {\underline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right),}\, {\overline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{A}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\gamma }_{{\tilde{A}}} }}{{2\pi }}} \right)} \Big\rfloor }},~} \\ {\Big\lceil {\underline{{Lim}} \left( {C_{{v_{{\tilde{A}}} }} } \right),}\, {\overline{{Lim}} \left( {C_{{v_{{\tilde{A}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{A}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\beta }_{{\tilde{A}}} }}{{2\pi }}} \right)} \Big\rfloor }},~\Big\lceil {\underline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right),}\, {\overline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{A}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{T}_{{\tilde{A}}} }}{{2\pi }}} \right)} \Big\rfloor }} } \\ \end{array} } \right)\),\(\left( {\begin{array}{ll} {\Big\lceil {\underline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right),} \,{\overline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right)} \Big\rfloor e^{{j2\pi .\Big\lceil {\left( {\frac{{\underline{\alpha } _{{\tilde{B}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\alpha }_{{\tilde{B}}} }}{{2\pi }}} \right)} \Big\rfloor 
}},\,\Big\lceil {\underline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right),} {\overline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{B}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\gamma }_{{\tilde{B}}} }}{{2\pi }}} \right)} \Big\rfloor }},~} \\ {\Big\lceil {\underline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right),}\,{\overline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{B}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\beta }_{{\tilde{B}}} }}{{2\pi }}} \right)} \Big\rfloor }},~\Big\lceil {\underline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right),}\, {\overline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{B}}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{T}_{{\tilde{B}}} }}{{2\pi }}} \right)} \Big\rfloor }} } \\ \end{array} } \right)\)

and they are two QPFRSs derived from the universes of discourse \({X}_{1}\) and \({X}_{2}\), respectively. The operations on QPFR numbers are detailed in Eqs. (50)–(53).

$$\begin{array}{c}\lambda *{\widetilde{A}}_{c}=\left\{\begin{array}{c}\Bigg\lceil\underline{Lim}\left({C}_{{\mu }_{\widetilde{A}}}\right)\lambda,\,\overline{Lim}\left({C}_{{\mu }_{\widetilde{A}}}\right)\lambda \Bigg\rfloor {e}^{j2\pi .\Bigg\lceil\left(\frac{{\underset{\_}{\alpha }}_{\widetilde{A}}}{2\pi }\right)\lambda,\,\left(\frac{{\overline{\alpha }}_{\widetilde{A}}}{2\pi }\right)\lambda \Bigg\rfloor},\,\Bigg\lceil\underline{Lim}\left({C}_{{n}_{\widetilde{A}}}\right)\lambda,\,\overline{Lim}\left({C}_{{n}_{\widetilde{A}}}\right)\lambda \Bigg\rfloor {e}^{j2\pi .\Bigg\lceil\left(\frac{{\underset{\_}{\gamma }}_{\widetilde{A}}}{2\pi }\right)\lambda,\,\left(\frac{{\overline{\gamma }}_{\widetilde{A}}}{2\pi }\right)\lambda \Bigg\rfloor}, \\ \Bigg\lceil\underline{Lim}\left({C}_{{v}_{\widetilde{A}}}\right)\lambda,\,\overline{Lim}\left({C}_{{v}_{\widetilde{A}}}\right)\lambda \Bigg\rfloor {e}^{j2\pi .\Bigg\lceil\left(\frac{{\underset{\_}{\beta }}_{\widetilde{A}}}{2\pi }\right)\lambda ,\left(\frac{{\overline{\beta }}_{\widetilde{A}}}{2\pi }\right)\lambda \Bigg\rfloor},\,\Bigg\lceil\underline{Lim}\left({C}_{{h}_{\widetilde{A}}}\right)\lambda,\,\overline{Lim}\left({C}_{{h}_{\widetilde{A}}}\right)\lambda \Bigg\rfloor {e}^{j2\pi .\Bigg\lceil\left(\frac{{\underline{T}}_{\widetilde{A}}}{2\pi }\right)\lambda,\,\left(\frac{{\overline{T}}_{\widetilde{A}}}{2\pi }\right)\lambda \Bigg\rfloor}\end{array}\right\}, \\ \lambda >0\end{array}$$
(50)
$$\begin{array}{c}{{\widetilde{A}}_{c}}^{\lambda }=\left\{\begin{array}{c}\Bigg\lceil{\underline{Lim}\left({C}_{{\mu }_{\widetilde{A}}}\right)}^{\lambda },\,{\overline{Lim}\left({C}_{{\mu }_{\widetilde{A}}}\right)}^{\lambda }\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{\left(\frac{{\underset{\_}{\alpha }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda },{\left(\frac{{\overline{\alpha }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda }\Bigg\rfloor},\,\Bigg\lceil{\underline{Lim}\left({C}_{{n}_{\widetilde{A}}}\right)}^{\lambda },\,{\overline{Lim}\left({C}_{{n}_{\widetilde{A}}}\right)}^{\lambda }\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{\left(\frac{{\underset{\_}{\gamma }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda },{\left(\frac{{\overline{\gamma }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda }\Bigg\rfloor}, \\ \Bigg\lceil{\underline{Lim}\left({C}_{{v}_{\widetilde{A}}}\right)}^{\lambda },\,{\overline{Lim}\left({C}_{{v}_{\widetilde{A}}}\right)}^{\lambda }\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{\left(\frac{{\underset{\_}{\beta }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda },{\left(\frac{{\overline{\beta }}_{\widetilde{A}}}{2\pi }\right)}^{\lambda }\Bigg\rfloor},\,\Bigg\lceil{\underline{Lim}\left({C}_{{h}_{\widetilde{A}}}\right)}^{\lambda },\,{\overline{Lim}\left({C}_{{h}_{\widetilde{A}}}\right)}^{\lambda }\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{\left(\frac{{\underline{T}}_{\widetilde{A}}}{2\pi }\right)}^{\lambda },{\left(\frac{{\overline{T}}_{\widetilde{A}}}{2\pi }\right)}^{\lambda }\Bigg\rfloor}, \end{array}\right\},\\ \lambda >0\end{array}$$
(51)
$$\tilde{A}_{c} \cup \tilde{B}_{c} = \left\{ {\begin{array}{ll} {\Bigg\lceil {min\left( {\underline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\alpha } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\alpha } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{max\left( {\overline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\alpha }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\alpha }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {min\left( {\underline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{max\left( {\overline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\gamma }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\gamma }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {min\left( {\underline{{Lim}} \left( {C_{{v_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{max\left( {\overline{{Lim}} \left( 
{C_{{v_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\beta }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\beta }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {min\left( {\underline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{max\left( {\overline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{T}_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{T}_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ \end{array} } \right\}$$
(52)
$$\tilde{A}_{c} \cap \tilde{B}_{c} = \left\{ {\begin{array}{ll} {\Bigg\lceil {max\left( {\underline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\alpha } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\alpha } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{min\left( {\overline{{Lim}} \left( {C_{{\mu _{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\alpha }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{\mu _{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\alpha }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {max\left( {\underline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{min\left( {\overline{{Lim}} \left( {C_{{n_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\gamma }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{n_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\gamma }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {max\left( {\underline{{Lim}} \left( {C_{{v_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{min\left( {\overline{{Lim}} \left( 
{C_{{v_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\beta }_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{v_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{\beta }_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ {\Bigg\lceil {max\left( {\underline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\underline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right),}\,{min\left( {\overline{{Lim}} \left( {C_{{h_{{\tilde{A}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{T}_{{\tilde{A}}} }}{{2\pi }}} \right)}},\,\overline{{Lim}} \left( {C_{{h_{{\tilde{B}}} }} } \right)e^{{j2\pi .\left( {\frac{{\bar{T}_{{\tilde{B}}} }}{{2\pi }}} \right)}} } \right)} \Bigg\rfloor ,} \\ \end{array} } \right\}$$
(53)
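Stripped of the phase-angle bookkeeping, the union and intersection in Eqs. (52)–(53) act componentwise on rough intervals: the union takes the minimum of the lower limits and the maximum of the upper limits, while the intersection does the opposite. A minimal sketch with illustrative interval values:

```python
def interval_union(a, b):
    # Pattern of Eq. (52): min of the lower limits, max of the upper limits.
    return (min(a[0], b[0]), max(a[1], b[1]))

def interval_intersection(a, b):
    # Pattern of Eq. (53): max of the lower limits, min of the upper limits.
    return (max(a[0], b[0]), min(a[1], b[1]))

u = interval_union((0.3, 0.6), (0.4, 0.8))
n = interval_intersection((0.3, 0.6), (0.4, 0.8))
```

In Eqs. (52)–(53) this pattern is applied to every component (membership, neutral, non-membership, refusal) and to the corresponding phase-angle intervals.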

3.4 M-SWARA with quantum picture fuzzy rough sets

The SWARA method is referred to as a progressive weighting method in the multi-criteria decision-making literature. Its basis is the establishment of a hierarchical proportional structure in the evaluation of the criteria (Bouraima et al. 2024). M-SWARA is an extension of the SWARA method, and its details are presented under this heading (Rahadian et al. 2024; Mikhaylov et al. 2024).

Step 5 covers defining the criteria set for NFT management; the criteria affecting NFT management are identified from the literature. In Step 6, the dependency degrees between the criteria are elicited from the experts, whose linguistic opinions are collected to create the relation matrix. Step 7 involves constructing the QPFR relation matrix: the quantum picture fuzzy rough relation matrix is formulated by considering the linguistic evaluations of the decision makers. \(C={\left[{C}_{ij}\right]}_{n\times n}\) captures the relationships between the criteria, where \({C}_{ij}\) denotes the influence of criterion i over criterion j. The detail of the matrix is given by Eq. (54).

$${C}_{k}= \left[\begin{array}{cccccc}0& {C}_{12}& \cdots & & \cdots & {C}_{1n}\\ {C}_{21}& 0& \cdots & & \cdots & {C}_{2n}\\ \vdots & \vdots & \ddots & & \cdots & \cdots \\ \vdots & \vdots & \vdots & & \ddots & \vdots \\ {C}_{n1}& {C}_{n2}& \cdots & & \cdots & 0\end{array}\right]$$
(54)

where \({C}_{k}\) denotes the QPFR direct relation matrix provided by decision maker \(k\).

\(C_{{ij}} = \left( {\begin{array}{ll} {\Big\lceil {\underline{{Lim}} \left( {C_{{\mu _{{ij}} }} } \right),}\,{\overline{{Lim}} \left( {C_{{\mu _{{ij}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underline{\alpha } _{{ij}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\alpha }_{{ij}} }}{{2\pi }}} \right)} \Big\rfloor }},\,\Big\lceil {\underline{{Lim}} \left( {C_{{n_{{ij}} }} } \right),}\,{\overline{{Lim}} \left( {C_{{n_{{ij}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\gamma } _{{ij}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\gamma }_{{ij}} }}{{2\pi }}} \right)} \Big\rfloor }} ,~} \\ {\Big\lceil {\underline{{Lim}} \left( {C_{{v_{{ij}} }} } \right),}\,{\overline{{Lim}} \left( {C_{{v_{{ij}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } _{{ij}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{\beta }_{{ij}} }}{{2\pi }}} \right)} \Big\rfloor }},\,\Big\lceil {\underline{{Lim}} \left( {C_{{h_{{ij}} }} } \right),}\,{\overline{{Lim}} \left( {C_{{h_{{ij}} }} } \right)} \Big\rfloor ~e^{{j2\pi .\Big\lceil {\left( {\frac{{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{T} _{{ij}} }}{{2\pi }}} \right),} {\left( {\frac{{\bar{T}_{{ij}} }}{{2\pi }}} \right)} \Big\rfloor }} } \\ \end{array} } \right)\), and k is the number of decision makers. Step 8 is about defining QPFRS’s for the relationship matrix. The \(C\) matrix of the experts is calculated with QPFR numbers in Eq. (55).

$$\text{C}=\left(\begin{array}{c}\Bigg\lceil{min}_{i=1}^{k}\left(\underline{Lim}\left({C}_{{\mu }_{ij}}\right)\right),{max}_{i=1}^{k}\left(\overline{Lim}\left({C}_{{\mu }_{ij}}\right)\right)\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{min}_{i=1}^{k}\left(\frac{{\underset{\_}{\alpha }}_{ij}}{2\pi }\right),{max}_{i=1}^{k}\left(\frac{{\overline{\alpha }}_{ij}}{2\pi }\right)\Bigg\rfloor}, \\ \Bigg\lceil{min}_{i=1}^{k}\left(\underline{Lim}\left({C}_{{n}_{ij}}\right)\right),{max}_{i=1}^{k}\left(\overline{Lim}\left({C}_{{n}_{ij}}\right)\right)\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{min}_{i=1}^{k}\left(\frac{{\underset{\_}{\gamma }}_{ij}}{2\pi }\right),{max}_{i=1}^{k}\left(\frac{{\overline{\gamma }}_{ij}}{2\pi }\right)\Bigg\rfloor}, \\ \Bigg\lceil{min}_{i=1}^{k}\left(\underline{Lim}\left({C}_{{v}_{ij}}\right)\right),{max}_{i=1}^{k}\left(\overline{Lim}\left({C}_{{v}_{ij}}\right)\right)\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{min}_{i=1}^{k}\left(\frac{{\underset{\_}{\beta }}_{ij}}{2\pi }\right),{max}_{i=1}^{k}\left(\frac{{\overline{\beta }}_{ij}}{2\pi }\right)\Bigg\rfloor}, \\ \Bigg\lceil{min}_{i=1}^{k}\left(\underline{Lim}\left({C}_{{h}_{ij}}\right)\right),{max}_{i=1}^{k}\left(\overline{Lim}\left({C}_{{h}_{ij}}\right)\right)\Bigg\rfloor {e}^{j2\pi .\Bigg\lceil{min}_{i=1}^{k}\left(\frac{{\underline{T}}_{ij}}{2\pi }\right),{max}_{i=1}^{k}\left(\frac{{\overline{T}}_{ij}}{2\pi }\right)\Bigg\rfloor}\end{array}\right)$$
(55)
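The aggregation in Eq. (55) applies the same min–max pattern across the k experts to every component and phase angle; a sketch for a single rough interval, with illustrative expert values:

```python
def aggregate(intervals):
    """Eq. (55): fuse k experts' rough intervals for one matrix entry by
    taking the minimum of the lower limits and the maximum of the upper
    limits; the same pattern applies to each component and phase angle."""
    lowers, uppers = zip(*intervals)
    return (min(lowers), max(uppers))

# Rough membership intervals from three experts (illustrative)
fused = aggregate([(0.30, 0.50), (0.20, 0.60), (0.40, 0.55)])
```

The fused interval thus spans the full range of expert opinion for that entry.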

In Step 9, the defuzzified values are calculated: the defuzzified value \(Defc\) of a QPFRS is computed with Eq. (56).

$${Defc}_{i}=\frac{\left(\begin{array}{c}\underline{Lim}\left({C}_{{\mu }_{i}}\right)-\underline{Lim}\left({C}_{{n}_{i}}\right)+\underline{Lim}\left({C}_{{\mu }_{i}}\right).\left(\underline{Lim}\left({C}_{{v}_{i}}\right)-\underline{Lim}\left({C}_{{h}_{i}}\right)\right)+\left(\frac{{\underset{\_}{\alpha }}_{ij}}{2\pi }\right)-\left(\frac{{\underset{\_}{\gamma }}_{ij}}{2\pi }\right)+\left(\frac{{\underset{\_}{\alpha }}_{ij}}{2\pi }\right).\left(\left(\frac{{\underset{\_}{\beta }}_{ij}}{2\pi }\right)-\left(\frac{{\underline{T}}_{ij}}{2\pi }\right)\right)+\\ \overline{Lim}\left({C}_{{\mu }_{i}}\right)-\overline{Lim}\left({C}_{{n}_{i}}\right)+\overline{Lim}\left({C}_{{\mu }_{i}}\right).\left(\overline{Lim}\left({C}_{{v}_{i}}\right)-\overline{Lim}\left({C}_{{h}_{i}}\right)\right)+\left(\frac{{\overline{\alpha }}_{ij}}{2\pi }\right)-\left(\frac{{\overline{\gamma }}_{ij}}{2\pi }\right)+\left(\frac{{\overline{\alpha }}_{ij}}{2\pi }\right).\left(\left(\frac{{\overline{\beta }}_{ij}}{2\pi }\right)-\left(\frac{{\overline{T}}_{ij}}{2\pi }\right)\right)\end{array}\right)}{2}$$
(56)
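The defuzzification in Eq. (56) can be sketched as follows; the dictionary layout and the sample limits and phase angles are illustrative choices, not part of the original formulation.

```python
from math import pi

def defuzzify(lim, ang):
    """Eq. (56): defuzzified score of a QPFR number. `lim` maps each
    component (mu, n, v, h) to its (lower, upper) limits; `ang` maps each
    phase angle (alpha, gamma, beta, T) to its (lower, upper) values."""
    total = 0.0
    for side in (0, 1):  # lower bound first, then upper bound
        mu, n, v, h = (lim[c][side] for c in ("mu", "n", "v", "h"))
        a, g, b, t = (ang[c][side] / (2 * pi)
                      for c in ("alpha", "gamma", "beta", "T"))
        total += mu - n + mu * (v - h) + a - g + a * (b - t)
    return total / 2

score = defuzzify(
    {"mu": (0.6, 0.8), "n": (0.2, 0.3), "v": (0.1, 0.2), "h": (0.05, 0.1)},
    {"alpha": (0, 0), "gamma": (0, 0), "beta": (0, 0), "T": (0, 0)},
)
```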

In Step 10, the normalized relation matrix is obtained. Step 11 then continues with the calculation of the \({s}_{j}\), \({k}_{j}\), \({q}_{j}\), and \({w}_{j}\) values for the relationship value of each criterion, using Eqs. (57)–(59).

$$k_{j} = \left\{ {\begin{array}{ll} 1 & {j = 1} \\ {s_{j} + 1} & {j > 1} \\ \end{array} } \right.$$
(57)
$$q_{j} = \left\{ {\begin{array}{ll} 1 & {j = 1} \\ {\frac{{q_{{j - 1}} }}{{k_{j} }}} & {j > 1} \\ \end{array} } \right.$$
(58)

\(If\, {s}_{j-1}={s}_{j}, {q}_{j-1}={q}_{j}\); \(If\, {s}_{j}=0, {k}_{j-1}={k}_{j}\)

$${w}_{j}=\frac{{q}_{j}}{\sum_{k=1}^{n}{q}_{k}}$$
(59)

\({s}_{j}\) denotes the comparative importance rate obtained from the QPFRSs and provides the importance value of criterion \({c}_{j}\) over the following criterion \({c}_{j+1}\); \({k}_{j}\) represents the coefficient derived from \({s}_{j}\), and \({q}_{j}\) defines the weight recalculated from \({k}_{j}\). \({w}_{j}\) gives the weights of the criteria in the fuzzy sets, and these values are sorted. Step 12 involves constructing the stable relation matrix and the directions between the criteria. The stable values of the relation matrix with the \({w}_{j}\) values are obtained by transposing the matrix and taking its limit at the power 2t + 1, where t is an arbitrarily large integer. Thus, the weighting results \({w}_{j}\) are obtained through the stabilization process of the M-SWARA method. The impact-relation degrees of the criteria are determined with a threshold value equal to the average of the relation matrix entries: an entry greater than the threshold indicates that the criterion in the column is influenced by the criterion in the row (Mikhaylov et al. 2023). In this way, the impact directions of the criteria can be shown properly.
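The weighting chain in Eqs. (57)–(59), together with one plausible reading of the Step 12 stabilization (repeatedly raising the row-normalized relation matrix to odd powers until it converges), can be sketched as follows; the normalization choice and the sample \(s_j\) values are assumptions for illustration, not the study's data.

```python
import numpy as np

def swara_weights(s):
    """Eqs. (57)-(59): from the comparative importance rates s_j
    (the first entry is unused) to coefficients k_j, recalculated
    values q_j, and the final weights w_j."""
    k, q = [1.0], [1.0]               # j = 1
    for sj in s[1:]:                  # j > 1
        k.append(sj + 1.0)            # Eq. (57)
        q.append(q[-1] / k[-1])       # Eq. (58)
    total = sum(q)
    return [qj / total for qj in q]   # Eq. (59)

def stabilize(relation, t_max=100, tol=1e-10):
    """One plausible reading of Step 12: raise the row-normalized
    relation matrix to successive odd powers 2t+1 until its rows stop
    changing, then read the stable values from a row."""
    m = relation / relation.sum(axis=1, keepdims=True)
    prev = m.copy()
    for _ in range(t_max):
        nxt = prev @ m @ m            # next odd power of m
        done = np.allclose(nxt, prev, atol=tol)
        prev = nxt
        if done:
            break
    return prev[0]

w = swara_weights([0.0, 0.3, 0.2, 0.1])
```

As expected for SWARA, the weights decrease along the sorted criteria and sum to one.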

3.5 VIKOR with quantum picture fuzzy rough sets

VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) was introduced for determining an ideal solution based on consensus among alternatives; its aim is to rank the alternatives based on the decision matrix. In our proposed methodology, VIKOR is utilized alongside quantum picture fuzzy rough (QPFR) sets to handle uncertainty more accurately. The extended VIKOR method involves several steps: first, emotional expressions are collected for the alternatives, followed by the construction of the QPFR decision matrix. Then, QPFRS sets are determined based on the linguistic opinions of the experts, represented in the decision matrix. The defuzzified decision values are computed, and the Si, Ri, and Qi values are constructed to evaluate group utility and regret. The final scores of the alternatives are determined using a weighted approach, with the weight of the strategy of maximum group utility set at v = 0.5. Each case is considered, and the final ranking of the alternatives rests on fulfilling specific conditions ensuring consistency. If the conditions are not met, a compromise solution is preferred, and comparative ranking values are calculated to check the consistency of the results with a sensitivity analysis. This methodology integrates emotional expressions, linguistic evaluations, and QPFR sets to enhance the robustness and consistency of decision-making in uncertain environments. The compromise solution in the VIKOR method implies a consensus reached through mutual concessions (Biswas et al. 2024; Biswas et al. 2023). The steps of the extended VIKOR method are given below.

In Step 13, emotional expressions are collected for the alternatives, and the linguistic evaluations are obtained. Step 14 is about constructing the QPFR numbers for the decision matrix. In Step 15, the QPFRS sets are determined for the decision matrix. The decision matrix is defined as X, with \({X}_{ij}\) representing alternative i with respect to criterion j based on the QPFRN-based linguistic opinions of the experts. The matrix is illustrated by Eq. (60).

$${X}_{k}=\left[\begin{array}{cccc}{X}_{11}& {X}_{12}& \cdots & {X}_{1m}\\ {X}_{21}& {X}_{22}& \cdots & {X}_{2m}\\ \vdots & \vdots & \ddots & \vdots \\ {X}_{n1}& {X}_{n2}& \cdots & {X}_{nm}\end{array}\right]$$
(60)

Step 16 involves computing the defuzzified decision values: the defuzzified values of the QPFRSs in the decision matrix are obtained with the help of Eq. (56). In Step 17, the Si, Ri and Qi values are constructed. First, the fj values need to be calculated: the best \({\widetilde{f}}_{j}^{*}\) and worst \({\widetilde{f}}_{j}^{-}\) values for each criterion function are calculated by Eq. (61).

$${\widetilde{f}}_{J}^{*}=\underset{i}{max}{\widetilde{x}}_{ij},\text{ and }{\widetilde{f}}_{j}^{-}=\underset{i}{min}{\widetilde{x}}_{ij},$$
(61)

The mean group utility and maximal regret are calculated with Eqs. (62) and (63).

$$\tilde{S}_{i} = \sum\limits_{{j = 1}}^{n} {\tilde{w}_{j} } \frac{{\left( {\left| {\tilde{f}_{j} ^{*} - \tilde{x}_{{ij}} } \right|} \right)}}{{\left( {\left| {\tilde{f}_{j} ^{*} - \tilde{f}_{j} ^{ - } } \right|} \right)}}$$
(62)
$${\widetilde{R}}_{i}={\mathit{max}}_{j}\left[{\widetilde{w}}_{j}\frac{\left(\left|{{\widetilde{f}}_{j}}^{*}-{\widetilde{x}}_{ij}\right|\right)}{\left(\left|{{\widetilde{f}}_{j}}^{*}-{{\widetilde{f}}_{j}}^{-}\right|\right)}\right]$$
(63)

The final scores of the alternatives are obtained using Eq. (64).

$${\widetilde{Q}}_{i}=v\left({\widetilde{S}}_{i}-{\widetilde{S}}^{*}\right)/\left({\widetilde{S}}^{-}-{\widetilde{S}}^{*}\right)+\left(1-v\right)\left({\widetilde{R}}_{i}-{\widetilde{R}}^{*}\right)/\left({\widetilde{R}}^{-}-{\widetilde{R}}^{*}\right)$$
(64)

The weight of the strategy of maximum group utility is v = 0.5, and the weight of individual regret is (1 − v). Each case is considered. The final ranking of the alternatives requires the fulfillment of two conditions once the values of S, R, and Q are sorted. Condition 1 is calculated with Eq. (65); it compares the alternatives ranked first and second according to the Q value. The second condition requires that the best alternative also be ranked first by S or R, or both.

$$Q\left({A}^{(2)}\right)-Q\left({A}^{(1)}\right)\ge 1/\left(j-1\right)$$
(65)

If either of these conditions is not met, a set of compromise solutions is preferred; these alternatives are selected based on their close ranking scores, determined with the maximum value of M (Yue 2024). In Step 18, comparative ranking values are calculated to check the consistency of the results with a sensitivity analysis.
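The VIKOR computations in Eqs. (61)–(65) on a defuzzified decision matrix can be sketched as follows; the sample matrix, the weights, and the assumption that all criteria are benefit-type are illustrative.

```python
def vikor(matrix, weights, v=0.5):
    """Eqs. (61)-(64) on a defuzzified decision matrix (rows: alternatives,
    columns: benefit criteria). Returns S (group utility), R (individual
    regret) and Q (compromise score); a lower Q ranks better."""
    m = len(weights)
    f_best = [max(row[j] for row in matrix) for j in range(m)]   # Eq. (61)
    f_worst = [min(row[j] for row in matrix) for j in range(m)]
    S, R = [], []
    for row in matrix:
        d = [weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
             for j in range(m)]
        S.append(sum(d))                                         # Eq. (62)
        R.append(max(d))                                         # Eq. (63)
    Q = [v * (S[i] - min(S)) / (max(S) - min(S))
         + (1 - v) * (R[i] - min(R)) / (max(R) - min(R))
         for i in range(len(matrix))]                            # Eq. (64)
    return S, R, Q

def acceptable_advantage(Q_sorted, n_alternatives):
    # Condition 1 (Eq. 65): the runner-up must trail by at least 1/(J-1).
    return Q_sorted[1] - Q_sorted[0] >= 1 / (n_alternatives - 1)

# Three alternatives scored on two criteria (illustrative values)
S, R, Q = vikor([[0.9, 0.7], [0.5, 0.9], [0.3, 0.4]], [0.6, 0.4])
```

In this toy instance the first alternative ranks best (Q = 0), but its advantage over the runner-up is below 1/(J − 1), so a compromise set would be preferred.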

4 Analysis

In this section, the findings obtained in determining strategies for NFT management in the Metaverse are presented under subtitles. The analysis explains the step-by-step application of the methodology employed for decision-making in selecting identity management for NFTs in the Metaverse. The methodology involves two main stages: criteria determination and strategy evaluation. The criteria are weighted using the M-SWARA method, followed by strategy evaluation using the VIKOR method, integrating quantum picture fuzzy rough (QPFR) sets to handle uncertainty. Additionally, AI-based expert prioritization using k-means clustering is utilized, with further consideration of demographic diversity. Neuro-computing with facial expressions aids the decision-making, while QPFR sets with golden cuts model the uncertainty. The following subsections provide the detailed equations and explanations, presenting a comprehensive decision-making methodology that incorporates these techniques and considerations.

For this purpose, the primary focus of our investigation is to understand the strategies for NFT management in the Metaverse based on expert prioritization. At the first stage, the similarities among the experts are discerned based on their specific competencies rather than their demographic characteristics. Emphasizing expertise over demographic attributes is pivotal in ensuring the accuracy and reliability of our findings. The aim is to prioritize expertise-driven criteria in the expert ranking process, thereby bolstering the discernment of homogeneous expert clusters. Accordingly, our prioritization defines the least relevant expert among the decision-maker group, which facilitates a nuanced understanding of which expert holds the least weight in the group.

4.1 Prioritizing the expert choices with AI-based decision-making method

We aim to discern which cohorts are most adept at providing nuanced evaluations pertinent to decision-making, finance, and the metaverse by categorizing experts into three distinct industry groups: university, service, and production. Consequently, our endeavor is to ascertain the industry cohorts whose evaluations contribute most significantly to the research discourse, thereby facilitating informed decision-making processes. Accordingly, five experts with academic or industry experience are selected, and artificial intelligence is used to determine which of these experts to include in the analysis. Firstly, in Step 1, the demographic information of the experts is defined. The demographic information of the experts is displayed in Table 1.

Table 1 Specifications of the decision makers

In Step 2, the optimal k value for the cluster analysis is calculated using Eq. (1). The WCSS values are computed for different numbers of clusters; in this case, for k from 1 to 5, the resulting WCSS values are presented in Table 9. The elbow point is then used to select the optimal number of clusters for the dataset. Accordingly, Fig. 2 illustrates the plot of the WCSS values against the number of clusters k.

Fig. 2 The plot of the WCSS values against the number of clusters k

According to the results, the elbow occurs at K = 3: adding more clusters beyond this point does not significantly reduce the WCSS value, so K = 3 is selected. In Step 3, the k-means clustering algorithm is applied to the decision makers with Eqs. (2) and (3), using the optimal value K = 3 to define the clusters. The iteration results for the three clusters are given in Table 10. As seen in Table 10, the cluster assignments after iteration 1 coincide with the initial cluster centers and the averages of the data points. Accordingly, DM1 and DM5 are placed in cluster 1, DM2 and DM4 in cluster 2, and DM3 in cluster 3. In Step 4, the weights of the decision makers are computed from the cluster weights in Table 11, using Eqs. (4)–(8).
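The elbow procedure of Steps 2 and 3 can be sketched in Python. This is a minimal, hedged illustration only: Eqs. (1)–(3) and the experts' actual feature vectors are not reproduced, so the one-dimensional "competence scores" assumed below for DM1–DM5 are hypothetical.

```python
# Hedged sketch of the elbow procedure in Steps 2-3. The experts' real
# feature vectors and Eqs. (1)-(3) are not reproduced here, so the
# one-dimensional "competence scores" for DM1..DM5 below are hypothetical.
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny 1-D k-means; returns (centers, assignments)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers drawn from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: (p - centers[c]) ** 2)].append(p)
        # recompute centers; empty clusters keep their previous center
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    assign = [min(range(k), key=lambda c: (p - centers[c]) ** 2) for p in points]
    return centers, assign

def wcss(points, centers, assign):
    """Within-cluster sum of squares, the quantity plotted in Fig. 2."""
    return sum((p - centers[a]) ** 2 for p, a in zip(points, assign))

scores = [0.9, 0.4, 0.1, 0.4, 0.9]           # hypothetical DM1..DM5 scores
for k in range(1, 6):
    c, a = kmeans(scores, k)
    print(k, round(wcss(scores, c, a), 4))   # WCSS flattens after k = 3
```

With scores shaped like the paper's clustering result ({DM1, DM5}, {DM2, DM4}, {DM3}), the WCSS drops sharply up to k = 3 and is essentially flat afterwards, which is exactly the elbow criterion used to fix K = 3.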

Furthermore, the normalization of the expert dataset is integral to mitigating potential biases and ensuring equitable weighting across decision makers. To this end, we incorporate the Pareto principle into the normalization process, thereby facilitating a more uniform distribution of weights and enhancing the consistency of evaluations among experts, as seen in Table 2. The Pareto principle, also known as the 80/20 rule, is used in various fields, including decision-making and resource allocation; it states that, for many phenomena, about 80% of the consequences are produced by 20% of the causes, and it therefore helps identify the most significant factors in a dataset. This approach enhances the reliability of our analysis and supports its methodological rigor. The weights and the normalized values of the decision makers for expert prioritization are given in Table 2.

Table 2 Weights of the decision makers

Table 2 shows that DM1 and DM5 have the highest priorities, with a weight of 0.48, while DM2 and DM4 have relatively the weakest priorities among the decision makers; DM3 has no priority in the expert team. Similarly, the normalized values are 0.40 for DM1 and DM5 and 0.10 for DM2 and DM4. Accordingly, in the decision-making process, the linguistic evaluations of the criteria and alternatives are collected from all decision makers except DM3.
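The final normalization reported in Table 2 can be sketched as follows. Note that Eqs. (4)–(8) are not reproduced: only the 0.48 raw weights and the normalized values appear in the paper, so the raw weights of 0.12 assumed for DM2 and DM4 are hypothetical placeholders chosen so that the normalized results match Table 2 (0.40, 0.10, 0.00, 0.10, 0.40).

```python
# Hedged sketch of the weight normalization behind Table 2. Only the
# 0.48 raw weights and the normalized values are reported in the paper;
# the 0.12 raw weights for DM2/DM4 are hypothetical placeholders.
raw = {"DM1": 0.48, "DM2": 0.12, "DM3": 0.0, "DM4": 0.12, "DM5": 0.48}

total = sum(raw.values())
normalized = {dm: w / total for dm, w in raw.items()}

# experts with zero weight (here DM3) are excluded from the evaluations
active = [dm for dm, w in normalized.items() if w > 0]

print(normalized)  # DM1/DM5 -> 0.40, DM2/DM4 -> 0.10, DM3 -> 0.0
print(active)
```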

4.2 Weighting the criteria set for managing non-fungible tokens with QPFR-M-SWARA

Step 5 defines the criteria set for NFT management. As a result of the literature review, four criteria that affect NFT management are determined; they are given in Table 3.

Table 3 Criteria set for managing Non-Fungible Tokens

Step 6 involves collecting emotional expressions for the criteria; in this process, opinions are taken from the selected experts. The facial action coding system, which describes facial expressions through combinations of action units (AUs), offers a structured way to analyze the emotions conveyed by the observed facial muscle movements. In this study, each emotion is associated with specific AU combinations, covering contempt, surprise, happiness, and intermediate emotions. The AU combinations and the related linguistic scales for the criteria and alternatives are given in Table 12. Based on the facial observations, the AU combinations for each criterion assessment are collected from the decision makers and detailed in Table 4.

Table 4 Observed action unit combinations of emotional expressions
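The AU-based emotion coding of Step 6 can be illustrated with a small lookup. Table 12's exact mapping is not reproduced here: the combinations below are the commonly cited FACS prototypes (happiness = AU6+AU12, surprise = AU1+AU2+AU5+AU26, contempt = unilateral AU12+AU14), and treating any unlisted combination as an intermediate emotion is an assumption of this sketch.

```python
# Hedged sketch of mapping observed AU combinations to emotion labels.
# The prototype combinations follow common FACS usage; the fallback to
# "intermediate" for unlisted combinations is an assumption.
EMOTION_BY_AUS = {
    frozenset({6, 12}): "happiness",          # cheek raiser + lip corner puller
    frozenset({1, 2, 5, 26}): "surprise",     # brow raisers + upper lid + jaw drop
    frozenset({"R12A", "R14A"}): "contempt",  # unilateral lip corner + dimpler
}

def classify(observed_aus):
    """Return the emotion label for an observed AU combination."""
    return EMOTION_BY_AUS.get(frozenset(observed_aus), "intermediate")

print(classify({6, 12}))        # happiness
print(classify({4, 7}))         # intermediate (not a listed prototype)
```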

In Step 7, the QPFR relation matrix is constructed using Eq. (54); the results are given in Table 13. In Step 8, the quantum picture fuzzy rough sets for the relation matrix are determined with Eq. (55); Table 14 presents the relevant results. In Step 9, the values for the criteria are defuzzified using Eq. (56); the defuzzified values are shown in Table 15. In Step 10, the relation matrix is normalized with Eqs. (57)–(59); the results are exhibited in Table 16. In Step 11, the sj, kj, qj, and wj values are calculated and given in Table 17. In Step 12, the relation matrix and the directions among the criteria are constructed; the details are presented in Table 18. Finally, the matrix is stabilized, and the result is shown in Table 5.

Table 5 Stable matrix

According to Table 5, the most important criterion is SEC, followed by TBI, as these two criteria have the highest stable matrix values. MBS ranks last in the order of importance.
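The weighting chain of Step 11 can be sketched with the classical SWARA recursion k_j = s_j + 1, q_j = q_{j-1}/k_j, w_j = q_j / Σq. The s_j inputs below are hypothetical (the paper's defuzzified values are in Table 17), and the M-SWARA extension additionally builds the relation matrix and causal directions of Step 12, which this sketch omits.

```python
# Hedged sketch of the classical SWARA chain behind the s_j, k_j, q_j, w_j
# values of Step 11; the s_j inputs below are hypothetical.
def swara_weights(s):
    """s: comparative importance scores, ordered from the most important
    criterion down; the first entry is ignored (k_1 = q_1 = 1)."""
    k, q = [], []
    for j, sj in enumerate(s):
        kj = 1.0 if j == 0 else sj + 1.0
        k.append(kj)
        q.append(1.0 if j == 0 else q[-1] / kj)
    total = sum(q)
    return [qj / total for qj in q]

w = swara_weights([0.0, 0.20, 0.15, 0.30])  # hypothetical s_j for 4 criteria
print([round(x, 3) for x in w])             # descending weights summing to 1
```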

4.3 Ranking the identity management choices for non-fungible tokens in the Metaverse with QPFR-VIKOR

In Step 13, emotional expressions for the alternatives are collected. For this purpose, the alternatives are determined first; they are presented with their codes in Table 6.

Table 6 Strategies of managing NFT in Metaverse

Then, the observed action unit combinations of emotional expressions for the alternatives are obtained using Eq. (60); the obtained values are presented in Table 7.

Table 7 Observed action unit combinations of emotional expressions for the alternatives

In Step 14, the QPFR decision matrix is constructed in Table 19. Step 15 involves determining the quantum picture fuzzy rough sets for the decision matrix, given in Table 20. In Step 16, the defuzzified decision values in Table 21 are calculated. In Step 17, the S, R, and Q values are constructed using Eqs. (61)–(65) and given in Table 22. Step 18 compares the results with a sensitivity analysis: the results of the two methods are compared across four different cases, and the outcomes are shared in Table 8.

Table 8 Comparative ranking values with sensitivity analysis

In the cases of maximum group utility, veto, and consensus, biometrics for unique identification has the best ranking performance among the alternatives. Privacy with authentication also plays a critical role in the effectiveness of this process.
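The S, R, and Q computation of Step 17 can be sketched with the classical (non-fuzzy) VIKOR formulas S_i = Σ_j w_j (f_j* − f_ij)/(f_j* − f_j−), R_i = max_j of that term, and Q_i = v(S_i − S*)/(S− − S*) + (1 − v)(R_i − R*)/(R− − R*). The 3×2 decision matrix and weights below are hypothetical (the paper's defuzzified matrix is in Table 21), and v = 0.5 corresponds to the consensus case.

```python
# Hedged sketch of classical VIKOR for benefit criteria; the decision
# matrix and weights are hypothetical. Ties in S or R (zero denominators)
# are not handled in this minimal version.
def vikor(matrix, weights, v=0.5):
    n = len(weights)
    best = [max(row[j] for row in matrix) for j in range(n)]
    worst = [min(row[j] for row in matrix) for j in range(n)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))     # group utility
        R.append(max(terms))     # individual regret (veto)
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
         for i in range(len(matrix))]
    return S, R, Q

m = [[0.9, 0.7], [0.6, 0.8], [0.4, 0.5]]       # 3 alternatives x 2 criteria
S, R, Q = vikor(m, [0.6, 0.4])
print(min(range(len(Q)), key=lambda i: Q[i]))  # index of the best compromise
```

The alternative with the smallest Q is the best compromise; rerunning with different v values (v = 1 for maximum group utility, v = 0 for veto) reproduces the kind of sensitivity analysis summarized in Table 8.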

5 Discussion

It has been determined that security must be ensured first to increase the use of non-fungible tokens on the Metaverse platform. A secure platform allows users to feel safe when using these tokens: if personal data can be securely protected, investors' anxiety decreases and they prefer this platform more. Moreover, non-fungible tokens can represent assets of very high value, and platform security minimizes the risk of theft of these assets. These tokens also contribute to ensuring payment security. Together, these factors make investors more willing to prefer this platform. In this context, it is appropriate to take some precautions to increase security. For example, authentication methods need to be strengthened to protect user accounts. Cruz et al. (2024) mentioned that the application of a double control system helps minimize the risks in the process. On the other hand, Lekhi (2024) underlined that security audits should be performed on the system at regular intervals; based on the findings of these audits, more effective security measures should be designed. Similarly, Kizza (2024) identified that educating users on security issues raises awareness about these measures, which helps to significantly reduce security problems.

It is also identified that the technological infrastructure must be sufficient to increase the use of non-fungible tokens on the Metaverse platform. A strong technological infrastructure allows possible problems to be solved more quickly, and businesses must have sufficient infrastructure so that many people can operate on this platform smoothly. Moreover, developing the technological infrastructure helps transactions on the blockchain system to be implemented more securely, which allows the trading volume on the platform to be increased. For these reasons, businesses need to take some actions to improve their technological infrastructure. In this context, Messina et al. (2024) concluded that performance analysis should be carried out through regular checks on the platform; as a result of these analyses, the necessary steps can be taken to use more up-to-date technology. Additionally, Alt et al. (2024) and Pineda et al. (2024) stated that appropriate mobile applications should be developed so that users can access the platform via mobile devices, which allows more investors to trade on this platform.

6 Conclusion

In this study, it is aimed to identify the appropriate identity management choices for non-fungible tokens in the Metaverse. The proposed novel fuzzy decision-making model has three stages. The first stage prioritizes the expert choices with an artificial intelligence-based decision-making methodology. In the second stage, the criteria set for managing non-fungible tokens is weighted using the quantum picture fuzzy rough set-based M-SWARA methodology. Thirdly, the identity management choices regarding non-fungible tokens in the Metaverse are ranked with quantum picture fuzzy rough set-oriented VIKOR. It is concluded that security must be ensured first to increase the use of non-fungible tokens on the Metaverse platform, and that the technological infrastructure must also be sufficient to achieve this objective. Moreover, biometrics for unique identification has the best ranking performance among the alternatives, while privacy with authentication also plays a critical role in the effectiveness of this process.

To increase the effectiveness of NFT projects, it is important to increase security. To achieve this goal, some policy implications can be taken into consideration. Conducting independent audits is of vital importance in this process, as flaws in the projects can then be clearly identified and security measures implemented effectively. On the other hand, authentication methods can be made more comprehensive so that users' accounts are protected and fraud attempts are prevented. In addition, more secure payment methods should be preferred: thanks to encrypted payment methods, fraudulent transactions can be minimized, which also contributes to increasing investors' confidence in the projects. Moreover, IT software should be constantly updated, since using new software can increase security in the process. Finally, providing training to users also increases the security of NFT projects; through these trainings, customers will be better informed about these processes, making it much more possible to prevent fraud attempts.

The main contribution of this study is that an artificial intelligence methodology is integrated into the fuzzy decision-making model to differentiate the experts. In this way, clusters of experts can be created, and the opinions of experts outside these clusters can be excluded from the scope. Similarly, the M-SWARA methodology is preferred for weighting the determinants, since it allows the causal directions among these items to be considered. The proposed model can also be used to solve critical problems in other industries. The main objective of businesses is to increase their profitability, yet businesses have many income and expense items; they therefore need to make the right decisions regarding these complex, multi-factorial issues to determine effective financial strategies. Such strategies can be determined successfully with the fuzzy multi-criteria decision-making model developed in this study. While the model was developed here for the efficiency of the financial sector, it can also be taken into account in making strategic decisions for other sectors such as textiles, energy, and automobiles.

The main limitation of this study is that it makes a general evaluation of the effectiveness of non-fungible tokens on the Metaverse platform. In future studies, an industry-specific examination can also be performed; with the help of these analysis results, more specific strategies can be presented for these industries. On the other hand, there are also some limitations in the proposed model. In this study, an artificial intelligence methodology is integrated into the fuzzy decision-making model to differentiate experts; however, the weights of these experts are not calculated. Therefore, as a future research direction, these weights can also be identified by considering artificial intelligence theory.