Time management is important and can affect an individual's overall performance and achievements. Students often report that they do not have enough time to complete all the tasks assigned to them. In addition, the flexibility and freedom of a university environment can derail students who have not mastered time management skills. Therefore, the aim of this study is to determine the relationship between time management and the academic achievement of students. Factor analysis identified three main factors associated with time management, which can be classified as time planning, time attitudes and time wasting. The results also indicated no significant differences in time management behaviours by gender or race, whereas year of study and faculty showed significant differences. All time management behaviours were significantly and positively related to students' academic achievement, although the relationships were weak, with time planning being the most strongly correlated predictor.

S N A M Razali et al 2018 J. Phys.: Conf. Ser. 995 012042
M R Ab Hamid et al 2017 J. Phys.: Conf. Ser. 890 012163
Assessment of discriminant validity is a must in any research that involves latent variables, to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method for establishing discriminant validity has emerged: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study involving 429 respondents were used for the empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity became an issue when the HTMT criterion was employed. This shows that the latent variables under study face a multicollinearity issue that should be examined in further detail, and it implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in discriminant validity and should be explored further.
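As an illustration of the HTMT criterion mentioned above, the following is a minimal sketch (not the authors' code) of how the ratio can be computed for a pair of constructs from an item-level correlation matrix; the item groupings and data in the commented example are hypothetical.

```python
import numpy as np

def htmt(corr, items_i, items_j):
    """Heterotrait-monotrait ratio of correlations for two constructs.

    corr    : full item-level correlation matrix (2-D numpy array)
    items_i : column indices of the items measuring construct i
    items_j : column indices of the items measuring construct j
    """
    # mean correlation between items of different constructs (heterotrait-heteromethod)
    hetero = corr[np.ix_(items_i, items_j)].mean()

    # mean within-construct correlation, excluding the diagonal (monotrait-heteromethod)
    def mono(items):
        block = corr[np.ix_(items, items)]
        return block[~np.eye(len(items), dtype=bool)].mean()

    return hetero / np.sqrt(mono(items_i) * mono(items_j))

# hypothetical example: items 0-2 measure construct A, items 3-5 measure construct B
# rng = np.random.default_rng(0)
# data = rng.normal(size=(200, 6))
# print(htmt(np.corrcoef(data, rowvar=False), [0, 1, 2], [3, 4, 5]))
```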
Qingbing Ji and Hao Yin 2020 J. Phys.: Conf. Ser. 1673 012047
The WinRAR3 encryption mode that does not encrypt file names combines encryption with compression, so password recovery is highly complex. Existing cracking systems run on a single CPU or GPU platform; because the decryption algorithm is slow on the CPU while the decompression algorithm is slow on the GPU, the overall performance of such cracking algorithms is low. This paper studies CPU-GPU collaborative computing and proposes an efficient cracking method for encrypted WinRAR3 archives with unencrypted file names. A CPU + GPU pipeline reduces waiting time during the computation and improves the performance of the algorithm, and matching the magic numbers of the compressed files effectively reduces the decompression workload. The experimental results show that the proposed cracking algorithm reaches a speed of 24423 candidate passwords per second for 8-character passwords, 2.3 times faster than before.
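The "magic number" filter described above can be sketched roughly as follows: the leading bytes recovered for a candidate password are compared with known file signatures, so that the expensive full decompression and verification is only attempted for plausible candidates. The signatures and helper below are common illustrative examples, not WinRAR3 internals.

```python
# Illustrative file signatures ("magic numbers"); real systems would use the
# signatures of the file types actually stored in the archive.
MAGIC_NUMBERS = {
    b"%PDF": "pdf",
    b"\x89PNG": "png",
    b"PK\x03\x04": "zip/office",
    b"\xff\xd8\xff": "jpeg",
}

def plausible_plaintext(first_bytes: bytes) -> bool:
    """Return True if the recovered header starts with any known file signature."""
    return any(first_bytes.startswith(sig) for sig in MAGIC_NUMBERS)
```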
Gerd Ehret et al 2013 J. Phys.: Conf. Ser. 425 152016
Highly accurate flatness measurements are needed for synchrotron optics, optical flats, or optical mirrors. Recently, two new scanning deflectometric flatness measurement systems have been installed at the Physikalisch-Technische Bundesanstalt (PTB). The two systems (one for horizontal and the other for vertical specimens) can measure specimens with sizes up to one metre with an expected uncertainty in the sub-nanometre range. In addition to the classical deflectometric procedure, the 'extended shear angle difference (ESAD)' and 'exact autocollimation deflectometric scanning (EADS)' procedures are also implemented. The lateral resolution of scanning deflectometric techniques is limited by the aperture of the angle measurement system, usually an autocollimator with typical apertures of a few millimetres. With the EADS procedure, the specimen is scanned with an angular null instrument, which has the potential to improve the lateral resolution down to the sub-millimetre region. A new concept and design of an appropriate angular null instrument are presented and discussed.
Xue Ying 2019 J. Phys.: Conf. Ser. 1168 022022
Overfitting is a fundamental issue in supervised machine learning which prevents models from generalizing well from observed training data to unseen data in the test set. Overfitting arises from the presence of noise, the limited size of the training set, and the complexity of classifiers. This paper discusses overfitting from the perspectives of its causes and its solutions. To reduce the effects of overfitting, various strategies are proposed to address these causes: 1) an “early-stopping” strategy prevents overfitting by stopping training before performance stops improving; 2) a “network-reduction” strategy excludes noise from the training set; 3) a “data-expansion” strategy is proposed for complicated models, fine-tuning hyper-parameter sets with a large amount of data; and 4) a “regularization” strategy preserves model performance on real-world problems through feature selection, distinguishing more useful from less useful features.
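As a concrete illustration of the first strategy, the following is a self-contained sketch of early stopping on a toy gradient-descent regression; the data, learning rate and patience value are arbitrary choices made for the example, not taken from the paper.

```python
import numpy as np

# toy data split into a training and a validation set
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

w = np.zeros(5)
best_loss, best_w, stall, patience = np.inf, w.copy(), 0, 10
for epoch in range(1000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # gradient of the training MSE
    w -= 0.05 * grad
    val_loss = np.mean((X_val @ w - y_val) ** 2)    # held-out validation loss
    if val_loss < best_loss - 1e-9:
        best_loss, best_w, stall = val_loss, w.copy(), 0
    else:
        stall += 1
        if stall >= patience:                       # stop before overfitting sets in
            break
w = best_w                                          # keep the best-performing weights
```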
J Bethanney Janney et al 2021 J. Phys.: Conf. Ser. 1937 012034
The physiological condition of the cardiovascular system is analyzed from the arterial blood pressure pulse wave. The arterial pulse wave reflects the genetic traits of the heart, the average record of a heartbeat and the variation in pressure as the heart ejects blood. Pulse monitoring is a standard process used to assess the cardiovascular system's medical history. An arterial blood pressure waveform usually comprises a systolic peak, a diastolic phase, a dicrotic spike and a dicrotic notch. The contraction and relaxation of the cardiac chambers lead to systolic and diastolic blood pressure, respectively. The dicrotic notch, a drop on the down slope, marks the termination of systole and depicts the closure of the aorta against the subsequent backward stream. The position of the dicrotic notch within the cardiac cycle varies with the duration of aortic closure. The dicrotic notch plays an essential part in diagnostic tests for sclerosis, occlusion, stenosis, arterial spasm and erythromelalgia. Hence the discrete wavelet transform is utilized in this work to examine and assess the dicrotic notch in the arterial pulse waveform. Arterial pulse data are processed using a data acquisition system consisting of multi-channel sensor signal processing and a computer to collect the data needed for further examination. A uniform peer group of 22 patients was evaluated using two distinct wavelet transforms, Haar and Daubechies 4 (db4). The peripheral wave in the patients, which shows a sharp rise and a notch on the falling slope, was identified. The data collected are compared between the two techniques, and the Haar wavelet is observed to represent the best outcome.
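A minimal sketch of the wavelet decomposition step, assuming the PyWavelets package and a 1-D array of pulse samples (the authors' actual processing chain and notch-localisation step are not reproduced here):

```python
import numpy as np
import pywt

def decompose(pulse, wavelet="db4", level=4):
    """Multilevel discrete wavelet decomposition of a 1-D pulse waveform."""
    return pywt.wavedec(np.asarray(pulse, dtype=float), wavelet, level=level)

# hypothetical usage on an acquired pulse record:
# coeffs_haar = decompose(pulse_samples, "haar")
# coeffs_db4  = decompose(pulse_samples, "db4")
```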
Dian Rachmawati and Lysander Gustin 2020 J. Phys.: Conf. Ser. 1566 012061
Finding the shortest path efficiently is essential. To solve the shortest path problem, Dijkstra's algorithm or the A* algorithm is usually used; these two algorithms are often applied in routing and road networks. The objective of this paper is to compare the two algorithms on the shortest path problem. In this research, Dijkstra and A* show almost the same performance on town- or regional-scale maps, but A* performs better on large-scale maps.
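For reference, a compact A* search is sketched below; with a zero heuristic it reduces to Dijkstra's algorithm, which is why the two behave similarly on small maps while a good heuristic lets A* expand fewer nodes on large ones. The graph format and example usage are illustrative assumptions, not the paper's implementation.

```python
import heapq

def a_star(graph, start, goal, h=lambda n: 0):
    """graph: dict mapping node -> list of (neighbour, edge_cost) pairs."""
    frontier = [(h(start), 0, start, [start])]      # (f = g + h, g, node, path)
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return float("inf"), []

# hypothetical road network; with the default h == 0 this is Dijkstra's algorithm
# roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1)], "B": []}
# print(a_star(roads, "A", "B"))
```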
Noor I. Jalal et al 2021 J. Phys.: Conf. Ser. 1973 012015
The importance of supercapacitors (SCs) stems from their distinctive properties, including long cycle life, high strength and environmental friendliness. They share the same fundamental equations as traditional capacitors; to attain high capacitances, SCs use electrode materials with thinner dielectrics and high specific surface area. In this review paper, all types of SCs are covered according to their energy storage mechanism, and a brief overview of the materials and technologies used for SCs is presented. The main focus is on materials such as metal oxides, carbon materials and conducting polymers, along with their composites. The composites' performance is examined through parameters such as capacitance, energy, cyclic performance, power and rate capability, and details regarding the electrolyte materials are also presented.
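The fundamental equations shared with conventional capacitors are the standard relations below (a general reminder, not reproduced from the review), where R_ESR denotes the equivalent series resistance:

```latex
% Standard capacitor relations behind the supercapacitor figures of merit
C = \frac{\varepsilon A}{d}, \qquad
E = \tfrac{1}{2} C V^{2}, \qquad
P_{\max} = \frac{V^{2}}{4 R_{\mathrm{ESR}}}
```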
Jamal I. Daoud 2017 J. Phys.: Conf. Ser. 949 012009
In regression analysis it is natural to have a correlation between the response and the predictor(s), but correlation among the predictors is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data, experience, etc. In the end the selection of the most important predictors is somewhat subjective, depending on the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from 0. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes and its consequences for the reliability of the regression model.
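A common diagnostic for the multicollinearity discussed above is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 is obtained by regressing predictor j on the remaining predictors. The sketch below (not from the paper) computes it with ordinary least squares.

```python
import numpy as np

def vif(X):
    """X: (n_samples, n_predictors) design matrix; returns one VIF per column."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # regress predictor j on all the other predictors (with an intercept)
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# hypothetical usage: vif(np.random.default_rng(0).normal(size=(100, 4)))
```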
Jafar Alzubi et al 2018 J. Phys.: Conf. Ser. 1142 012012
The current SMAC (Social, Mobile, Analytic, Cloud) technology trend paves the way to a future in which intelligent machines, networked processes and big data are brought together. This virtual world has generated vast amounts of data, which is accelerating the adoption of machine learning solutions and practices. Machine learning enables computers to imitate and adapt human-like behaviour. Using machine learning, each interaction and each action performed becomes something the system can learn from and use as experience the next time. This work is an overview of this data analytics method, which enables computers to learn and do what comes naturally to humans, i.e. learn from experience. It covers the preliminaries of machine learning: its definition, nomenclature and applications, describing its what, how and why. The technology roadmap of machine learning is discussed to understand and verify its potential as a market and industry practice. The primary intent of this work is to give insight into why machine learning is the future.
2026 J. Phys.: Conf. Ser. 3172 011001
XXVII Biennial Symposium on Measuring Techniques in Turbomachinery
Larnaca, Cyprus, April 29 - 30, 2024
Editorial Preface
Turbomachinery plays a crucial role not only in transportation but also in power generation. Although the energy sector has recently shifted toward renewable sources such as wind and hydro turbines, large industrial machines—particularly gas turbines—will remain essential components of a balanced and reliable energy mix for the foreseeable future. At the same time, the renewed global interest in nuclear energy is bringing steam turbines back into focus as key technologies for future power production.
With sustainability becoming an ever-stronger priority, the push to improve turbomachine efficiency and to increase the power-to-weight ratio of aircraft engines is more intense than ever. Achieving these goals relies fundamentally on high-quality data gathered both from operational machines in the field and from controlled test rigs in research laboratories.
Lists of the Editorial Board, Organizing Committee, Senior Scientific and Advising Committee and Local Organizing Committee are available in this PDF.
2026 J. Phys.: Conf. Ser. 3172 011002
All papers published in this volume have been reviewed through processes administered by the Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.
• Type of peer review: Single Anonymous
• Conference submission management system: Morressier
• Number of submissions received: 5
• Number of submissions sent for review: 5
• Number of submissions accepted: 5
• Acceptance Rate (Submissions Accepted / Submissions Received × 100): 100
• Average number of reviews per paper: 1
• Total number of reviewers involved: 2
• Contact person for queries:
Name: David Simurda
Email: simurda@it.cas.cz
Affiliation: Institute of Thermomechanics, Czech Academy of Sciences
Michal Hoznedl et al 2026 J. Phys.: Conf. Ser. 3172 012001
The article describes a test rig for the experimental verification of superheated steam flow through the control and follow-up stages of an experimental steam turbine located in the Doosan Škoda Power laboratory. Owing to the innovative design concept, the turbine is equipped with two mechanically independent rotors, and the performance of the control and follow-up stages can be measured separately. For the control stage, it is also possible to simulate partial admission by covering part of the inlet cross-section and to determine the energy loss caused by partial admission. Over a wide range of rotational speeds of one or both rotors, steam inlet and outlet pressures and steam inlet temperatures, local values of efficiency, stage reactions, Mach and Reynolds numbers and losses in the inter-stage channel can be determined by measurement for both stages separately. It is possible to probe the flow fields along the blade length before and behind the second stage. Experiments, including probing, can be carried out at a steam pressure of up to 3.5 bar(a) and a temperature of up to 300 °C. The obtained data are used for tuning and verification of 3D CFD simulations and for corrections of the in-house software system for the design of flow paths.
Matteo Benvenuti et al 2026 J. Phys.: Conf. Ser. 3172 012002
This paper aims to dimension a test rig of a high-temperature regenerated Brayton heat pump, which provides heat at more than 130°C, using small-size, off-the-shelf components. This facility will enable the testing of several models of Brayton heat pump systems and components, with the final goal of highlighting the technology's key features and overcoming the challenges that may hinder its wide deployment in industry. Eventually, the designed test rig will be able to operate in four different operating conditions: regenerated closed cycle, non-regenerated closed cycle, regenerated open cycle and non-regenerated open cycle. An analysis was carried out to identify the range of parameters that can be tested with this rig.
Patrick Jagerhofer 2026 J. Phys.: Conf. Ser. 3172 012003
This paper presents a novel hybrid approach to film cooling measurement, combining infrared (IR) thermography with the seed gas concentration technique. The seed gas concentration technique is used as an in-situ calibration ground truth for the IR measurements to correct for imperfect insulation and test facility-induced thermal disturbances, such as heat up of the coolant through viscous dissipation in the cavities as well as ingress and egress in the uncooled baseline case. This new approach allows the final film cooling results to inherit the advantages of both measurement techniques. The robustness against thermal disturbances, as well as the high accuracy, are inherited from the point-wise seed gas concentration technique, while the high spatial resolution is inherited from the full surface coverage IR measurements.
This new approach is demonstrated for purge film cooling measurements in a turbine center frame (TCF) tested under Mach-similarity in the transonic test turbine facility (TTTF) at Graz University of Technology. The TCF, also known as an intermediate turbine diffuser, is a stationary duct that connects the high-pressure turbine (HPT) to the low-pressure turbine (LPT) in modern high-bypass ungeared turbofan engines. The TCF was operated in an engine-representative 1.5-stage test vehicle, where a fully purged HPT was operating upstream of the TCF and a row of LPT vanes was situated downstream of the TCF. The sources of film cooling investigated herein are the purge flows that emanate from the hub cavities of the HPT. These hub purge flows bear significant cooling potential for the downstream TCF hub surface. The TTTF is a good example to showcase the benefits of this new measurement approach, as the complexity of the rig is high and the challenging boundary conditions imposed on the technique are representative of many continuously operated, high technology readiness level (TRL) turbine test facilities.
Matthew Newville 2013 J. Phys.: Conf. Ser. 430 012007
LARCH, a package of analysis tools for XAFS and related spectroscopies, is presented. A complete rewrite of the IFEFFIT package, the initial release of LARCH preserves the core XAFS analysis procedures such as normalization, background subtraction, Fourier transforms, fitting of XANES spectra, and fitting of experimental spectra to a sum of FEFF paths, with few algorithmic changes in comparison to IFEFFIT. LARCH is written using Python and its packages for scientific programming, which gives significant improvements over IFEFFIT in the ability to handle multi-dimensional and large data sets, write complex analysis scripts, visualize data, add new functionality, and customize existing capabilities. Like the earlier version, LARCH can run from an interactive command line or in batch mode, but it can also be run as a server and accessed from clients using standard inter-process communication techniques available in a variety of computer languages. LARCH is freely available under an open source license. Examples of using LARCH are shown, future directions for development are discussed, and collaborations for adding new capabilities are actively sought.
T. G. F. Souza et al 2016 J. Phys.: Conf. Ser. 733 012039
The accuracy of dynamic light scattering (DLS) measurements is compared with transmission electron microscopy (TEM) studies for the characterization of size distributions of ceramic nanoparticles. It was found that DLS measurements using the number distribution gave accurate results when compared to TEM. The presence of dispersants and the broadening of size distributions introduce errors into DLS particle sizing measurements and shift the results to higher values.
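For context, DLS infers particle size from the measured translational diffusion coefficient D_t via the Stokes-Einstein relation (a standard result, not taken from the paper), with η the solvent viscosity:

```latex
% Stokes-Einstein relation: hydrodynamic diameter from the DLS diffusion coefficient
d_H = \frac{k_{\mathrm{B}} T}{3 \pi \eta D_t}
```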
Jérôme Kieffer and Dimitrios Karkoulis 2013 J. Phys.: Conf. Ser. 425 202012
2D area detectors like CCD or pixel detectors have become popular in the last 15 years for diffraction experiments (e.g. for WAXS, SAXS, single crystal and powder diffraction (XRPD)). These detectors have a large sensitive area of millions of pixels with high spatial resolution. The software package pyFAI has been designed to reduce SAXS, WAXS and XRPD images taken with those detectors into 1D curves (azimuthal integration) usable by other software for in-depth analysis such as Rietveld refinement, or into 2D images (a radial transformation named caking). As a library, the aim of pyFAI is to be integrated into other tools like PyMca or EDNA with a clean pythonic interface. However, pyFAI also features command-line tools for batch processing, for converting data into q-space (q being the momentum transfer) or 2θ-space (θ being the Bragg angle), and a calibration graphical interface for optimizing the geometry of the experiment using the Debye-Scherrer rings of a reference sample. PyFAI shares the geometry definition of SPD but can directly import geometries determined by the software FIT2D. PyFAI has been designed to work with any kind of detector and geometry (transmission or reflection) and relies on FabIO, a library able to read more than 20 image formats produced by detectors from 12 different manufacturers. During the transformation from Cartesian space (x,y) to polar space (2θ, χ), both local and total intensities are conserved in order to obtain accurate quantitative results. Technical details on how this integration is implemented and how it has been ported to native code and parallelized on graphics cards are discussed in this paper.
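A minimal usage sketch of the azimuthal integration described above, assuming a detector image and a PONI calibration file are available; the file names are placeholders and keyword conventions may differ between pyFAI versions.

```python
import fabio
import pyFAI

img = fabio.open("sample.edf").data                 # hypothetical detector image read via FabIO
ai = pyFAI.load("calibration.poni")                 # geometry from the calibration step
q, intensity = ai.integrate1d(img, 1000, unit="q_nm^-1")  # 1D curve (azimuthal integration)
```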
L A Falkovsky 2008 J. Phys.: Conf. Ser. 129 012004
Reflectance and transmittance of graphene in the optical region are analyzed as a function of frequency, temperature, and carrier density. We show that the optical graphene properties are determined by the direct interband electron transitions. The real part of the dynamic conductivity in doped graphene at low temperatures takes the universal constant value, whereas the imaginary part is logarithmically divergent at the threshold of interband transitions. The graphene transmittance in the visible range is independent of frequency and takes the universal value given by the fine structure constant.
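The universal values referred to in the abstract can be written compactly as follows (standard results for the interband conductivity and the visible-range transmittance, with α the fine structure constant):

```latex
% Universal interband conductivity and visible-range transmittance of graphene
\sigma_0 = \frac{e^{2}}{4\hbar}, \qquad
T = \left(1 + \frac{\pi \alpha}{2}\right)^{-2} \approx 1 - \pi\alpha \approx 97.7\%,
\qquad \alpha \approx \frac{1}{137}
```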
B K Mehta et al 2017 J. Phys.: Conf. Ser. 836 012050
A cost-effective and environmentally friendly technique for the green synthesis of silver nanoparticles is reported. Silver nanoparticles were synthesized using an ethanol extract of the fruits of Santalum album (family Santalaceae), commonly known as East Indian sandalwood. Fruits of S. album were collected and crushed. Ethanol was added to the crushed fruits and the mixture was exposed to microwaves for a few minutes. The extract was concentrated on a Buchi rotary evaporator. To this extract, a 1 mM aqueous solution of silver nitrate (AgNO3) was added. After about 24 hr of incubation, the Ag+ ions in the AgNO3 solution were reduced to Ag atoms by the extract. Silver nanoparticles were obtained in powder form. The X-ray diffraction (XRD) pattern of the prepared sample of silver nanoparticles was recorded, and the diffractogram was compared with the standard JCPDS powder diffraction card for silver. Four peaks were identified corresponding to (hkl) values of silver. The XRD study confirms that the resultant particles are silver nanoparticles with an FCC structure. The average crystallite size D, the interplanar spacing d, the lattice constant and the cell volume have been estimated. Thus, silver nanoparticles with well-defined dimensions could be synthesized by the reduction of metal ions by the fruit extract of S. album.
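The quantities estimated from the diffractogram are conventionally obtained from Bragg's law and the Scherrer equation (standard relations, not taken from the paper), with β the peak width (FWHM, in radians) and K ≈ 0.9 a shape factor:

```latex
% Bragg's law (interplanar spacing d) and the Scherrer equation (crystallite size D)
n\lambda = 2 d \sin\theta, \qquad
D = \frac{K\lambda}{\beta \cos\theta}
```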
Y R Martin et al 2008 J. Phys.: Conf. Ser. 123 012033
The input power requirements for accessing H-mode at low density and maintaining it during the density ramp in ITER are addressed by statistical means applied to the international H-mode threshold power database. Following the recent addition of new data, the improvement of existing data and the refinement of selection criteria, a revised scaling law describing the threshold power required to obtain an L-mode to H-mode transition is presented. Predictions for ITER give a threshold power of ∼52 MW in a deuterium plasma at a line-average density ne = 0.5×10^20 m^-3. At the nominal ITER H-mode density, ne = 1.0×10^20 m^-3, the required threshold power is ∼86 MW. Detailed analysis of data from individual devices suggests that the density dependence of the threshold power might increase with plasma size and magnetic field. On the other hand, the density at which the threshold power is minimal is found to decrease with plasma size and increase with magnetic field. The influence of these effects on the accessibility of the H-mode regime in ITER plasmas is discussed. Analyses of the confinement database show that, in present-day devices, H-modes are generally maintained with powers exceeding the threshold power by a factor larger than 1.5, and that, on the other hand, good confinement can be obtained close to the threshold power, although this is rarely demonstrated.
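The revised scaling law has the generic power-law form used for such threshold database fits (the fitted coefficient and exponents are given in the paper), with n̄e the line-averaged density, B_T the toroidal field and S the plasma surface area:

```latex
% Generic power-law form of the L-H threshold scaling (fitted values are in the paper)
P_{\mathrm{thr}} = C \, \bar{n}_e^{\,a} \, B_T^{\,b} \, S^{\,c}
```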