Engineering Computations
Emerald Publishing Limited
Table of Contents for Engineering Computations: articles from the current issue, Vol. 41, No. 1, including Just Accepted (EarlyCite).
https://www.emerald.com/insight/publication/issn/0264-4401/vol/41/iss/1

A resource-efficient form-finding approach to tensegrity structures
https://www.emerald.com/insight/content/doi/10.1108/EC-07-2023-0354/full/html
Heping Liu, Sanaullah, Angelo Vumiliya, Ani Luo
Engineering Computations, Vol. 41, No. 1, pp.1-17

The aim of this article is to obtain a stable tensegrity structure by using the minimum knowledge of the structure.

Three methods have been formulated based on the eigenvalue decomposition (EVD) and singular value decomposition (SVD) theorems. These two decompositions are applied to matrices computed from the minimal data of the structure: its dimension, its connectivity matrix and an initial force density matrix derived from the element types. The stability of the structure is analyzed based on the rank deficiency of the force density matrix and the equilibrium matrix.
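The rank-deficiency test described above can be sketched in a few lines of NumPy. The connectivity matrix and force density values below form a hypothetical 2-D four-node example (cables on the edges, struts on the diagonals), not a structure from the paper; the check is the usual requirement that the force density matrix have rank deficiency of at least d + 1:

```python
import numpy as np

# Hypothetical 2-D example: 4 nodes, 4 edge cables (q = +1) and
# 2 diagonal struts (q = -1). C is the member-node connectivity matrix.
C = np.array([
    [ 1, -1,  0,  0],
    [ 0,  1, -1,  0],
    [ 0,  0,  1, -1],
    [-1,  0,  0,  1],
    [ 1,  0, -1,  0],   # strut
    [ 0,  1,  0, -1],   # strut
], dtype=float)
q = np.array([1, 1, 1, 1, -1, -1], dtype=float)

# Force density matrix D = C^T diag(q) C
D = C.T @ np.diag(q) @ C

# Rank deficiency via SVD: count singular values below a tolerance.
s = np.linalg.svd(D, compute_uv=False)
tol = 1e-10 * s.max()
deficiency = int(np.sum(s < tol))

d = 2  # spatial dimension of the structure
print("rank deficiency of D:", deficiency)
print("meets the d + 1 =", d + 1, "condition:", deficiency >= d + 1)
```

For this example the force density matrix has rank 1, so the deficiency of 3 satisfies the d + 1 condition for a 2-D structure.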

The main purpose of this article is to use the defined methods to find (1) the nodal coordinates of the structure, (2) the final force density values of the structure, (3) single self-stress from multiple self-stresses and (4) the stable structure.

The defined approaches differ in (1) the selection of eigenvalues, (2) the selection of nodal coordinates from the first decomposition theorem, (3) the further selection of mechanism modes and force density values and (4) the solution of a single feasible self-stress from multiple self-stresses.

DOI: 10.1108/EC-07-2023-0354 · Published: 2023-11-21 · © 2023 Emerald Publishing Limited
Dynamic analysis of a highway bridge beam under the passage of compound heavy trucks with multiple trailers
https://www.emerald.com/insight/content/doi/10.1108/EC-10-2022-0641/full/html
Diego Gabriel Metz, Roberto Dalledone Machado, Marcos Arndt, Carlos Eduardo Rossigali
Engineering Computations, Vol. 41, No. 1, pp.18-45

Realistic composite vehicles with 2, 3, 5 and 9 axles, consisting of a truck with one or two trailers, are addressed in this paper by computational models for vehicle–bridge interaction analysis.

The vehicle–bridge interaction (VBI) models are formed by sets of 2-D rigid blocks interconnected by mass, damping and stiffness elements that simulate the suspension systems. The passage of the vehicles is simulated at different speeds. Several rolling surface profiles are admitted, reflecting the maintenance grade of the pavement. Spectral density functions generated from an experimental database form the longitudinal surface irregularity profiles. A computational code based on the finite element method, written in Python, was developed using the Euler–Bernoulli beam model.
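The finite element core of such an analysis (Euler–Bernoulli beam elements, matching the authors' choice of Python) can be sketched as follows. The beam properties and load are illustrative placeholders, and a static midspan-deflection check stands in for the full moving-load VBI analysis:

```python
import numpy as np

# Illustrative simply supported bridge beam, midspan point load.
# Values are placeholders, not the paper's bridge data.
E, I, L, P, ne = 210e9, 8.0e-3, 20.0, 1.0e5, 8   # ne even: a node sits at midspan
le = L / ne

# 2-node Euler–Bernoulli element stiffness (DOFs per node: deflection w, rotation th)
k = (E * I / le**3) * np.array([
    [ 12,    6*le,   -12,    6*le   ],
    [ 6*le,  4*le**2, -6*le, 2*le**2],
    [-12,   -6*le,    12,   -6*le   ],
    [ 6*le,  2*le**2, -6*le, 4*le**2],
])

ndof = 2 * (ne + 1)
K = np.zeros((ndof, ndof))
for e in range(ne):                       # assemble global stiffness
    dofs = np.arange(2*e, 2*e + 4)
    K[np.ix_(dofs, dofs)] += k

F = np.zeros(ndof)
F[2 * (ne // 2)] = P                      # point load at the midspan node
fixed = [0, 2*ne]                         # pinned supports: w = 0 at both ends
free = [i for i in range(ndof) if i not in fixed]

u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
w_mid = u[2 * (ne // 2)]
print("FE midspan deflection (m):", w_mid)
print("analytical P*L^3/(48*E*I):", P * L**3 / (48 * E * I))
```

Because Hermite cubic shape functions reproduce the exact solution for nodal point loads, the FE midspan deflection matches the analytical value to machine precision.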

Several models of composite heavy vehicles are presented as manufactured and currently travel on major roads. Dynamic amplification factors are presented for each type of composite vehicle.

The VBI models for compound heavy vehicles are limited to 2-D.

This work contributes to improving the safety and lifetime of bridges, as well as the stability and comfort of vehicles passing over a bridge.

The structural response of the bridge is affected by the type and size of the compound vehicles, their speed and the maintenance grade of the pavement. Moreover, the vibrations produced by one axle can be superposed on those of the other axles, an effect that can generate unusual dynamic responses.

DOI: 10.1108/EC-10-2022-0641 · Published: 2023-11-23 · © 2023 Emerald Publishing Limited
Prediction of soil degree of compaction based on machine learning: a case study of two fine-grained soils
https://www.emerald.com/insight/content/doi/10.1108/EC-06-2023-0304/full/html
Yuling Ran, Wei Bai, Lingwei Kong, Henghui Fan, Xiujuan Yang, Xuemei Li
Engineering Computations, Vol. 41, No. 1, pp.46-67

The purpose of this paper is to develop an appropriate machine learning model for predicting soil compaction degree while also examining the contribution rates of three influential factors: moisture content, electrical conductivity and temperature, towards the prediction of soil compaction degree.

Taking fine-grained soils A and B as the research objects, this paper utilized laboratory test data, including a compaction parameter (moisture content), an electrical parameter (electrical conductivity) and temperature, to predict the soil degree of compaction based on five types of commonly used machine learning models (19 models in total). According to the prediction results, these models were preliminarily compared and further evaluated.

The Gaussian process regression model predicts the degree of compaction of both soils well: the prediction error rates for fine-grained soils A and B are within 6% and 8%, respectively. The contribution rates rank as moisture content > electrical conductivity >> temperature.
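As an illustration of the winning model class, a Gaussian process regression posterior mean can be written directly in NumPy. The data below are synthetic stand-ins for the laboratory measurements (which are not reproduced in the abstract), and the kernel lengthscale and noise level are assumed values:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Gaussian (RBF) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Synthetic stand-in data: columns = moisture content (%), electrical
# conductivity (mS/cm), temperature (deg C); target = degree of compaction (%),
# a smooth function plus noise. Not the paper's measurements.
rng = np.random.default_rng(0)
X = rng.uniform([8, 0.5, 10], [20, 3.0, 35], size=(220, 3))
y = 85 + 0.6*X[:, 0] + 2.0*X[:, 1] + 0.05*X[:, 2] + rng.normal(0, 0.3, 220)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

# Standardize inputs, center the target, then form the GP posterior mean.
mu, sd = Xtr.mean(0), Xtr.std(0)
Ztr, Zte = (Xtr - mu) / sd, (Xte - mu) / sd
ym = ytr.mean()
K = rbf(Ztr, Ztr) + 0.01 * np.eye(200)          # 0.01 ~ assumed noise variance
pred = rbf(Zte, Ztr) @ np.linalg.solve(K, ytr - ym) + ym

rel_err = np.abs(pred - yte) / yte * 100
print("max relative error (%):", round(rel_err.max(), 2))
```

On this smooth synthetic target the held-out relative errors stay well inside the single-digit range the paper reports for its real soils.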

By using moisture content, electrical conductivity and temperature to predict the compaction degree directly, the predicted value can be obtained with higher accuracy and the detection efficiency of the compaction degree can be improved.

DOI: 10.1108/EC-06-2023-0304 · Published: 2023-11-24 · © 2023 Emerald Publishing Limited
A DFT-based finite element model to study the elastic, buckling and vibrational characteristics of monolayer bismuthene
https://www.emerald.com/insight/content/doi/10.1108/EC-05-2023-0239/full/html
Peyman Aghdasi, Shayesteh Yousefi, Reza Ansari
Engineering Computations, Vol. 41, No. 1, pp.68-85

In this paper, the elastic, buckling and vibrational behaviors of monolayer bismuthene are studied based on density functional theory (DFT) and the finite element method (FEM).

The elastic properties computed from DFT are used to develop a finite element (FE) model of monolayer bismuthene in which the Bi-Bi bonds are simulated by beam elements and the Bi atoms are modeled by mass elements. The developed FE model is used to compute Young's modulus of monolayer bismuthene, and then to evaluate the buckling force and fundamental natural frequency of the monolayer for different geometrical parameters.
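The last step of such a model, extracting the fundamental natural frequency from the assembled stiffness and mass matrices, can be sketched generically. The 3-DOF spring-mass chain below is a toy system standing in for the beam/mass FE model, not bismuthene data:

```python
import numpy as np

# With a stiffness matrix K (beam elements for the bonds) and a lumped mass
# matrix M (point masses for the atoms) assembled, the fundamental frequency
# follows from the smallest eigenvalue of K v = w^2 M v. Toy values below.
k, m = 50.0, 2.0
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  2]], dtype=float)
M = m * np.eye(3)

# M is diagonal here, so M^-1 K stays symmetric and eigvalsh applies.
w2 = np.linalg.eigvalsh(np.linalg.inv(M) @ K)
f0 = np.sqrt(w2.min()) / (2 * np.pi)
print("fundamental frequency (Hz):", round(f0, 3))
```

For this chain the smallest eigenvalue is (k/m)(2 - sqrt(2)), giving a fundamental frequency of about 0.609 Hz.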

Comparing the results of FEM and DFT shows that the proposed model can predict Young's modulus of monolayer bismuthene with acceptable accuracy. It is also shown that the influence of the vertical side length on the fundamental natural frequency of monolayer bismuthene is not significant, whereas the vibrational characteristics are significantly affected by the horizontal side length.

DFT and FEM are used to study the elastic, vibrational and buckling properties of monolayer bismuthene. The developed model predicts Young's modulus of monolayer bismuthene accurately. The effect of the vertical side length on the fundamental natural frequency is negligible, whereas the vibrational characteristics are significantly affected by the horizontal side length.

DOI: 10.1108/EC-05-2023-0239 · Published: 2023-11-21 · © 2023 Emerald Publishing Limited
Hydraulic loss characteristics of closed-loop piping system during start-up process of mixed-flow pump
https://www.emerald.com/insight/content/doi/10.1108/EC-05-2023-0212/full/html
Wei Li, Yuxin Huang, Leilei Ji, Lingling Ma, Ramesh Agarwal
Engineering Computations, Vol. 41, No. 1, pp.86-106

The purpose of this study is to explore the transient characteristics of mixed-flow pumps during the start-up process.

This study uses a full-flow-field transient calculation method for mixed-flow pumps based on a closed-loop model.

The findings show the hydraulic losses and internal flow characteristics of the piping system during the start-up process.

The main limitation is the large computational cost.

The approach improves the accuracy of current numerical simulation results for the transient process of mixed-flow pumps.

It also simplifies the setting of boundary conditions in the transient calculation.

DOI: 10.1108/EC-05-2023-0212 · Published: 2023-11-28 · © 2023 Emerald Publishing Limited
Multi-view fuzzy C-means clustering with kernel metric and local information for color image segmentation
https://www.emerald.com/insight/content/doi/10.1108/EC-08-2023-0403/full/html
Xiumei Cai, Xi Yang, Chengmao Wu
Engineering Computations, Vol. 41, No. 1, pp.107-130

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of them lack robustness. The purpose of this paper is to investigate a new algorithm that segments noisy images more accurately while retaining as much image detail as possible.

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
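The kernel-distance substitution described in the last step can be sketched for a single view. This is a minimal kernelized FCM, not the authors' multi-view algorithm with view weights and the weighted fuzzy factor; the blob data and Gaussian kernel width are illustrative:

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=1.0, iters=50):
    """Single-view kernelized FCM sketch: the Gaussian-kernel-induced
    distance d^2(x, v) = 2 * (1 - K(x, v)) replaces the Euclidean distance
    in the standard FCM membership and center updates."""
    V = X[np.linspace(0, len(X) - 1, c).astype(int)]   # deterministic init
    for _ in range(iters):
        k = np.exp(-((X[:, None, :] - V[None]) ** 2).sum(-1) / (2 * sigma**2))
        d2 = np.maximum(2.0 * (1.0 - k), 1e-12)        # kernel-induced distance^2
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(1, keepdims=True)            # membership update
        W = (U ** m) * k                               # kernel-weighted memberships
        V = (W[:, :, None] * X[:, None, :]).sum(0) / W.sum(0)[:, None]
    return U, V

# Two synthetic 2-D blobs to exercise the update loop.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
U, V = kernel_fcm(X, c=2, sigma=1.0)
labels = U.argmax(1)
print("cluster sizes:", np.bincount(labels))
```

Because the kernel saturates for distant points, outliers pull on the centers far less than under a plain Euclidean metric, which is the robustness property the paper exploits.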

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Most of the existing multi-view clustering algorithms are for multi-view datasets, and the multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and can better preserve the details of the original image.

DOI: 10.1108/EC-08-2023-0403 · Published: 2024-01-02 · © 2023 Emerald Publishing Limited
Dynamic analysis of Pine Flat dam–water–foundation rock system utilizing the H-W truncation boundary condition
https://www.emerald.com/insight/content/doi/10.1108/EC-02-2023-0082/full/html
Vahid Lotfi, Hesamedin Abdorazaghi
Engineering Computations, Vol. 41, No. 1, pp.131-154

The response of the Pine Flat dam–water–foundation rock system is studied by a newly described approach, FE-(FE-TE)-FE. The initial part of the study focuses on time-harmonic analysis, in which the transfer functions can be compared against the corresponding responses obtained by the FE-(FE-HE)-FE approach (referred to as the exact method, as it employs a rigorous fluid hyper-element). Subsequently, transient analysis is carried out; there, results can only be compared between the low and high normalized reservoir length cases. The sensitivity of the results to the normalized reservoir length is thereby controlled.

In the present study, dynamic analysis of a typical concrete gravity dam–water–foundation rock system is formulated by the FE-(FE-TE)-FE approach. In this technique, the dam and foundation rock are discretized by plane solid finite elements, while the near-field region of the water domain is discretized by plane fluid finite elements. Moreover, the H-W (Hagstrom–Warburton) high-order condition is imposed at the reservoir truncation boundary; this is formulated by employing a truncation element at that boundary. The reservoir far-field is thus excluded from the discretized model.

High orders of the H-W condition, such as the O5-5 order considered herein, generate highly accurate responses for both possible excitations and for both fully reflective and absorptive reservoir bottom conditions: in time-harmonic analyses, the transfer functions are hardly distinguishable from the corresponding exact responses obtained through the FE-(FE-HE)-FE approach. This holds for both the low and high normalized reservoir length cases (L/H = 1 and 3). Moreover, transient analysis leads to practically exact results (in the numerical sense) when a high-order H-W truncation element is employed; in other words, the results are not sensitive to the normalized reservoir length under these circumstances.

Dynamic analysis of concrete gravity dam–water–foundation rock systems is formulated by a new method. The salient aspect of the technique is that it utilizes the H-W high-order condition at the truncation boundary. The method is discussed for all types of excitation and reservoir bottom conditions.

DOI: 10.1108/EC-02-2023-0082 · Published: 2024-01-11 · © 2023 Emerald Publishing Limited
A study of rotary cutting machine (RCM) performance on Korean granite
https://www.emerald.com/insight/content/doi/10.1108/EC-08-2023-0462/full/html
Young Jin Shin, Ebrahim Farrokh, Jaehoon Jung, Jaewon Lee, Hanbyul Kang
Engineering Computations, Vol. 41, No. 1, pp.155-182

Despite the many advantages this type of equipment offers, there are still some major drawbacks. The linear cutting machine (LCM) cannot accurately simulate the true rock-cutting process because (1) it does not account for the circular path along which tunnel boring machine (TBM) disk cutters cut the tunnel face, (2) it does not accurately model the position of a disk cutter on the cutterhead and (3) it cannot replicate the rotational speed of a TBM. To address these issues and to mimic the real rock-cutting process, new laboratory testing equipment was developed by Hyundai Engineering and Construction.

A new testing machine, called the rotary cutting machine (RCM), is designed to simulate the excavation process of hard-rock TBMs; it includes features such as a TBM cutterhead, RPM simulation, a constant normal force mode and a constant penetration rate mode. Two sets of tests were conducted on Hwandeung granite using different disk cutter sizes to analyze the cutting forces in various excavation modes. The results are analyzed using statistical analysis and dimensional analysis. A new model is generated using dimensional analysis, and its results are compared against the results of actual cases.

The effectiveness of the new RCM test was demonstrated in its ability to apply various modes of excavation. Initial analysis of chip size revealed that the thickness of the chips is largely dependent on the cutter spacing. Tests with varying RPM showed that an increase in RPM results in an increase in the normal force and rolling force. The cutting coefficient (CC) demonstrated a linear correlation with penetration. The optimal specific energy is achieved at an S/p ratio of around 15. However, a slightly lower S/p ratio can also be used in the design if the cutter specifications permit. A dimensional analysis was utilized to develop a new RCM model based on the results from approximately 1200 tests. The model's applicability was demonstrated through a comparison of TBM penetration data from 26 tunnel projects globally. Results indicated that the predicted penetration rates by the RCM test model were in good agreement with actual rates for the majority of cases. However, further investigation is necessary for softer rock types, which will be conducted in the future using concrete blocks.
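The specific-energy optimum quoted above reduces to a simple computation once mean rolling forces are known: SE = FR / (S * p), minimized over the tested spacings. The force values below are synthetic placeholders, not RCM measurements:

```python
import numpy as np

# Specific energy of cutting, SE = FR / (S * p): rolling force over the
# cut area swept per unit length. Penetration, spacings and rolling forces
# below are illustrative placeholders only.
p = 6.0                                    # penetration, mm
S = np.array([40, 60, 80, 90, 100, 120])   # cutter spacing, mm
FR = np.array([14, 18, 23, 25.5, 29, 38]) * 1e3   # mean rolling force, N

SE = FR / (S * p * 1e-6)   # N/m^2, i.e. J/m^3
ratios = S / p
best = ratios[SE.argmin()]
print("S/p ratios tested:", ratios)
print("optimal S/p:", best)
```

With these placeholder forces the minimum specific energy falls at S/p = 15, matching the order of magnitude the paper reports for its granite tests.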

The originality of the research lies in the development of Hyundai Engineering and Construction’s advanced full-scale laboratory rotary cutting machine (RCM), which accurately replicates the excavation process of hard-rock tunnel boring machines (TBMs). The study provides valuable insights into cutting forces, chip size, specific energy, RPM and excavation modes, enhancing understanding and decision-making in hard-rock excavation processes. The research also presents a new RCM model validated against TBM penetration data, demonstrating its practical applicability and predictive accuracy.

A study of rotary cutting machine (RCM) performance on Korean granite
Young Jin Shin, Ebrahim Farrokh, Jaehoon Jung, Jaewon Lee, Hanbyul Kang
Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-08-2023-0462. Published 2024-01-23. © 2023 Emerald Publishing Limited
https://www.emerald.com/insight/content/doi/10.1108/EC-08-2023-0462/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Assessment of shear band evolution using discrete element modelling
https://www.emerald.com/insight/content/doi/10.1108/EC-07-2023-0327/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Yang Yang, Yinghui Tian, Runyu Yang, Chunhui Zhang, Le Wang
Engineering Computations, Vol. 41, No. 1, pp.183-201

The objective of this paper is to quantitatively assess shear band evolution by using two-dimensional discrete element method (DEM).

The DEM model was first calibrated by retrospectively modelling existing triaxial tests. A series of DEM analyses was then conducted with a focus on particle rotation during loading. An approach based on particle rotation was developed to precisely distinguish the shear band region from the surrounding material. In this approach, a threshold rotation angle ω0 was defined to distinguish the potential particles inside and outside the shear band, and an index g(ω0) was introduced to assess the discrepancy between the rotation response inside and outside the shear band. The most distinct shear band region is determined by the ω0 corresponding to the peak of g(ω0). Using the proposed approach, the shear band development of two computational cases with different typical localised failure patterns was successfully examined by quantitatively measuring the inclination angle and thickness of the shear band, as well as the microscopic quantities.
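
The threshold-scan idea in this paragraph can be sketched in a few lines. The concrete definition of g(ω0) used below, the difference in mean absolute rotation between the tentative inside and outside populations, is an illustrative assumption, not the paper's exact index, and the particle data are synthetic:

```python
import numpy as np

def shear_band_threshold(omega, candidates):
    """Scan candidate thresholds w0; particles with |rotation| > w0 are
    tentatively 'inside' the band, and g(w0) measures the discrepancy in
    mean absolute rotation between inside and outside. Return the w0
    with peak g(w0)."""
    best_w0, best_g = None, -np.inf
    for w0 in candidates:
        inside = np.abs(omega) > w0
        if inside.all() or not inside.any():
            continue                      # degenerate partition, skip
        g = np.abs(omega[inside]).mean() - np.abs(omega[~inside]).mean()
        if g > best_g:
            best_w0, best_g = w0, g
    return best_w0, best_g

# Synthetic rotations: quiescent background plus a strongly rotating band.
rng = np.random.default_rng(0)
omega = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(10.0, 1.0, 100)])
w0, g = shear_band_threshold(omega, np.linspace(0.5, 8.0, 16))
```

With a clear separation between band and background rotations, g(ω0) plateaus once ω0 exceeds the background noise level, and the returned threshold isolates the strongly rotating particles.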

The results show that shear band formation is stress-dependent, transitioning from conjugate double shear bands to a single shear band as the confining stress increases. The shear band evolution of the two typical localised failure modes exhibits opposite trends with increasing strain level, in both inclination angle and thickness. The shear band featured a larger volumetric dilatancy and a lower coordination number than the surrounding material. The shear band also significantly disturbs the induced anisotropy of the soil.

This paper proposes an approach to quantitatively assess shear band evolution based on the results of two-dimensional DEM modelling.

Assessment of shear band evolution using discrete element modelling. Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-07-2023-0327. Published 2024-01-22. © 2024 Emerald Publishing Limited
Observer-based preview control for T-S fuzzy systems
https://www.emerald.com/insight/content/doi/10.1108/EC-07-2023-0341/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Li Li, Hui Ye, Xiaohua Meng
Engineering Computations, Vol. 41, No. 1, pp.202-218

Considering the unmeasurable states of the systems and the previewed reference signal, a novel fuzzy observer-based preview controller, which is a mixed controller of the fuzzy observer-based controller, fuzzy integrator and preview controller, is considered to address the tracking control problem.

The authors employ an augmentation technique to construct an augmented error system for uncertain T-S fuzzy discrete-time systems with time-varying uncertainties. Additionally, the authors obtain the corresponding linear matrix inequality (LMI) conditions for designing the preview controller.

This paper discusses the preview tracking problem for nonlinear systems. First, considering the unmeasurable states of the systems and the previewed reference signal, a novel fuzzy observer-based preview controller, which mixes a fuzzy observer-based controller, a fuzzy integrator and a preview controller, is considered to address the tracking control problem. Then, using a fuzzy Lyapunov functional with the linear matrix inequality (LMI) technique, new sufficient conditions for the asymptotic stability of the augmented system are derived. The preview controller and fuzzy observer can be designed in one step. Finally, a numerical example illustrates the effectiveness of the results.
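
As a minimal illustration of the Lyapunov/LMI stability machinery behind such conditions (not the paper's fuzzy observer-based design), the discrete-time condition AᵀPA − P ≺ 0 with P ≻ 0 can be checked numerically for a toy system by solving the equality form AᵀPA − P = −Q with Q ≻ 0; the matrices below are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Discrete-time system x_{k+1} = A x_k. Asymptotic stability is certified
# by a P > 0 satisfying A'PA - P < 0; here we take the equality form
# A'PA - P = -Q with Q > 0, a special case of the LMI condition.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)      # solves A'PA - P + Q = 0
stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

Since both eigenvalues of A lie inside the unit circle, the computed P is positive definite, certifying asymptotic stability; the full LMI approach searches over such P (and controller gains) with a semidefinite programming solver.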

An augmented error system is successfully constructed by the state augmentation approach. A novel preview controller is designed to address the tracking control problem. The preview controller and fuzzy observer can be designed in one step.

Observer-based preview control for T-S fuzzy systems. Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-07-2023-0341. Published 2024-01-23. © 2024 Emerald Publishing Limited
Numerical calculation of shock wave overpressure produced by multiple cloud detonation
https://www.emerald.com/insight/content/doi/10.1108/EC-05-2023-0244/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Zeye Fu, Jiahao Zou, Luxin Han, Qi Zhang
Engineering Computations, Vol. 41, No. 1, pp.219-236

A model for calculating the global overpressure time history of a single cloud detonation from the overpressure time histories at discrete positions within the range of a single cloud detonation is proposed and verified. The overpressure distribution produced by multiple cloud detonation, and the influence of cloud spacing and of the fuel mass of each cloud on that distribution, are studied.

A calculation method is used to obtain the global overpressure field distribution after single cloud detonation from the overpressure time history of discrete distance to detonation center after single cloud detonation. On this basis, the overpressure distribution produced by multi-cloud under different cloud spacing and different fuel mass conditions is obtained.
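
The reconstruct-then-superpose idea can be sketched on a grid. The radial decay table below is hypothetical, and summing the peak-overpressure fields of the three clouds assumes simultaneous, linear superposition, which is a deliberate simplification of the paper's time-history calculation:

```python
import numpy as np

# Hypothetical peak-overpressure decay (MPa) tabulated at discrete gauge
# distances (m) from a single cloud's detonation centre.
r_gauge = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 60.0])
p_gauge = np.array([1.2, 0.6, 0.25, 0.12, 0.07, 0.03])

def single_cloud(X, Y, cx, cy):
    """Radially interpolate the tabulated decay to every grid point."""
    r = np.hypot(X - cx, Y - cy)
    return np.interp(r, r_gauge, p_gauge, right=0.0)

# Uniform grid; three clouds spaced 40 m apart along the x-axis.
x = np.linspace(-80.0, 160.0, 481)        # 0.5 m resolution
X, Y = np.meshgrid(x, x)
spacing = 40.0
field = sum(single_cloud(X, Y, i * spacing, 0.0) for i in range(3))

cell = (x[1] - x[0]) ** 2
area_multi = np.count_nonzero(field > 0.1) * cell    # m^2 above 0.1 MPa
area_single = np.count_nonzero(single_cloud(X, Y, 0.0, 0.0) > 0.1) * cell
```

Counting grid cells above the 0.1 MPa threshold gives the kind of area comparison reported in the findings: the superposed three-cloud field covers a larger damage area than a single detonation.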

The results show that for 150 kg of fuel, when the spacing of the three clouds is 40 m and 50 m, respectively, the area over which overpressure exceeds 0.1 MPa is 5,496.48 m² and 6,235.2 m², which is 2.89 times and 3.28 times that of a single cloud detonation. The superposition effect can be ignored when the spacing between the three clouds is greater than 60 m. In the case of fixed cloud spacing, once the overpressure forms a continuous effective superposition, the marginal utility of fuel decreases.

A model for calculating the global overpressure time history of a single cloud detonation from the overpressure time histories at discrete positions within the range of a single cloud detonation is proposed and verified. Based on this method, the global overpressure field of a single cloud detonation is reconstructed, and the superimposed overpressure distribution characteristics of a three-cloud detonation are calculated and analyzed.

Numerical calculation of shock wave overpressure produced by multiple cloud detonation. Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-05-2023-0244. Published 2024-01-25. © 2024 Emerald Publishing Limited
A complex model decomposition algorithm based on 3D frame fields and features
https://www.emerald.com/insight/content/doi/10.1108/EC-01-2023-0037/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Chengpeng Zhang, Zhihua Yu, Jimin Shi, Yu Li, Wenqiang Xu, Zheyi Guo, Hongshi Zhang, Zhongyuan Zhu, Sheng Qiang
Engineering Computations, Vol. 41, No. 1, pp.237-258

Hexahedral meshing is one of the most important steps in performing an accurate simulation using finite element analysis (FEA). However, the current hexahedral meshing method in the industry is nonautomatic and inefficient, i.e. manually decomposing the model into suitable blocks and obtaining the hexahedral mesh from these blocks by mapping or sweeping algorithms. The purpose of this paper is to propose an almost automatic decomposition algorithm based on the 3D frame field and model features to replace the traditional time-consuming and laborious manual decomposition method.

The proposed algorithm is based on the 3D frame field and features, where features are used to construct feature-cutting surfaces and the 3D frame field is used to construct singular-cutting surfaces. The feature-cutting surfaces constructed from concave features first reduce the complexity of the model and decompose it into some coarse blocks. Then, an improved 3D frame field algorithm is performed on these coarse blocks to extract the singular structure and construct singular-cutting surfaces to further decompose the coarse blocks. In most modeling examples, the proposed algorithm uses both types of cutting surfaces to decompose models fully automatically. In a few examples with special requirements for hexahedral meshes, the algorithm requires manual input of some user-defined cutting surfaces and constructs different singular-cutting surfaces to ensure the effectiveness of the decomposition.

Benefiting from the feature decomposition and the 3D frame field algorithm, the output blocks of the proposed algorithm have no inner singular structure and are suitable for the mapping or sweeping algorithm. The introduction of internal constraints makes 3D frame field generation more robust in this paper, and it can automatically correct some invalid 3–5 singular structures. In a few examples with special requirements, the proposed algorithm successfully generates valid blocks even though the singular structure of the model is modified by user-defined cutting surfaces.

The proposed algorithm takes advantage of feature decomposition and the 3D frame field to generate suitable blocks for a mapping or sweeping algorithm, which saves a lot of simulation time and requires less experience. The user-defined cutting surfaces enable the creation of special hexahedral meshes, which was difficult with previous algorithms. An improved 3D frame field generation method is proposed to correct some invalid singular structures and improve the robustness of previous methods.

A complex model decomposition algorithm based on 3D frame fields and features. Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-01-2023-0037. Published 2024-02-09. © 2024 Emerald Publishing Limited
Vibration control enhancement in a full vehicle dynamic model by optimization of the controller’s gain parameters
https://www.emerald.com/insight/content/doi/10.1108/EC-04-2023-0178/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Leonardo Valero Pereira, Walter Jesus Paucar Casas, Herbert Martins Gomes, Luis Roberto Centeno Drehmer, Emanuel Moutinho Cesconeto
Engineering Computations, Vol. 41, No. 1, pp.259-286

In this paper, improvements in reducing transmitted accelerations in a full vehicle are obtained by optimizing the gain parameters of an active control in a roughness road profile.

For a classically designed linear quadratic regulator (LQR) control, the vibration attenuation performance will depend on weighting matrices Q and R. A methodology is proposed in this work to determine the optimal elements of these matrices by using a genetic algorithm method to get enhanced controller performance. The active control is implemented in an eight degrees of freedom (8-DOF) vehicle suspension model, subjected to a standard ISO road profile. The control performance is compared against a controlled system with few Q and R parameters, an active system without optimized gain matrices, and an optimized passive system.
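
The optimize-the-weights idea can be sketched with a 1-DOF stand-in for the paper's 8-DOF model. The mass-spring-damper parameters, the log-parameterization of Q and R and the bare-bones mutation-only evolutionary search are all illustrative assumptions, not the authors' setup:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy stand-in for the paper's 8-DOF model: a single mass-spring-damper,
# states [position, velocity]; all numbers here are illustrative.
m, k, c = 1.0, 20.0, 1.5
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

def rms_accel(params):
    """Closed-loop RMS acceleration for LQR weights Q = diag(exp(p1, p2)),
    R = [exp(p3)], simulated from an initial displacement (Euler steps)."""
    q1, q2, r1 = np.exp(params)              # log-params keep weights > 0
    P = solve_continuous_are(A, B, np.diag([q1, q2]), np.array([[r1]]))
    K = B.T @ P / r1                         # LQR gain, R^{-1} B^T P
    Acl = A - B @ K
    x, dt, acc2 = np.array([0.1, 0.0]), 1e-3, 0.0
    for _ in range(4000):
        dx = Acl @ x
        acc2 += dx[1] ** 2                   # total acceleration of the mass
        x = x + dt * dx
    return float(np.sqrt(acc2 / 4000))

# Bare-bones evolutionary search (elitist selection + Gaussian mutation).
rng = np.random.default_rng(1)
pop = rng.normal(0.0, 1.0, size=(12, 3))
for _ in range(15):
    fitness = np.array([rms_accel(p) for p in pop])
    parents = pop[np.argsort(fitness)[:4]]   # keep the 4 best
    children = parents[rng.integers(0, 4, 8)] + rng.normal(0.0, 0.3, (8, 3))
    pop = np.vstack([parents, children])
best = min(pop, key=rms_accel)
```

The design choice mirrors the paper's: the Riccati solve guarantees each candidate controller is a valid LQR, so the genetic search only explores the weighting space, never the gain space directly.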

The control with 12 optimized parameters for Q and R provided the best vibration attenuation, significantly reducing the root mean square (RMS) accelerations at the driver’s seat and car body.

The research has positive implications for a wide class of active control systems, especially those based on an LQR, as verified by the multibody dynamic systems tested in the paper.

Better active control gains can be devised to improve performance in vibration attenuation.

The main contribution proposed in this work is the improvement of the Q and R parameters simultaneously, in a full 8-DOF vehicle model, which minimizes the driver’s seat acceleration and, at the same time, guarantees vehicle safety.

Vibration control enhancement in a full vehicle dynamic model by optimization of the controller’s gain parameters. Engineering Computations, Vol. 41, No. 1. DOI: 10.1108/EC-04-2023-0178. Published 2024-02-26. © 2024 Emerald Publishing Limited
Parallel and automatic mesh sizing field generation for complicated CAD models
https://www.emerald.com/insight/content/doi/10.1108/EC-03-2023-0143/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Juelin Leng, Quan Xu, Tiantian Liu, Yang Yang, Peng Zheng
Engineering Computations, Vol. ahead-of-print, No. ahead-of-print, pp.-

The purpose of this paper is to present an automatic approach for mesh sizing field generation of complicated computer-aided design (CAD) models.

In this paper, the authors present an automatic approach for mesh sizing field generation. First, a source point extraction algorithm is applied to capture curvature and proximity features of CAD models. Second, according to the distribution of feature source points, an octree background mesh is constructed for storing element size values. Third, the mesh size value at each node of the background mesh is calculated by interpolating the local feature size of the nearby source points, yielding an initial mesh sizing field. Finally, a theoretically guaranteed smoothing algorithm is developed to restrict the gradient of the mesh sizing field.
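
The final gradient-restriction step can be illustrated on a 1-D stand-in for the octree background mesh. The gradient bound β, the node spacing and the imposed source-point size are made-up values, and the paper's actual smoothing algorithm operates on octree nodes rather than a line of nodes:

```python
import numpy as np

def smooth_sizing(h, dx, beta, max_iter=100):
    """Limit the sizing-field gradient so neighbouring nodes satisfy
    |h[i] - h[i+1]| <= beta * dx, via forward/backward relaxation sweeps."""
    h = h.copy()
    for _ in range(max_iter):
        changed = False
        for i in range(1, len(h)):            # forward sweep
            if h[i] > h[i - 1] + beta * dx:
                h[i] = h[i - 1] + beta * dx
                changed = True
        for i in range(len(h) - 2, -1, -1):   # backward sweep
            if h[i] > h[i + 1] + beta * dx:
                h[i] = h[i + 1] + beta * dx
                changed = True
        if not changed:
            break
    return h

h0 = np.full(50, 5.0)      # coarse default size everywhere
h0[20] = 0.5               # small size imposed by a nearby source point
h = smooth_sizing(h0, dx=1.0, beta=0.2)
```

Sizes are only ever reduced, so the small feature size is preserved while the surrounding sizes grade smoothly up to the coarse default, which is exactly the property a mesh generator needs from the sizing field.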

To achieve high performance, the proposed approach has been implemented in multithreaded parallel using OpenMP. Numerical results demonstrate that the proposed approach is remarkably efficient at constructing a reasonable mesh sizing field for complicated CAD models and is applicable for generating geometrically adaptive triangle/tetrahedral meshes. Moreover, since the mesh sizing field is defined on an octree background mesh, local size values can be queried with high efficiency in the subsequent mesh generation procedure.

How to determine a reasonable mesh size for complicated CAD models is often a bottleneck of mesh generation. For complicated models with thousands or even tens of thousands of geometric entities, it is time-consuming to construct an appropriate mesh sizing field for generating a high-quality mesh. A parallel algorithm for mesh sizing field generation with low computational complexity is presented in this paper, and its usability and efficiency have been verified.

Parallel and automatic mesh sizing field generation for complicated CAD models. Engineering Computations, ahead-of-print. DOI: 10.1108/EC-03-2023-0143. Published 2024-01-09. © 2023 Emerald Publishing Limited
Variational sparse diffusion and its application in mesh processing
https://www.emerald.com/insight/content/doi/10.1108/EC-07-2023-0390/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Yongjiang Xue, Wei Wang, Qingzeng Song
Engineering Computations, Vol. ahead-of-print, No. ahead-of-print, pp.-

The primary objective of this study is to tackle the enduring challenge of preserving feature integrity during the manipulation of geometric data in computer graphics. Our work aims to introduce and validate a variational sparse diffusion model that enhances the capability to maintain the definition of sharp features within meshes throughout complex processing tasks such as segmentation and repair.

We developed a variational sparse diffusion model that integrates a high-order L1 regularization framework with Dirichlet boundary constraints, specifically designed to preserve edge definition. This model employs an innovative vertex updating strategy that optimizes the quality of mesh repairs. We leverage the augmented Lagrangian method to address the computational challenges inherent in this approach, enabling effective management of the trade-off between diffusion strength and feature preservation. Our methodology involves a detailed analysis of segmentation and repair processes, focusing on maintaining the acuity of features on triangulated surfaces.
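
The augmented Lagrangian treatment of an L1 term can be illustrated with its simplest relative: a first-order (total-variation-like) 1-D problem solved by ADMM with the split d = Du. This is a sketch of the optimization machinery, not the paper's high-order model on triangulated surfaces, and λ, ρ and the test signal are arbitrary:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_l1_smooth(f, lam=0.5, rho=2.0, iters=200):
    """Minimize 0.5*||u - f||^2 + lam*||D u||_1 with D the forward
    difference operator, via the augmented Lagrangian / ADMM split d = D u."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # (n-1, n) difference matrix
    M = np.eye(n) + rho * D.T @ D             # normal matrix of the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)                       # scaled dual variable
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(M, f + rho * D.T @ (d - b))
        d = soft(D @ u + b, lam / rho)        # sparsity-inducing step
        b = b + D @ u - d                     # dual ascent
    return u

rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(30), np.ones(30)]) + 0.05 * rng.normal(size=60)
u = admm_l1_smooth(f)
```

The L1 penalty drives most differences exactly to zero while leaving the step largely intact, which is the feature-preserving behaviour the variational sparse model exploits (with higher-order operators) on meshes.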

Our findings indicate that the proposed variational sparse diffusion model significantly outperforms traditional smooth diffusion methods in preserving sharp features during mesh processing. The model ensures the delineation of clear boundaries in mesh segmentation and achieves high-fidelity restoration of deteriorated meshes in repair tasks. The innovative vertex updating strategy within the model contributes to enhanced mesh quality post-repair. Empirical evaluations demonstrate that our approach maintains the integrity of original, sharp features more effectively, especially in complex geometries with intricate detail.

The originality of this research lies in the novel application of a high-order L1 regularization framework to the field of mesh processing, a method not conventionally applied in this context. The value of our work is in providing a robust solution to the problem of feature degradation during the mesh manipulation process. Our model’s unique vertex updating strategy and the use of the augmented Lagrangian method for optimization are distinctive contributions that enhance the state-of-the-art in geometry processing. The empirical success of our model in preserving features during mesh segmentation and repair presents an advancement in computer graphics, offering practical benefits to both academic research and industry applications.

DOI: 10.1108/EC-07-2023-0390. Published 2024-03-04. © 2024 Emerald Publishing Limited.
Improved multiple-Toeplitz matrices reconstruction method using quadratic spatial smoothing for coherent signals DOA estimation
https://www.emerald.com/insight/content/doi/10.1108/EC-08-2023-0416/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Bingbing Qi, Lijun Xu, Xiaogang Liu
Engineering Computations, Vol. ahead-of-print, No. ahead-of-print

The purpose of this paper is to exploit the multiple-Toeplitz matrices reconstruction method combined with quadratic spatial smoothing processing to improve the direction-of-arrival (DOA) estimation performance of coherent signals at low signal-to-noise ratios (SNRs).

An improved multiple-Toeplitz matrices reconstruction method is proposed via quadratic spatial smoothing processing. Our proposed method takes advantage of the available information contained in the auto-covariance matrices of individual Toeplitz matrices and the cross-covariance matrices of different Toeplitz matrices, which results in a higher noise suppression ability.
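For readers unfamiliar with why covariance reconstruction is needed here, the sketch below demonstrates the baseline problem and the standard remedy: fully coherent sources make the array covariance rank-deficient, and spatial smoothing (here, plain forward smoothing, not the authors' quadratic scheme) restores the rank so that subspace methods such as MUSIC work again. All function names, the array geometry and the simulation parameters are illustrative assumptions.

```python
import numpy as np

def steering(m, theta_deg):
    """Steering vector of an m-element half-wavelength-spaced ULA."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def forward_smooth(R, L):
    """Average the M-L+1 overlapping LxL principal blocks of R
    (forward spatial smoothing decorrelates coherent signals)."""
    M = R.shape[0]
    K = M - L + 1
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K

def music_doa(R, n_src, grid=np.arange(-90.0, 90.0, 0.1)):
    """MUSIC spectrum search; returns the n_src strongest peak angles."""
    L = R.shape[0]
    w, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, :L - n_src]                     # noise subspace
    spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(L, t)) ** 2
                     for t in grid])
    est, s = [], spec.copy()
    for _ in range(n_src):
        i = int(np.argmax(s))
        est.append(grid[i])
        s[max(0, i - 50):i + 50] = 0.0        # suppress a 5-degree neighbourhood
    return sorted(est)
```

Toeplitz-matrix reconstruction methods such as the one proposed in the paper pursue the same decorrelation goal without the aperture loss that block averaging incurs, by exploiting the Toeplitz structure of the ideal ULA covariance.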

Theoretical analysis and simulation results show that, compared with the existing Toeplitz matrix processing methods, the proposed method improves the DOA estimation performance in cases with a low SNR. Especially for the cases with a low SNR and small snapshot number as well as with closely spaced sources, the proposed method can achieve much better performance on estimation accuracy and resolution probability.

The study investigates the possibility of reusing pre-existing designs for DOA estimation of coherent signals. The proposed technique makes it possible to achieve good estimation performance at low SNRs.

The paper includes implications for the DOA problem at low SNRs in communication systems.

The proposed method proved useful for DOA estimation at low SNRs.

DOI: 10.1108/EC-08-2023-0416. Published 2024-03-29. © 2024 Emerald Publishing Limited.
Very weak finite element methods: discretisation and applications
https://www.emerald.com/insight/content/doi/10.1108/EC-10-2023-0699/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Douglas Ramalho Queiroz Pacheco
Engineering Computations, Vol. ahead-of-print, No. ahead-of-print

This study aims to propose and numerically assess different ways of discretising a very weak formulation of the Poisson problem.

We use integration by parts twice to shift smoothness requirements to the test functions, thereby allowing low-regularity data and solutions.
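As a concrete illustration of this idea (a standard derivation, not taken verbatim from the article): for the Poisson problem $-\Delta u = f$ in $\Omega$ with Dirichlet data $u = g$ on $\partial\Omega$, integrating by parts twice against a test function $v \in H^2(\Omega) \cap H^1_0(\Omega)$ moves all derivatives onto $v$, so the solution need only satisfy $u \in L^2(\Omega)$:

```latex
% Very weak (ultra-weak) formulation of the Poisson problem:
% find u \in L^2(\Omega) such that, for all v \in H^2(\Omega) \cap H^1_0(\Omega),
-\int_\Omega u \,\Delta v \,\mathrm{d}x
  \;=\; \int_\Omega f\, v \,\mathrm{d}x
  \;-\; \int_{\partial\Omega} g \,\partial_n v \,\mathrm{d}s .
```

Note that the Dirichlet datum $g$ enters the right-hand side through the normal derivative of the test function, which is what permits boundary data of low regularity.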

Various conforming discretisations are presented and tested, with numerical results indicating good accuracy and stability in different types of problems.

This is one of the first articles to propose and test concrete discretisations for very weak variational formulations in primal form. The numerical results, which include a problem based on real MRI data, indicate the potential of very weak finite element methods for tackling problems with low regularity.

DOI: 10.1108/EC-10-2023-0699. Published 2024-03-22. © 2024 Emerald Publishing Limited.