Automated geometric features evaluation method for normal foot skeleton model

A "normal foot model" is a geometric model of a healthy human foot. As the comparison of processed feet requires a reference parameterization of an ideal healthy foot, it was necessary to create such a model by defining skeleton geometric features and generating the feature set on a dataset population. Manual positioning of such a number of landmarks is both a complex and time-consuming task for a skilled radiologist, not to mention the total cost of such a procedure. It was therefore desirable to formulate an automated computer algorithm performing this procedure with accuracy comparable to the manual process. The following paper describes our approach, based on automatic landmark positioning in a volumetric foot dataset. The proposed automated procedure consists of four main steps: manual landmark positioning on a reference dataset, registration of the reference dataset with the examined study, transformation of landmark positions from the reference dataset space into the examined dataset space, and calculation of the geometric features on the basis of the landmark positions. The results of our algorithm are presented and discussed in the context of the pros and cons of the automated method itself as well as in the context of the generated normal foot model.


Methodology
In foot diagnostics, X-ray imaging has been the gold standard for many years. The first papers describing patient positioning [2], angular measurements and the ranges of norms [3] for those values were written in the 1930s. Also, the great majority of surgical planning, even if sometimes computer assisted, is based on two-dimensional imaging, with the skeletal geometry measurements performed manually by radiologists in image space [4,5,6]. For hallux valgus osteotomy surgery planning it is still common to measure only several basic angles and distances in a plain radiograph.
However, new imaging techniques have been developed to describe foot pathology more precisely, the most important being Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and their decreasing cost is resulting in increasing application possibilities for those techniques [7].
One of the basic foundations of our project was to utilize the full three-dimensional information provided by 3D imaging techniques to describe foot skeleton geometry. The main clinical outcome of this assumption is the increased amount of information that can be extracted from a patient's dataset and the increased number of variables available for surgical treatment planning, resulting in a better understanding of the addressed pathologies and their treatment.
However, the defined normal foot model, and the geometry at its foundations, is not a fully general foot skeleton model but rather a specialized one, directed at the forefoot and the first ray of the foot as well as the possible deformations measured in these areas [1].
Geometric features measured in our approach are angles in 3D space, angle projections to significant planes, normalized distances and distance ratios, all measured on the basis of landmark positions. Altogether the model consists of 40 geometric features based on 85 landmarks; Fig. 1 presents the volume rendering and a slice of MRI foot data with the selected landmark positions. Some examples of the features most significant for the hallux valgus deformation in our model are: M1P1, the angle between the first metatarsal axis and the proximal hallux phalange axis; M1P1_PP, the M1P1 projection to the sole plane (the plane based on the lower skeletal points of the first metatarsal, the fifth metatarsal and the calcaneus); P1D1, the angle between the proximal and distal hallux phalange axes; P1D1_PP, the P1D1 projection to the sole plane; M1M2, the angle between the first and the second metatarsal axes; M1M2_PP, the M1M2 projection to the sole plane. Fig. 2 presents some example axes and line segments used to calculate the described angles.
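The angle and projection features described above reduce to elementary vector geometry. The following sketch (all coordinates and axis directions are hypothetical illustrations, not values from the paper) shows how an inter-axis angle and its projection onto the sole plane could be computed:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two 3D direction vectors, in degrees."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def project_to_plane(u, n):
    """Project vector u onto the plane with normal n."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    u = np.asarray(u, float)
    return u - np.dot(u, n) * n

def sole_plane_normal(p_m1, p_m5, p_calc):
    """Unit normal of the sole plane spanned by three lower skeletal points
    (first metatarsal, fifth metatarsal, calcaneus)."""
    n = np.cross(np.asarray(p_m5, float) - np.asarray(p_m1, float),
                 np.asarray(p_calc, float) - np.asarray(p_m1, float))
    return n / np.linalg.norm(n)

# Hypothetical axes, e.g. first metatarsal vs. proximal hallux phalange:
m1_axis = np.array([1.0, 0.2, 0.1])
p1_axis = np.array([1.0, -0.3, 0.0])
normal = sole_plane_normal([0, 0, 0], [4, 1, 0], [-8, 0, 0.5])
m1p1 = angle_deg(m1_axis, p1_axis)                      # 3D angle (M1P1)
m1p1_pp = angle_deg(project_to_plane(m1_axis, normal),  # sole-plane projection
                    project_to_plane(p1_axis, normal))  # (M1P1_PP)
```

The projected angle is measured between the projections of both axes onto the sole plane, which matches the "_PP" feature definitions given above.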
As the fully manual landmark positioning requires approximately two hours of a skilled radiologist's work at a radiological workstation, automating the method is all the more advisable. We propose the following procedure:
• manual positioning of all landmarks on a reference dataset
• determination of a transformation of the reference landmarked dataset into the examined dataset with the help of a free-form registration algorithm
• transformation of landmark positions from the reference dataset space into the processed volume space
• computation of the geometric features of the processed data from the transformed landmark positions.
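The landmark-transformation step of the procedure above can be sketched as follows, assuming the registration output is materialized as a dense displacement field (the function name and toy field are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def transform_landmarks(landmarks, disp_field):
    """Map landmark positions from the reference space into the examined
    dataset space by sampling a dense displacement field.

    landmarks : (N, 3) array of voxel coordinates in the reference space
    disp_field: (3, X, Y, Z) displacement vectors, in voxels
    """
    pts = np.asarray(landmarks, float).T              # shape (3, N)
    disp = np.stack([map_coordinates(disp_field[a], pts, order=1)
                     for a in range(3)])              # trilinear sampling
    return (pts + disp).T

# Toy example: a constant displacement of (1, 0, 2) voxels everywhere.
field = np.zeros((3, 8, 8, 8))
field[0] += 1.0
field[2] += 2.0
moved = transform_landmarks([[2.0, 3.0, 4.0]], field)
# moved[0] is [3.0, 3.0, 6.0]
```

Trilinear interpolation of the field is sufficient here because landmarks generally fall between voxel centers of the deformation grid.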
With this formulation we obtain a methodology which not only meets our requirements but also provides quite a high level of flexibility. Given proper parameterization, the method is not limited to a certain geometry but should be capable of transforming various landmark patterns, assuming their proper positioning (in a distinguishable neighbourhood). Furthermore, the method itself is not limited to a single modality: as long as only intramodality registrations are performed, it can be applied to different 3D data types.
Most crucially for proper landmark localization, the registration algorithm should provide a high level of accuracy and precision and assure an acceptable level of spatial deformations, particularly during elastic registration. At the same time, the algorithm has to preserve the anatomy from the medical point of view and the topology from the mathematical one.
Three-dimensional medical imaging is based primarily on CT and MRI. They are complementary and in everyday use do not exclude each other. For physicians, the main advantage of CT is the good quality of bone imaging (bone structure, shape and relation to other bones), significantly better than in the case of MRI. CT is also more easily available than MRI, fast, and easy to perform and interpret. On the other hand, while less comfortable for the patient, forbidden for patients with ferromagnetic implants and/or cardiostimulators, more expensive and harder to interpret, MRI is much better in soft-tissue imaging, providing better reconstruction of ligaments, tendons, muscles and features like intraosseous oedema; the shape of the bones, however, is not well determined. With the above in mind, the main modality selected for the whole project, especially for pre- and post-surgical assessment, was CT. However, due to the exposure of the patient to a dose of radiation, the Ethics Committee did not agree to the use of CT in the examination of the "healthy foot", and therefore the first stage of the project was based on MRI studies. This was another reason to emphasize the modality independence of the proposed method, in order to rely on the same procedures and algorithms in both MRI and CT analysis.
For the "healthy foot" model creation stage we examined a population of healthy volunteer females (the hallux valgus deformation is observed mainly in the female population) and qualified 100 subjects aged from 22 to 66. The qualification was based on subjective orthopedists' assessment employing clinical examination, initial X-ray examination, an interview, dynamic podoscopy and a following MRI examination (more detailed classification and selection rules can be found in [1]).
All 100 qualified female patients were subject to an MRI study, resulting in respective datasets based on the 3D T1 fSPGR-IRprep MR modality with an in-plane spatial resolution of 0.5 by 0.5 mm and a distance between slices of 0.8 mm.

Mathematical setting
As the well-established technique for volumetric data alignment is the data registration algorithm [8,9,10,11,12], we recover the transformation that aligns the reference volumetric dataset with the one under examination using intensity-based 3D volume registration. Such an approach relies on a nonlinear transformation which allows modelling local deformations of spatial objects. It is difficult to describe the local deformation via parameterized transformations; the method of choice is usually the FFD (free-form deformation) method [13], commonly used as a powerful modelling tool for 3D deformable objects. The basic idea of FFD is to deform an object by manipulating an underlying mesh of control points. The manipulated lattice determines the deformation function that specifies a new position for each point of the deformed surface. The final, completely combined deformation is illustrated in Fig. 3.
In our work we have used two similarity measures (energies) minimized in the process of registration. The first one, NMI (normalized mutual information) [9,15], was used in the testing phase. Due to its complexity and the difficulties in the proper definition of derivatives suitable for robust minimization procedures, we decided to use a simpler measure, the MSD (mean square difference) similarity function:

E_sim(D) = (1/|Ω|) Σ_{p∈Ω} ( I_ref(p) − I_ex(p + δp(D, p)) )²,

where I_ref and I_ex denote the intensities of the reference and examined volumes, Ω is the volume domain and δp(D, p) is the local deformation. It is well known (see e.g. [16]) that in general the registration problem, both with the NMI and the MSD minimized energies, is ill-posed: there is no single minimum and the global one can often be far from the correct solution. In order to control the deformation process it is necessary to define some constraints limiting the variability of the FFD parameters, by adding regularization terms. In our work we add a regularization defined as a squared matrix norm of the second derivatives of the deformation δp(D, p):

E_reg(D) = ∫_Ω ‖ H(δp(D, p)) ‖² dp,

where H(δp(D, p)) is the matrix of second-order partial derivatives of the deformation. The value of this term increases when the deformation changes rapidly and stays at a low level when the deformation is smooth; note that E_reg(D) is nothing else than the distance between D and the subspace of affine transforms in a suitable Sobolev H² space. A generalization of this penalty function was defined in [17]. The final energy function used in the optimization process is defined as:

E(D) = E_sim(D) + α E_reg(D),

where α denotes the regularization factor for the penalty function. A higher value of α results in a smoother, less deformed final deformation field. This definition allows for an analytical formulation of the penalty function gradients, which can be used to guide the optimization process. In this work we decided to use the FFD model based on tricubic Catmull-Rom splines, which have good localization properties: the support of the basis spline function interpolating {v(0) = 1, v(k) = 0 for k ≠ 0} is zero outside the [−2, 2] interval.
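A minimal numerical sketch of this energy follows, under two stated simplifications: the displacement field is represented densely on the voxel grid (the paper optimizes over FFD control points), and the penalty uses only pure second finite differences along each axis (mixed derivatives are omitted for brevity). Both vanish for affine fields, matching the property noted above:

```python
import numpy as np

def msd(ref, deformed):
    """Mean square difference similarity between two volumes."""
    return np.mean((ref.astype(float) - deformed.astype(float)) ** 2)

def smoothness_penalty(disp):
    """Squared norm of second finite differences of a (3, X, Y, Z)
    displacement field; zero for constant or linear (affine-like) fields."""
    total = 0.0
    for comp in disp:                       # x, y, z components
        for ax in range(3):
            d2 = np.diff(comp, n=2, axis=ax)
            total += np.sum(d2 ** 2)
    return total

def energy(ref, deformed, disp, alpha):
    """E(D) = E_sim(D) + alpha * E_reg(D)."""
    return msd(ref, deformed) + alpha * smoothness_penalty(disp)
```

In the actual method the gradient of both terms with respect to the control-point displacements is available analytically, which is what makes gradient-based optimization practical.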
Thus, in the tricubic interpolation from the integer lattice {(i, j, k)} to R³, the value of the interpolant at (x, y, z) depends on at most 64 lattice points. In order to discretize our deformation, the domain of the object volume is defined as:

Ω = { (x, y, z) : 0 ≤ x < X, 0 ≤ y < Y, 0 ≤ z < Z },

where X, Y and Z denote the object resolution. Let Φ denote an (n_x + 1) × (n_y + 1) × (n_z + 1) mesh of control points with uniform spacing, and D the deformation field described by Φ. Defining the starting positions of the non-deformed mesh points as φ⁰_{i,j,k}, the changed locations can be defined as φ_{i,j,k} = φ⁰_{i,j,k} + δφ_{i,j,k}, where i ∈ {0, 1, …, n_x}, j ∈ {0, 1, …, n_y}, k ∈ {0, 1, …, n_z}. The FFD transformation of a point p to its new position p′ can be written as the 3D tensor product of 1-D cubic B-splines:

δp(D, p) = Σ_{i=0}^{3} Σ_{j=0}^{3} Σ_{k=0}^{3} B_i(u) B_j(v) B_k(w) δφ_{l+i, m+j, n+k},   (8)

where δp(D, p) is the local deformation, u, v and w are the relative positions of p with respect to the nearest 64 control points, B_i, B_j, B_k represent the i-th, j-th and k-th basis functions of the cubic B-spline, and l, m, n are the integer indices of the control-point cell containing p, i.e. l = ⌊x/δ_x⌋ − 1, m = ⌊y/δ_y⌋ − 1, n = ⌊z/δ_z⌋ − 1 for control-point spacings δ_x, δ_y, δ_z. The number of parameters to be optimized is equal to 3 times the number of control points in the lattice. Because of the good localization property of B-spline functions, the optimization procedures can be applied locally. This allows for acceptable running times even for very dense lattices.
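Equation (8) can be sketched directly (interior points only; boundary handling and the control-point index convention are simplified assumptions of this illustration):

```python
import numpy as np

def bspline_basis(t):
    """The four cubic B-spline basis functions B_0..B_3 at relative
    position t in [0, 1); they sum to 1 (partition of unity)."""
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def ffd_displacement(p, phi, spacing):
    """Local FFD displacement at point p (eq. (8)): tensor product of
    1-D cubic B-splines over the 64 nearest control-point displacements.
    phi: (nx, ny, nz, 3) lattice of control-point displacements."""
    p = np.asarray(p, float)
    idx = np.floor(p / spacing).astype(int)   # cell containing p
    t = p / spacing - idx                      # relative position u, v, w
    Bu, Bv, Bw = (bspline_basis(c) for c in t)
    disp = np.zeros(3)
    for i in range(4):
        for j in range(4):
            for k in range(4):
                disp += (Bu[i] * Bv[j] * Bw[k]
                         * phi[idx[0] - 1 + i, idx[1] - 1 + j, idx[2] - 1 + k])
    return disp
```

Because each basis row sums to one, a uniform lattice displacement is reproduced exactly, which is a quick sanity check on the tensor-product evaluation.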

Minimization procedure and exclusion of false local minima
In order to deal with large displacements in the registration process, we use a classical incremental multiresolution procedure [18]. At the coarsest level (a 4 × 4 × 4 grid, with the data scaled down by a factor of 8) we model the maximum allowed deformations. At the finest level (a 25 × 25 × 25 grid at the full 3D image resolution, which for the reference model is 225 × 512 × 406 voxels) we model small deformations to fine-tune the registration quality (see Fig. 4). One of the most serious problems in volume registration is falling into false minima, resulting in false identification of quasi-periodic features of the registered volumes. In the case of feet volumetric image registration one could occasionally observe mismatched toes. In order to avoid this pitfall, the standard energy function has been multiplied by a problem-specific volumetric weight field.
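The coarse-to-fine scheme requires two ingredients: a data pyramid and a way to carry the deformation found at one level up to the next. A sketch of both (the function names and the level factors beyond those given above are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(volume, factors=(8, 4, 2, 1)):
    """Coarse-to-fine volume pyramid for incremental multiresolution
    registration: level i holds the data scaled down by factors[i]."""
    return [zoom(volume, 1.0 / f, order=1) for f in factors]

def upsample_displacement(disp, factor=2):
    """Carry a (3, X, Y, Z) displacement field to the next finer level:
    resample it spatially and rescale the vectors to finer-grid units."""
    up = np.stack([zoom(c, factor, order=1) for c in disp])
    return up * factor
```

The registration at each finer level is then initialized with the upsampled field from the previous one, so only residual small deformations remain to be optimized.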
A set S of points located in the problem areas of the reference data space has been selected by hand. The Euclidean distance function w(x, y, z) = d((x, y, z), S) is smoothed with a Gaussian kernel and normalized, giving the desired weight field W (see Fig. 5), used in the last two iterations, where the matching algorithm models the smallest local deformations. We change the value of the final matching function by applying the weight field pointwise to the similarity term:

E_W(D) = (1/|Ω|) Σ_{p∈Ω} W(p) ( I_ref(p) − I_ex(p + δp(D, p)) )² + α E_reg(D).

After the matching procedure we are able to transform our set of landmarks defined for the reference dataset and place them in the proper positions for the current patient's data.
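The weight-field construction described above (distance to the hand-picked set S, Gaussian smoothing, normalization) can be sketched as follows; sigma and the toy grid are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def weight_field(shape, problem_points, sigma=2.0):
    """Volumetric weight field W: Euclidean distance to the hand-picked
    problem-point set S, smoothed with a Gaussian kernel and normalized
    to [0, 1]."""
    mask = np.ones(shape, bool)
    for p in problem_points:
        mask[tuple(p)] = False        # distance is measured to these voxels
    w = distance_transform_edt(mask)  # w(x, y, z) = d((x, y, z), S)
    w = gaussian_filter(w, sigma)
    return w / w.max()

def weighted_msd(ref, deformed, W):
    """MSD similarity with the per-voxel weight field applied."""
    return np.mean(W * (ref.astype(float) - deformed.astype(float)) ** 2)
```

`distance_transform_edt` computes the distance to the nearest zero voxel, which is why the set S is marked with zeros in the mask.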

Method assessment
In order to assess the proposed method we need to evaluate the method itself and take into account its diagnostic, therapeutic and social impact as well as the final patient outcome [19,20]. However, this paper describes only the automatization of the normal foot parameterization procedure and is focused on the normal foot model as the outcome; being just one crucial part of a more complicated clinical procedure, it does not affect the patient directly. Therefore, the assessment is actually limited to the validation of the automatic method and the general economic impact related to evaluation time and cost, in the context of the evaluation of the geometric features of the normal foot skeleton.
As the geometric features are calculated directly on the basis of the landmark positions, it is more suitable for this paper to validate the limited method: automatic landmark positioning on the normal foot skeleton. A more detailed error analysis, including the propagation of the landmark positioning error to the feature errors, is described in [1].
The main objective of the proposed validation procedure is to assess whether the automated method provides landmark positioning accuracy and precision at a level comparable to the reference expert manual positioning.
Ten datasets were randomly selected from the project data population for the validation procedure (see section Data). All studies from the validation datasets were processed both manually and automatically with the proposed method.
The validation criteria selected for the comparison of landmark positioning were accuracy and precision, each compared between the manual and automated procedures.
The reference landmark positions for the accuracy validation were obtained by manual landmark positioning on all validation datasets performed subjectively by four qualified radiologists, while the reference landmark positions for the precision validation were obtained by four separate manual landmark positionings performed by one qualified radiologist on one selected validation dataset. A more detailed description of the manual procedure and the error calculations can be found in [1].

The accuracy validation criterion of the automated method was defined as the RMS metric between the mean position of the i-th landmark in the reference manual procedure, p^M_i, and its position in the automated procedure, p^A_i:

RMS_i = ‖ p^M_i − p^A_i ‖,

analyzed both in the x_i, y_i and z_i components of the difference vector and in the vector norm RMS_i, for each landmark. The expected accuracy of the automated method should be of the same order of magnitude and at a comparable level as the reference manual method accuracy, which was defined as the standard deviations of the manual procedure (manual landmark positioning performed by 4 different radiologists) for each landmark respectively. Table 1 presents the obtained mean (over the landmark population) accuracy results for both the automated method and the reference manual one, in each of the x, y and z components and in the distance magnitude (d).

The precision validation criterion of the automated method was defined as the standard deviation calculated from multiple runs of the procedure on a single dataset with the same parameters and initial conditions, while the manual method precision was defined as the standard deviation of the manual procedure (multiple manual landmark positionings performed by one radiologist), both calculated for each landmark respectively.
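Both validation criteria are simple per-landmark statistics; a sketch (array shapes and names are illustrative):

```python
import numpy as np

def accuracy_rms(manual_runs, automated):
    """Accuracy criterion: per-landmark component errors and the norm
    RMS_i between the mean manual position p_i^M and the automated
    position p_i^A.

    manual_runs: (R, N, 3) positions from R radiologists
    automated  : (N, 3) positions from the automated procedure
    """
    mean_manual = np.mean(manual_runs, axis=0)   # p_i^M, shape (N, 3)
    diff = mean_manual - np.asarray(automated)   # x, y, z errors per landmark
    return diff, np.linalg.norm(diff, axis=1)    # RMS_i for each landmark

def precision_std(runs):
    """Precision criterion: per-landmark standard deviation of positions
    over repeated runs (R, N, 3) of the same procedure."""
    return np.std(np.asarray(runs, float), axis=0)
```

For the automated method `precision_std` over identical re-runs is zero by construction, which is the full-reproducibility property discussed below.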
However, in accordance with the theoretical expectations, the automated method is fully reproducible, as it uses no heuristics; thus its precision depends on the precision of the reference model, which, being based on multiple manual procedures, has an increased precision due to landmark position averaging. The manual reference method is subject to human errors, human precision and interface precision, and results in a mean standard deviation of the landmark positions of 0.59, 0.60 and 0.71 mm for the x, y and z positions respectively.
The accuracy of the automated method describes quantitatively how similar the positions of the landmarks obtained automatically are to the reference positions obtained manually. As the results indicate an accuracy acceptably close to that of the manual method, the validation goal in this criterion is met.
The precision of the automated method describes quantitatively how reproducible the landmark positions are. As the automated method is fully reproducible under the same conditions and its precision is related to the reference model, both methods provide comparable results, satisfying the validation goal also in the precision criterion.
Accordingly, the validation hypothesis of obtaining accuracy and precision levels in the automated procedure similar to those in the manual procedure holds.
Further assessment of the described automated method covers the economic issues related to the time and cost of the whole procedure. As described in the Methodology section, the full feature set of our foot model requires 85 landmarks in three-dimensional space. The reference procedure of manual landmark positioning requires, for a skilled radiologist, two hours of work time on average, utilizing one radiological workstation. Taking into account only the labour costs, the total cost of the manual procedure would be approximately 80 EUR (using the average hourly pay of a skilled radiologist in Poland). The proposed automated procedure, at the tested dataset resolution and required accuracy, needs less than eight hours of CPU time (measured on a 2.33 GHz Intel Xeon E5410 CPU core), giving a wall-clock computation time under 1 hour on an 8-core workstation or server (measured on a workstation with 2 × 4-core Intel Xeon E5410 2.33 GHz CPUs). Thus, together with the reduction of the processing time by a factor of two, the total cost of the automated procedure does not exceed 1 EUR (the cost of a CPU hour is estimated from server amortization and power consumption). Even taking into account the prospective requirement of human verification of the automatically calculated results (no more than a quarter of an hour for a radiologist), the total cost of evaluating a single dataset with the automated method is significantly smaller compared to the manual one. Table 2 presents the total procedure cost and time for both the automated and the reference manual method.

Results
As presented in the previous sections, the proposed automated method of feature assessment of the healthy human foot from a volumetric MRI dataset is based on geometric calculations of the defined properties on the basis of landmark positions. The landmark positioning process is built on landmark transformation from the landmarked reference volume to the processed volume, where the transformation function is calculated using a registration algorithm based on an optimization process. The validation outcome of the proposed automated method was positive; the results obtained in a single procedure (a single matching between the processed dataset and the reference dataset) are at an acceptable level in comparison with the manual procedure and provide an exceptional economic advantage. The accuracy of landmark positioning obtained by our method is similar to that of other registration-based algorithms presented in the literature [21], which also have a positioning accuracy of approximately 1.4 mm at similar spatial data resolutions. The calculation time is harder to compare objectively between different machines, but it still leaves much room for improvement and optimization, or for specialized hardware acceleration (e.g. GPGPU). Most importantly, the accuracy of landmark positioning, propagated to the accuracy of the feature values, gives a resolution of the parameterization satisfactory for clinical needs in almost all defined features, as confirmed by skilled radiologists and orthopedists. More details about all the feature value accuracy statistics can be found in [1]; here we focus only on the most important elements of the model to present the results.
The described data population (see the Data section), counting 100 MRI datasets of the healthy human foot, was the base for the creation of the normal foot model utilizing the proposed automated method of feature calculation. One dataset was subjectively selected as the reference one and its landmarks were positioned manually by 6 skilled radiologists, constituting the reference landmark positions calculated as the mean position for each landmark respectively.
From the remaining 99 datasets, 5 were initially rejected due to internal data errors, and 94 datasets were selected to be processed by our method. The fully automated procedure succeeded in 79 out of the 94 cases, while the remaining 15 cases exited the procedure at very early stages. The reason for the failures was the very poor quality of the data, related either to a very low signal-to-background-noise ratio or to very high spatial signal intensity differences in the MRI datasets. In order to still utilize the faulty datasets and to improve the method's robustness, further work is in progress to implement automatic spatial data normalization techniques and automatic background noise removal, as well as robustness validation in the presence of noise and other disturbing factors. Fig. 6 presents examples of the problematic data. However, in all 79 cases of proper data the implemented method completed successfully, parameterizing each processed dataset as a point in the 40-dimensional feature space. All feature results were analyzed statistically, including accuracy estimation. The relations between the individual features were also calculated and analyzed, including correlation analysis and PCA (principal component analysis). All the resulting data were processed towards clustering both in the feature space and in the population space, measuring feature dependencies and the grouping of population members respectively.
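The PCA step applied to the (subjects × features) matrix can be sketched via an SVD of the centered data (the function name and toy data are illustrative):

```python
import numpy as np

def feature_pca(feature_matrix):
    """PCA of a (subjects x features) matrix via SVD of the centered data.
    Returns the principal directions (rows of Vt) and the explained
    variance ratios."""
    X = np.asarray(feature_matrix, float)
    X = X - X.mean(axis=0)                 # center each feature
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)    # variance captured per component
    return Vt, explained
```

For two perfectly correlated features the first component captures essentially all variance, which is the kind of dependency the correlation analysis above is meant to reveal.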
According to the performed analysis, as well as subjective specialists' ratings, each feature was assigned a weight reflecting its importance in the whole normal foot model. On the basis of the complete results and the analysis outcome it was possible to define the normal foot model as a set of features accompanied by weights, average values, variability ranges and other statistical parameters for each particular feature. It was also feasible to define a weighted feature-space norm for measuring the distances of parameterized individuals from the average one in the feature space.
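The weighted feature-space norm and the out-of-range check described above reduce to a few lines; the exact weighting scheme used in the model is given in [1], so the formula below is a plain weighted Euclidean norm shown for illustration:

```python
import numpy as np

def weighted_distance(features, model_mean, weights):
    """Weighted feature-space norm: distance of an individual's feature
    vector (e.g. the 40 model features) from the normal-model average,
    with per-feature importance weights."""
    d = np.asarray(features, float) - np.asarray(model_mean, float)
    return float(np.sqrt(np.sum(np.asarray(weights, float) * d ** 2)))

def out_of_range(features, low, high):
    """Indices of features falling outside the model's variability ranges,
    used to flag the out-of-range features of a deformed foot."""
    f = np.asarray(features, float)
    return np.where((f < np.asarray(low, float)) | (f > np.asarray(high, float)))[0]
```

A study could then be classified by thresholding the weighted distance and reporting the flagged feature indices.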
Utilizing all of the above, we defined the selection rules for the automatic qualitative assessment of a processed foot study as healthy or not, and for the quantitative description of the deformation degree with indications of the out-of-range features. The complete results of the normal foot model are described in detail in [1].
The generated normal foot model is currently used as a reference model for the ongoing project works for deformed and post-surgery feet assessment, as well as for formulation of best surgery parameters estimation methodology.
To present the model results briefly, Table 3 shows several selected features, the most important for the hallux valgus deformation (see the Methodology section for the feature descriptions), together with the calculated basic statistics.

Conclusions
We proposed an automated method for healthy foot skeleton parameterization in accordance with the defined geometric features, using three-dimensional MRI data. The implemented automatic procedure calculates the feature values in the processed dataset on the basis of landmark positions transformed from the manually landmarked reference dataset, where the transformation field is calculated by volumetric data registration based on an optimization process. Different energy functionals for the optimization were tested, including different similarity measures and different penalty terms. The best results in robustness, accuracy and precision, as well as in calculation time, were obtained with a multi-step, multiresolution registration approach using an initial affine registration and a fine-tuning elastic registration with the Mean Square Difference similarity measure and a smoothness term as the deformation penalty. An additional spatial weighting function was introduced to avoid falling into false minima.
The implemented method was validated using reference landmark positions obtained manually by skilled radiologists. The applied assessment methodology provided a positive outcome, including the evaluation of the accuracy and precision of the implemented method at a level similar to the reference manual method, and more than satisfactory time and cost benefits.
The proposed method of geometric feature assessment based on landmark positions was applied to the population of healthy foot MRI datasets, calculating the parameterization of the healthy foot skeleton utilized in the normal foot model. However, the applicability of the method is much wider, both in the context of data and of applications. As the procedure itself is not limited to MRI data, it can be applied to different modalities as well (CT, ultrasound, etc.), requiring only a new parameterization of the registration algorithm. From the application point of view, the proposed method can be used for any other landmark-based geometry assessment, requiring only one reference dataset to be manually landmarked and the geometric feature calculations to be defined.
The benefit of the automated method is proportional to the complexity of the geometry and the number of landmarks, as it provides accuracy and precision at a level comparable to manual landmark positioning but, on a large scale, consumes less time and incurs a much smaller cost.
Further studies are required to increase the method's robustness to corrupted input data and to the presence of disruptive factors, especially by introducing data preprocessing taking into account global contrast and intensity equalization as well as noise removal. We are also addressing some registration modifications to increase the accuracy of landmark localization.