Automatic extraction of upper body landmarks using Rhino and Grasshopper algorithms
Fashion and Textiles volume 9, Article number: 36 (2022)
Abstract
The aim of this study is to develop algorithms for automatic landmark extraction on women with various upper body types and body inclinations using the Grasshopper algorithm editor, which enables the user to interact with the 3D modeling interface. First, 15 landmarks were defined based on the morphological features of 3D body surfaces and clothing applications, from which automatic landmark extraction algorithms were developed. To verify the accuracy of the algorithms on various body shapes, this study determined criteria for key body shape factors (BMI, neck slope, upper body slope, and shoulder slope) that influence each landmark position, classified them into body shape groups and sorted the scan samples for each body type using the 6th SizeKorea database. The statistical differences between the scan-derived measurements and the SizeKorea measurements were compared, with an allowable tolerance of ISO 20685. In the case of landmarks with significant differences, the algorithm was modified. It was found that the algorithms were successfully applied to various upper body shapes, which improved the reliability and accuracy of the algorithm.
Introduction
Acquiring accurate anthropometric data in a short time is critical for the apparel industry, since up-to-date anthropometric data can assist ready-to-wear apparel companies in identifying sectors of target market customers, and in analyzing and updating their sizing systems and patterns (Kim et al., 2016). As a traditional method to collect anthropometric data, body landmarks are manually marked on the body and the measurements are collected using direct measurement instruments such as measuring tape and calipers (Lu & Wang, 2008; Markiewicz et al., 2017). However, this process is time consuming, and only allows for one-dimensional measurements such as surface lengths and circumferences (Tsoli et al., 2014; Tyler et al., 2012). Therefore, 3D body scanning technology has been used in the apparel field since the late 1990s since it can capture the surface of the whole body without human intervention and generate a 3D scan in a short time.
Most commercial whole-body scanner companies such as [TC]2, SizeStream, and Human Solutions have developed automated measuring software to collect multi-dimensional measurement data for various applications such as national anthropometric surveys (Wang et al., 2007). Several studies have analyzed the accuracy of this automatic measuring software by comparing the measurements derived from the software with measurements taken manually (Clauser et al., 1988; Kouchi & Mochimaru, 2011; McKinnon & Istook, 2001; Xia et al., 2018). They found that the biggest problem with automatic 3D body measuring software is that it cannot detect landmark locations accurately, resulting in inaccurate measurements. Many studies have developed methods for automatic landmark extraction from 3D body scans (Allen et al., 2003; Anguelov et al., 2004; Au & Yuen, 1999; Azouz et al., 2006; Jo et al., 2014; Leong et al., 2013; Liu et al., 2010; Suikerbuik et al., 2004). However, these algorithms were designed for standard body shapes, so they could not be sufficiently verified for unconventional body shapes. Han and Nam (2011) developed an algorithm that allows for flexibility in upper body shape differences, but they considered only six landmarks.
The methods of developing algorithms in previous studies can be classified into three categories: template matching (Allen et al., 2003; Au & Yuen, 1999; Liu et al., 2010; Suikerbuik et al., 2004), statistical analysis, such as probabilistic reasoning models (Anguelov et al., 2004; Azouz et al., 2006), and geometric analysis of the body’s surface (Han et al., 2010; Jo et al., 2014; Leong et al., 2013).
Among these methods, the template matching technique and statistical method should be based on a large amount of scan data. Also, to add a new landmark or change the definition of the landmark, the development process should be re-started from the beginning. However, the method based on analysis of geometric characteristics of the landmarks on a 3D body surface can be programmed by transforming the landmark definition into logical mathematical definitions, so it is judged to be a method with high efficiency and stronger adaptability than other methods. Therefore, this study also intends to develop an automatic landmarking algorithm based on this method.
The current study focused on the algorithm editor, Grasshopper, a plug-in of the Rhinoceros 3D® (commonly abbreviated to Rhino) modeling software, as a tool to develop algorithms. The advantage of Grasshopper is that it enables the user to interact with the 3D modeling interface directly, so it is possible to complete all the following processes in one interface: scan editing, landmark identification, measuring, 2D patternmaking, and parametric 3D modeling. If the automatic landmark identification algorithm is successfully developed, it will be possible to edit the armpit or crotch part of the body scan, which should be done before finding the landmark, in one interface. In addition, the landmarks found by the algorithm can be directly used for parametric modeling, unfolding 3D body surfaces into 2D clothing patterns, and further enabling modification of the clothing design. However, no previous study has adopted this algorithm editor for the automatic extraction of landmarks.
Thus, the aim of this study was to develop algorithms for automatic landmark extraction from women with various body types using the Grasshopper algorithm editor. This study identified 15 landmarks based on the morphological features of 3D body surfaces and clothing applications. To verify the accuracy of the algorithms on various body shapes, this study determined the criteria of key body shape factors (BMI, neck slope, upper body slope, and shoulder slope) that influence each landmark position to classify body shape groups and sorted the scan samples of each body type recorded in the 6th SizeKorea database (Korean Agency for Technology & Standards, 2012). The statistical differences between the algorithm-derived measurements and the SizeKorea measurements were compared with the allowable tolerance specified by International Organization for Standardization [ISO] 20685. In the case of landmarks with significant differences, the algorithm was modified to improve the reliability and accuracy of the algorithm.
Automatic landmark identification
The approaches of automatic body landmarking found in the literature can mainly be classified into three methods: template matching, statistical analysis, and geometric analysis of the body’s surface. The first method, template matching, can match an individual body model to a template model with landmarks marked on it and determine the degree of correspondence between them by using a similarity function (Allen et al., 2003; Au & Yuen, 1999; Suikerbuik et al., 2004). Suikerbuik et al. (2004) created a set of templates for each landmark that contains information about the location of the landmark. They used bounding boxes during the creation process to select a small subset of points containing their hand-picked landmark locations, applying template matching to these regions of interest. Then, to determine the goodness of fit of the template to the region of interest, they developed a similarity function that could calculate “the max–min distance (slightly altered Hausdorff distance)” between the two points of a pair, which was used to calculate the “average-min” Euclidean distance. When the similarity function no longer returned a smaller value, the reiterative alignment of the point clouds could stop. With the aid of the location of the landmark indicated on the template, they reconstructed the location of the landmark in the region of interest. They found that the template matching method generated consistent results. However, because this method was so time consuming, they used it to detect only five landmarks (the sellion and the four malleoli points).
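The "average-min" distance mentioned above can be sketched as follows. This is a hypothetical Python illustration of the averaged one-sided nearest-neighbor distance described in the text, not Suikerbuik et al.'s actual implementation:

```python
import math

def avg_min_distance(A, B):
    # For each point in A, find its nearest neighbor in B, then average those
    # minimum Euclidean distances -- an averaged one-sided variant of the
    # Hausdorff distance, as described for the similarity function.
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(p, q) for q in B) for p in A) / len(A)
```

In an iterative alignment, this score is recomputed after each transform and the loop stops once it no longer decreases.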
The second approach was based on statistics used to investigate related information about and between the landmarks. Probabilistic reasoning models such as the neural network and Markov model were adopted to obtain the relative degrees, thus realizing the target landmark identification. Anguelov et al. (2004) developed algorithms to set an instance model into a template model by optimizing a joint probabilistic model over all point-to-point correspondences between them. They applied this algorithm to 3D body scans in order to select around 200 corresponding points between the template model and the instance models. These points were then used as landmarks to guide the deformation of the template model to fit their corresponding instance models. For each vertex of the instance model, the algorithm assigns a corresponding vertex on the template model. Azouz et al. (2006) also developed a method based on learning landmark characteristics and the spatial relationships between them from a set of body scans where the landmarks are identified. They positioned the landmarks by formalizing the learned information into a pair-wise Markov network. However, this approach has the limitation that it can only provide an approximation of the anthropometric correspondence between different body models.
The third method is based on an analysis of the geometric characteristics of the 3D body surfaces around each landmark. Leong et al. (2013) interpreted the descriptions of 21 body landmarks and 35 feature lines defined by ASTM (1999) and ISO (1989) standards into logical mathematical definitions. They also recommended initially sectioning the scan into the key areas of the torso, arms, and legs, and employed image processing, which extracts the Sobel curve (body contour plots) for edge detection on a 3D body model’s torso, identifying the landmarks using computational geometry techniques. They developed automated feature extraction software using the C++ programming language. Less than 2 min of processing time was taken for body feature extraction starting from a raw point cloud. However, this algorithm was verified on only seven Asian female adults.
Han et al. (2010) developed an automatic landmark extraction software program in C++ to extract five landmark locations (bust, underbust, waist, abdomen, and hip point) for various torso shapes. For example, a bust point was located on the first point where the slope degree changed from a minus to a plus value on the side silhouette from up to down. They highlighted the importance of analyzing the surface geometry for isolating key locations, especially when there is noise in the point placement and possible occlusions in the data (Gill, 2015).
Jo et al. (2014) also used the surface geometry of a point cloud to segment the body and then identify key landmarks such as the axilla, neck points, and side waist point. They defined a quasi-boundary point sequence (QBPS) to find the boundary of the body, and categorized body scan data by clustering the features extracted from the predefined QBPS. Then, they used a non-uniform rational B-spline (NURBS) approximation to find the landmarks of the segmented upper torso. Their method of applying NURBS approximation superseded previous landmark detection methods in that it generated more robust and reliable results regardless of the scan data’s fidelity.
The template matching technique and statistical methods such as the probabilistic reasoning model must be based on a large amount of scan data (Gill, 2015). In addition, if a new landmark is added or the definition of a landmark needs to be changed, the development process should be re-started from the beginning. The approach based on geometric analysis also has a disadvantage; the robustness of this kind of method is not sufficient since it can be influenced by various factors. However, if the physical landmark definitions are clearly determined, the consistency and reliability of the landmarking would improve (Gill, 2015). Also, if the landmark definitions are transformed into logical mathematical definitions and programmed, then this method has higher efficiency and stronger adaptability than other methods.
Rhinoceros 3D® software and the Grasshopper plugin
To develop algorithms to extract body landmarks automatically, there are two possible approaches: using a commercial system or developing a new one in a programming language such as C++. An examination of past studies on automatic landmark extraction software shows that most developed their algorithms in C++ (Han et al., 2010; Leong et al., 2013; Lu & Wang, 2008; Niu et al., 2011).
However, commercial software has advantages such as reliable interfaces and relatively easy use for general users. This study focused on Robert McNeel & Associates’ Rhinoceros 3D® (Rhino) software and its graphic algorithm editor, Grasshopper, which runs as a plug-in (McNeel, 2019). In fields where precise work is needed, such as industrial design (e.g., aircraft), architectural design, and craft design (e.g., furniture and accessories), NURBS-based 3D modeling software such as Rhino has been most widely used (Hsu et al., 2015; Kwon et al., 2017; Shi & Yang, 2013).
A Grasshopper algorithm can be implemented by arranging predefined components (e.g., icons) serving as commands and connecting wires between components, without writing lengthy program code (Eltaweel & Yuehong, 2017). Input and output parameter values can be entered through components or easily changed by dragging the mouse pointer. Grasshopper has also attracted industry and research attention for its ability to produce flexible complex geometric shapes (Shi & Yang, 2013). In particular, it is the most common software for parametric design in the architectural field. Grasshopper can deal with many parameters simultaneously and provide fast results compared to other parametric software such as 3ds Max (Eltaweel & Yuehong, 2017).
In the apparel field, these programs have only been used for 3D printing design research and accessory design (Hsu et al., 2015; Kwon et al., 2017; Shi & Yang, 2013). However, no studies have adopted this algorithm editor for automatic landmark identification on 3D body scans. Rhino and the Grasshopper editor make it possible to view the details of changes to the algorithm directly in the 3D Rhino interface. Also, the Grasshopper algorithm interface enables the user to integrate all of the following processes: scan editing (e.g., removing noise or patching missing parts), landmark identification, measuring, parametric 3D body scan modeling, and conversion of 3D body models into 2D clothing patterns.
The objective of this study was to develop algorithms for automatic landmark extraction for women with various upper body types using the Grasshopper algorithm. To verify the accuracy of the algorithms on various body shapes, this study determined the criteria of key body shape factors that influence each landmark position to classify body shape groups and sorted scan samples in each body type from the 6th SizeKorea database (Korean Agency for Technology & Standards, 2012). The statistical differences between the algorithm-derived measurements and the SizeKorea measurements were compared with the allowable tolerance specified by ISO 20685. The specific objectives were as follows:
1. To define 15 landmark positions for the upper bodies of women ages 20–59 based on an analysis of the morphological characteristics of 3D body surfaces.
2. To develop algorithms for automatic landmark identification based on 3D scan data from the 6th SizeKorea dataset using the Grasshopper algorithm editor.
3. To determine the criteria of key body shape factors that influence each landmark position to classify body shape groups.
4. To verify the accuracy of the algorithms on various body shapes by comparing the statistical differences between the scan-derived measurements and the SizeKorea measurements to the allowable tolerance of ISO 20685.
5. To improve the reliability and accuracy of the algorithm by modifying the algorithms of the landmarks showing differences exceeding the ISO criteria.
Methods
Methods for automatic landmark identification
The 15 landmarks on the upper body were selected for this study based on research studies on body type analysis and clothing patternmaking methods. The definitions referred to the 6th SizeKorea Project Report (Korean Agency for Technology & Standards, 2012), but since those definitions were written for the direct measuring method, only three landmark definitions (front mid-axilla point, back mid-axilla point, and bust point) were adopted without change. The rest of the landmarks were newly defined by modifying the SizeKorea definitions, since the landmark search should be based on the morphological characteristics of the 3D body surface (Table 1).
The algorithm in this study was developed using Grasshopper, an add-in algorithm editor of the Rhinoceros® program. In Grasshopper, the algorithm is performed by arranging “components” that correspond to pre-defined commands (e.g., icons, connecting lines, and arrows). When a “wire” is connected between “components” that serve as input and output parameters, modeling can then be conducted intuitively in a continuous manner. Parameter values can be inputted through components or easily changed by dragging the mouse pointer. This study transformed the definitions of the landmarks into logical mathematical definitions and determined input and output parameters for connecting them with wires (Fig. 2).
Verification method of algorithm accuracy
To verify the accuracy of the algorithms on various body shapes, the 3D body scans of 819 females with ages ranging from 20 to 59 years were extracted from the 6th SizeKorea dataset (Korean Agency for Technology and Standards, 2012). The researchers tried to include data on various body shapes. First, body shape factors that influence each landmark position were selected based on Song et al. (2021) and Han and Nam’s (2011) studies shown in Table 2. Body mass index (BMI) was selected as the influential factor for the neck, axilla, bust, and waist landmarks. Additionally, the neck slope was used as the body shape factor for the neck points. The shoulder slope was selected as the body shape factor for the shoulder point. The upper body slope was used as the body shape factor for the bust point and waist point.
Second, this study determined the criterion values that could be used to classify the body types into three groups for each shape factor (Table 2). For BMI, this study used the categories defined by the World Health Organization (WHO). The BMI values of the 819 extracted scans were calculated: 8.2% of the scans belonged to the thin group, 75.8% to the normal group, and 16% to the obese group in the SizeKorea dataset (Table 3). Therefore, at a similar rate, this study sorted a total of 120 scans: 20 for the thin group (16.7%), 80 for the normal group (66.7%), and 20 for the obese group (16.7%).
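The BMI grouping step can be sketched as below. The cut-off values here (18.5 and 25.0) are illustrative assumptions in the spirit of the WHO categories; the study's actual criterion values are given in Table 2:

```python
def bmi(weight_kg, height_cm):
    # BMI = weight (kg) / height (m) squared
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

def bmi_group(value, thin_cut=18.5, obese_cut=25.0):
    # Illustrative three-way split into the study's thin/normal/obese groups;
    # the exact thresholds used by the study are defined in Table 2.
    if value < thin_cut:
        return "thin"
    if value >= obese_cut:
        return "obese"
    return "normal"
```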
For the shoulder angle, this study utilized the 25th and 75th percentile values of the SizeKorea dataset as the criteria to classify the three body types (Table 2). However, percentiles for the neck slope and upper body slope were not included in the SizeKorea dataset, so this study calculated the percentiles as shown in Table 2.
To verify the accuracy of the automatic landmark extraction algorithm, seven scan-derived height measurements were compared with the SizeKorea height measurements (Table 4).
The height values of the axilla folds and mid-axilla points were not included in the SizeKorea dataset, but they were calculated using values from the axilla height, so accuracy analysis of the axilla position was considered sufficient. The scapular point and back protrusion point were newly defined by this study based on the position of the back shoulder dart in a bodice pattern presented in previous studies or patternmaking textbooks, so they were not included in the SizeKorea dataset.
For extraction of the height values, this study developed an automatic measuring algorithm using the Grasshopper algorithm editor, and the measurements extracted from 120 scans were saved in a CSV file.
The whole process for the automatic search of upper body landmarks and automatic measurements is shown in Fig. 1.
The algorithm-derived measurements and SizeKorea measurements were compared by performing paired t-tests using the IBM SPSS Statistics 21 package. Then, the statistical differences between the algorithm-derived measurements and the SizeKorea measurements were compared with the allowable tolerance specified by International Organization for Standardization [ISO] 20685. In the case of landmarks with significant differences, the algorithm was modified to improve the reliability and accuracy of the algorithm.
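The verification logic, a paired comparison whose mean difference is then checked against the ISO 20685 tolerance, can be sketched as follows. This is a simplified stand-in for the SPSS paired t-test, and the critical t value used here is an illustrative assumption:

```python
import math
from statistics import mean, stdev

def needs_modification(algo_cm, ref_cm, tol_cm=0.4, t_crit=2.0):
    # Paired t statistic on the per-subject differences between the
    # algorithm-derived and SizeKorea measurements. The landmark algorithm is
    # flagged for modification only if the difference is both statistically
    # significant and larger than the ISO 20685 allowable tolerance (0.4 cm).
    d = [a - r for a, r in zip(algo_cm, ref_cm)]
    t = mean(d) / (stdev(d) / math.sqrt(len(d)))
    return abs(t) > t_crit and abs(mean(d)) > tol_cm
```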
Results and discussion
Grasshopper algorithm for automatic landmark identification
Neck points
This study developed algorithms to search for the anterior neck point, side neck point, and back neck point automatically through the following process (Fig. 2):
1. Among the horizontal sections dividing the region between 150 and 350 mm from the top at 2 mm intervals, the cross section (green line in Fig. 2) with the shortest circumference was extracted.
2. Among the cross sections extracted by fixing the center of the extracted horizontal section and tilting it forward 1° per extraction, the closed curve with the minimum circumference was set as the “minimum neck circumference” (blue line in Fig. 2).
3. The back point of the lowest cross section (orange line in Fig. 2) with a deviation from the “minimum neck circumference” of less than 9% was set as the back neck point. The left and right points were set as the temporary side neck points to set the back neck circumference line.
4. The most recessed point in the vertical section passing through the front neck point of the “minimum neck circumference” was set as the final front point. The line connecting the front point, the temporary side points, and the back neck point was set as the neck circumference line (red line in Fig. 2).
5. The intersection point of the line passing through the bisecting point of the neck thickness and the neck circumference line (red line) was reset as the final side neck point.
6. The back (orange line in step 3) and front neck circumference lines (red line in step 5) were combined to set the final neck circumference line.
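Steps 1–2 above amount to a two-stage minimization, which can be sketched as follows. The `circumference(z, tilt)` sampler is a hypothetical stand-in for slicing the scan mesh in Grasshopper:

```python
def shortest_section_height(circumference, z_min=150, z_max=350, step=2):
    # Step 1: among horizontal sections at 2 mm intervals in the search
    # region, take the height with the shortest circumference.
    return min(range(z_min, z_max + 1, step), key=lambda z: circumference(z, 0))

def min_neck_circumference(circumference, z, max_tilt_deg=30):
    # Step 2: tilt the section forward 1 degree at a time about its fixed
    # center and keep the closed curve with the minimum circumference.
    best_tilt = min(range(max_tilt_deg + 1), key=lambda t: circumference(z, t))
    return best_tilt, circumference(z, best_tilt)
```

The maximum tilt of 30° is an assumed search bound; the study describes only the 1°-per-extraction tilting.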
To verify the accuracy of the algorithms for extracting neck points on various body shapes, t-tests between the SizeKorea measurements and the algorithm measurements for the neck point heights were performed for the body type groups divided by BMI (Table 5) and neck slope (Table 6). When statistical differences between the scan-derived measurements and the SizeKorea measurements were found, they were compared with the maximum allowable tolerance (0.4 cm) of ISO 20685.
First, when examining the accuracy of the neck position according to the body types divided by BMI, the algorithm successfully identified the position of the neck points except for the back neck point of the thin shape group. The back neck point extracted by the algorithm was located 1.1 cm above that of the SizeKorea data. Since this value is larger than the ISO tolerance (0.4 cm), it was necessary to modify the algorithm. The original algorithm set the back neck point at the back point of the lowest cross section (orange line in Fig. 2) among the cross sections with a deviation from the “minimum neck circumference” (blue line) of less than 9%. However, the thin group tended to have a smaller “minimum neck circumference” than the other groups, so the back of the neck was positioned significantly higher. After re-examining the back neck location in the scan data of the thin group, it was found that the back neck point was located more accurately when the 9% threshold was changed to 11%, so the algorithm for the thin group was modified. When the t-test was performed again, there was no significant difference between the SizeKorea value and the algorithm value. Finally, it was found that all neck points were well extracted in all body shape groups, as shown in Fig. 3.
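The modification can be captured as a group-dependent deviation threshold. In this sketch, the section list is a hypothetical data structure; the 9%/11% values come from the study's re-examination:

```python
def back_neck_threshold(bmi_group):
    # The thin group's smaller minimum neck circumference places the back of
    # the neck higher, so a looser 11% deviation threshold is used there;
    # all other groups keep the original 9%.
    return 0.11 if bmi_group == "thin" else 0.09

def back_neck_section(sections, min_circ, bmi_group):
    # sections: (height_mm, circumference) pairs; pick the lowest section
    # whose circumference deviates from the minimum neck circumference by
    # less than the group's threshold.
    tol = back_neck_threshold(bmi_group)
    ok = [z for z, c in sections if abs(c - min_circ) / min_circ < tol]
    return min(ok)
```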
Second, when examining the accuracy of the neck position according to the body types divided by the neck slope, the algorithm successfully identified the position of the neck points except for the front neck point of the neck forward group (Table 6). The front neck point extracted by the algorithm was located 1.1 cm above the SizeKorea position. The problem in the neck forward group arises because when the neck is bent forward, the chin is pushed down and the dent at the front of the neck is not evident. Therefore, the algorithm was modified to move the front neck point 1.1 cm down for the neck forward group (Fig. 4).
Shoulder point, axilla point, front/back axilla point
The process for identifying the armscye line and shoulder point is as follows (Fig. 5):
1. Extract the cross section at the point where the number of cross sections changes from three to one when extracting the cross sections of the body upward from the waist (green line in Fig. 5).
2. Set the mid-point between the torso point and the arm point at the height of this cross section as a temporary axilla point.
3. Make a section from this point toward the shoulder and extract the closed green curves while tilting the angle leftward and rightward by 1° until only one closed curve is created.
4. Set the average of the coordinate values of the highest points of the extracted closed curves as the shoulder point, and set the closed curve passing through this point and the temporary axilla point as the armscye line (black line).
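Step 1, detecting where the section count drops from three (torso plus two arms) to one, can be sketched as below. The `section_count(z)` callable is a hypothetical stand-in for counting the closed curves in the horizontal slice at height z:

```python
def merge_height(section_count, z_start, z_end, step=2):
    # Scan upward from the waist; return the first height at which the three
    # separate cross sections (torso and both arms) merge into one, i.e. the
    # slice used to place the temporary axilla point.
    prev = section_count(z_start)
    for z in range(z_start + step, z_end + 1, step):
        cur = section_count(z)
        if prev == 3 and cur == 1:
            return z
        prev = cur
    return None
```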
To verify the accuracy of the algorithm for extracting the shoulder point on various body shapes, t-tests between the SizeKorea measurements and the algorithm measurements for the shoulder point height were performed for the body type groups divided by the shoulder slope (Table 7). It was found that the algorithm successfully identified the shoulder point for the raised shoulder and sloped shoulder groups. However, the algorithm-identified shoulder point of the normal shoulder group was located 0.8 cm above the SizeKorea location. The reason for the difference found in the normal shoulder group is that the shoulder point of the SizeKorea data was defined from an anatomical point of view (the most lateral point of the acromial process of the scapula), whereas the shoulder point detected by the algorithm was defined from a clothing construction point of view (the highest point of the armscye line). As shown in Fig. 6, the algorithm-extracted shoulder point was considered potentially more suitable than the SizeKorea shoulder point, so the algorithm was not modified.
Then, the process of extracting the axilla point, front axilla point, and back axilla point from the armscye is as follows (Fig. 7):
1. Set the lowest point of the armscye line as the axilla point (c) and create horizontal sections upward from this axilla point. Set the front and back recessed points at the height where the deviation between the horizontal lengths of two adjacent horizontal sections is smaller than 0.8 as the front axilla fold point (a) and the back axilla fold point (b).
2. Set the bisecting point of the height between the shoulder point and the front axilla fold point as the front mid-axilla point (d), and set the bisecting point of the height between the shoulder point and the back axilla fold point as the back mid-axilla point (e).
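The fold-point test in step 1 and the bisection in step 2 can be sketched as follows; the width samples and units are hypothetical:

```python
def axilla_fold_index(widths, tol=0.8):
    # widths: horizontal section lengths sampled upward from the axilla point;
    # the fold is reached at the first pair of adjacent sections whose widths
    # differ by less than tol.
    for i in range(1, len(widths)):
        if abs(widths[i] - widths[i - 1]) < tol:
            return i
    return None

def mid_axilla_height(shoulder_z, fold_z):
    # Step 2: the mid-axilla point bisects the height between the shoulder
    # point and the axilla fold point.
    return (shoulder_z + fold_z) / 2
```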
To verify the accuracy of the algorithms for the axilla point on various body shapes, t-tests between the SizeKorea measurements and the algorithm measurements were performed for the body type groups divided by BMI (Table 8). The axilla points identified by the algorithm were 3.7–6.8 cm lower than the SizeKorea location, and the more obese the subject, the lower the armpit point detected by the algorithm. The reason is that when scanning obese subjects, the arms tend to be attached to the torso, occluding the armpit area.
Therefore, the algorithm was modified to move the axilla point up 3.7 cm for the thin group, 5.2 cm for the normal group, and 6.8 cm for the obese group. As a result, it was found that the position of the axilla point discovered by the algorithm became similar to that of the SizeKorea data.
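The correction reduces to a group-dependent vertical offset, sketched here with the offsets reported above:

```python
# Group-dependent upward corrections (cm) compensating for the occluded
# armpit region in the scans, as determined by the study.
AXILLA_OFFSET_CM = {"thin": 3.7, "normal": 5.2, "obese": 6.8}

def corrected_axilla_height(raw_height_cm, group):
    # Move the detected axilla point up by the group's offset.
    return raw_height_cm + AXILLA_OFFSET_CM[group]
```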
Waist points
To define the waist position, this study compared the following four definitions suggested in the literature: (1) the most recessed point on the back of the body, (2) the point of minimum width, (3) the point of minimum circumference, (4) the point of minimum thickness (Fig. 8).
When the waist position was set as the “most recessed point on the back of the body,” the position was judged as appropriate. However, for body shapes 2 and 5 in Fig. 8, it was difficult to find the point of minimum width in the waist region, or the waist position tended to be set higher. When the waist position was set at the minimum circumference point, the height of the position was similar to the height of the “minimum width point.” When the waist position was found on the point of minimum thickness, the waist level also tended to be higher.
Therefore, in this study, cross sections were first created at 2 mm intervals over the range from the axilla to the crotch to search for the waist landmarks. The point most recessed from the back of the body toward the inside of the body is defined as the back waist point. The point where the transverse plane at the height of the back waist point meets the mid-sagittal plane is defined as the front waist point. The bisecting point of the thickness between the front waist point and the back waist point is defined as the side waist point.
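These three definitions can be sketched as follows, assuming a hypothetical back profile of (height, depth) samples where a smaller depth value means a more recessed point, and a `front_depth_at(z)` sampler for the mid-sagittal front contour:

```python
def waist_landmarks(back_profile, front_depth_at):
    # back_profile: (z, y_back) samples between axilla and crotch at 2 mm
    # steps; the back waist point is the most recessed (minimum y_back) one.
    z, y_back = min(back_profile, key=lambda p: p[1])
    y_front = front_depth_at(z)       # front waist point: same height, mid-sagittal plane
    y_side = (y_front + y_back) / 2   # side waist point bisects the waist thickness
    return {"back": (z, y_back), "front": (z, y_front), "side_depth": y_side}
```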
To verify the accuracy of the algorithms for identifying the waist level on various body shapes, t-tests between the SizeKorea measurements and the algorithm measurements for the back waist height were performed for the body type groups divided by BMI and upper body slope (Table 9). The algorithm successfully identified the back waist point except in the backward upper body shape group, where the waist point extracted by the algorithm was located 1.0 cm above that of the SizeKorea data.
The reason for this is that as the upper body leans back, space was created between the subject’s bra and body during scanning, so the algorithm recognized the most recessed point on the back of the body as higher than the actual most recessed point as shown in Fig. 9. Therefore, the algorithm was modified to move the back waist point 1.1 cm down for the backward upper body group (Fig. 10).
Bust point
In order to find the most prominent point of the bust area, this study created a contour line and set the highest point as the bust point. When the direction of the contour line was set so that the direction parallel to the coronal plane of the upper body was 0°, the bust point was found to be inclined toward the center of the body. Therefore, the positions of the bust points were compared while increasing the angle of the contour line from that of the original coronal plane. It was found that the position of the bust points was most appropriate when the contour line was set by turning this plane 35° (Fig. 11). The bust point setting process is shown in Fig. 12.
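The prominence test at a rotated contour direction can be sketched as below. The axis convention (+y forward, +x toward the body's side) and the point list are assumptions for illustration:

```python
import math

def bust_point(points, angle_deg=35.0):
    # points: (x, y, z) chest-region vertices. Project each onto the
    # horizontal direction obtained by rotating the coronal-plane normal by
    # angle_deg, and take the most prominent (maximum-projection) point --
    # 35 degrees was found most appropriate by the study.
    a = math.radians(angle_deg)
    d = (math.sin(a), math.cos(a))
    return max(points, key=lambda p: p[0] * d[0] + p[1] * d[1])
```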
To verify the accuracy of the algorithms for identifying the bust point on various body shapes, t-tests between the SizeKorea measurements and the algorithm measurements for the bust point height were performed for the body type groups divided by BMI and upper body slope (Table 10). No significant differences were found in any of the body shape groups, indicating that the algorithm successfully identified the position of the bust point.
Scapular point and back protrusion point
Although the scapular and back protrusion points were not defined in SizeKorea, they were included in this study because they are related to the back darts when a bodice pattern is made. Therefore, this study newly defined the location of these two points based on the positions of the back shoulder darts of the bodice patterns presented in patternmaking books. The search process is as follows (Fig. 13):
1. Create a line connecting the midpoint between the side neck point and the shoulder point to the midpoint between the back waist point and the side waist point.
2. Set the point at the height of the back mid-axilla point as the end point of the scapula.
3. Set the most prominent point on this line as the back protrusion point.
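The three steps above can be sketched in coordinate form. This is a hedged illustration (x lateral, y posterior, z vertical); the helper names are hypothetical, and step 2 is implemented under the assumption, suggested by the step order, that the scapular end point lies on the reference line from step 1.

```python
def midpoint(a, b):
    return tuple((u + v) / 2 for u, v in zip(a, b))

def dart_reference_line(side_neck, shoulder, back_waist, side_waist):
    """Step 1: line from the midpoint of the side neck/shoulder points
    to the midpoint of the back waist/side waist points."""
    return midpoint(side_neck, shoulder), midpoint(back_waist, side_waist)

def scapular_point(line_start, line_end, axilla_height):
    """Step 2: point on the reference line at back mid-axilla height,
    found by linear interpolation along the line in z."""
    t = (axilla_height - line_start[2]) / (line_end[2] - line_start[2])
    return tuple(s + t * (e - s) for s, e in zip(line_start, line_end))

def back_protrusion_point(samples):
    """Step 3: most prominent (most posterior) body point sampled along
    the reference line."""
    return max(samples, key=lambda p: p[1])
```

In the actual algorithm these operations act on the scanned surface in Grasshopper; the sketch only captures the geometric logic.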
Algorithm integration
The algorithms for the automatic search of upper body landmarks and automatic measurement were developed and integrated as follows (Fig. 14). The entire process, from defining landmark positions to measurement extraction, was designed using the algorithm editor Grasshopper, and commands that execute each step of the process were generated in the Rhino interface. The landmarks were extracted on the scan image as shown in Fig. 15.
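The integration can be pictured as a sequence of stage functions run in order, each mirroring one Grasshopper command. This is a hypothetical organizational sketch: the stage names and the scan representation are illustrative, not the actual Rhino commands.

```python
def run_pipeline(scan, stages):
    """Run each landmark-search stage in order, accumulating landmarks."""
    landmarks = {}
    for stage in stages:
        landmarks.update(stage(scan, landmarks))
    return landmarks

# Stub stages standing in for the geometric searches described above;
# a real stage would operate on the scan surface, not a lookup dict.
def find_back_waist(scan, landmarks):
    return {"back_waist_height": scan["back_waist_height"]}

def find_bust_point(scan, landmarks):
    return {"bust_point": scan["bust_point"]}
```

Later stages can read earlier results from the accumulated landmark dictionary, which reflects how, for example, the scapular search depends on previously found neck, shoulder, and waist points.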
Conclusions
This study developed algorithms for automatic landmark extraction for women aged 20–59 with various upper body types and body inclinations. Fifteen landmarks were defined based on the morphological features of 3D body surfaces and clothing applications, and algorithms were developed. To verify the accuracy of the algorithms on various body shapes, this study determined the criteria of key body shape factors (BMI, neck slope, upper body slope, and shoulder slope) that influence each landmark position to classify body shape groups, and sorted scan samples for each body type from the 6th SizeKorea data. The statistical differences between the scan-derived measurements and the SizeKorea measurements were compared against the allowable tolerance of ISO 20685.
When the algorithms were tested on various body shapes and body inclinations, it was found that the algorithm successfully identified most of the landmark positions, but it should be revised for the back neck point of the thin group, the front neck point of the neck forward group, the shoulder point of the normal shoulder slope group, and the waist point of the backward upper body slope group. Therefore, for the landmarks of these shape groups, the parameters of the algorithm were modified based on the statistical differences to deal flexibly with differences in upper body shape and body inclination. As a result, the algorithms accurately identified landmarks on various upper body types.
This study found that the item with the most significant difference between the SizeKorea measurements and the algorithm-derived measurements was axilla height. The more obese the subject, the lower the armpit point detected by the algorithm, because in obese subjects the armpit area tends to be attached to the torso in the scan. Therefore, the algorithm was modified to move the axilla point up by 3.7 cm for the thin group, 5.2 cm for the normal group, and 6.8 cm for the obese group, although these values may vary depending on the type of 3D body scanner used.
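The BMI-group correction can be expressed as a simple lookup. This is a sketch using the offsets reported above; the function name and group labels are hypothetical, and the offsets are scanner-dependent tunable parameters rather than universal constants.

```python
def corrected_axilla_height(detected_height_cm, bmi_group):
    """Shift the detected axilla point upward to compensate for the armpit
    merging with the torso in heavier subjects' scans."""
    offsets_cm = {"thin": 3.7, "normal": 5.2, "obese": 6.8}  # from this study
    return detected_height_cm + offsets_cm[bmi_group]
```

Because the offset grows with BMI group, the correction reproduces the observed pattern that the raw detection error increases with obesity.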
As a tool to develop algorithms, the current study utilized the algorithm editor Grasshopper, which enables the user to interact with the 3D modeling interface directly. Before executing the algorithm, it is possible to edit the armpit or crotch areas of the body scan in the same interface. Also, the landmarks found by the algorithm can be directly used for parametric modeling, unfolding 3D body surfaces into 2D clothing patterns, and further enabling clothing design modification.
Another significance of this study is the high efficiency and strong adaptability of the algorithm. This study developed a method based on an analysis of the geometric characteristics of the landmarks on 3D body surfaces. The automatic search for landmarks was algorithmized by transforming the landmark definitions into logical mathematical definitions. The Grasshopper algorithm can be easily implemented by arranging pre-defined components (e.g., icons) that serve as commands and connecting them with wires, without writing lengthy program code. Input and output parameter values can be easily changed by dragging the mouse pointer. Therefore, if future studies use this method, it will be easy to modify the landmark definitions and adjust the automatic search settings to fit women of different age groups and body types in different countries.
In order to further improve the algorithm, it would be beneficial to enable BMI-based body type classification using the algorithm's measurement values. For this process, it will be necessary to examine the relationship between BMI and body measures (or index values), choose the body measurement (or index value) that has the strongest relationship with BMI, and define the criterion values that may be used to classify body types. Also, since the algorithm developed in this study was only based on the limited sample in the 6th SizeKorea data, if more samples in the latest data such as the 8th SizeKorea data (2021) are used for future studies, the accuracy and usability of the algorithm would be increased.
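The proposed selection step can be sketched as a correlation screen over candidate measurements. This is an illustrative sketch of the suggested future work, with hypothetical function names and toy data; it is not part of the study's algorithm.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def pick_bmi_proxy(bmi, candidates):
    """candidates: {measurement_name: list of values}. Returns the
    measurement most strongly correlated with BMI, to be used with
    cut-off values for body type classification."""
    return max(candidates, key=lambda k: abs(pearson(bmi, candidates[k])))
```

Once the best proxy measurement is chosen, criterion values on that measurement would stand in for BMI cut-offs, letting the algorithm classify body types from scan measurements alone.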
Availability of data and materials
Not applicable.
References
Allen, B., Curless, B., & Popović, Z. (2003). The space of human body shapes: Reconstruction and parameterization from range scans. ACM Transactions on Graphics, 22(3), 587–594. https://doi.org/10.1145/882262.882311
Anguelov, D., Srinivasan, P., Pang, H. C., Koller, D., Thrun, S., & Davis, J. (2004). The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. Proceedings of Advances in Neural Information Processing Systems, Canada, 17. https://proceedings.neurips.cc/paper/2004/hash/e02e27e04fdff967ba7d76fb24b8069d-Abstract.html
Au, C. K., & Yuen, M. M. F. (1999). Feature-based reverse engineering of mannequin for garment design. Computer-Aided Design, 31(12), 751–759. https://doi.org/10.1016/S0010-4485(99)00068-8
Azouz, Z. B., Shu, C., & Mantel, A. (2006). Automatic locating of anthropometric landmarks on 3D human models. Proceedings of the third International Symposium on 3D Data Processing, Visualization, and Transmission, USA, pp. 750–757. https://doi.org/10.1109/3DPVT.2006.34
Clauser, C., Tebbetts, I., Bradtmiller, B., McConville, J., & Gordon, C. C. (1988). Measurer's handbook: US Army anthropometric survey, 1987–1988 (Report No. NATICK/TR-88/043). Anthropology Research Project, Inc. https://apps.dtic.mil/sti/pdfs/ADA202721.pdf
Eltaweel, A., & Yuehong, S. U. (2017). Parametric design and daylighting: A literature review. Renewable and Sustainable Energy Reviews, 73, 1086–1103. https://doi.org/10.1016/j.rser.2017.02.011
Gill, S. (2015). A review of research and innovation in garment sizing, prototyping and fitting. Textile Progress, 47(1), 1–85. https://doi.org/10.1080/00405167.2015.1023512
Han, H., & Nam, Y. (2011). Automatic body landmark identification for various body figures. International Journal of Industrial Ergonomics, 41(6), 592–606. https://doi.org/10.1016/j.ergon.2011.07.002
Han, H., Nam, Y., & Shin, S. J. H. (2010). Algorithms of the automatic landmark identification for various torso shapes. International Journal of Clothing Science and Technology, 22(5), 343–357. https://doi.org/10.1108/09556221011071811
Hsu, M. C., Wang, C., Herrema, A. J., Schillinger, D., Ghoshal, A., & Bazilevs, Y. (2015). An interactive geometry modeling and parametric design platform for isogeometric analysis. Computers & Mathematics with Applications, 70(7), 1481–1500. https://doi.org/10.1016/j.camwa.2015.04.002
International Organization for Standardization. (2018). 3-D scanning methodologies for internationally compatible anthropometric databases - Part 1: Evaluation protocol for body dimensions extracted from 3-D body scans (ISO Standard No. 20685-1:2018). https://www.iso.org/standard/63260.html
Jo, J., Suh, M., Oh, T., Kim, H., Bae, H., Choi, S., & Han, S. (2014). Automatic human body segmentation based on feature extraction. International Journal of Clothing Science and Technology, 26(1), 4–24. https://doi.org/10.1108/IJCST-10-2012-0062
Kim, Y., Song, H. K., & Ashdown, S. P. (2016). Women’s petite and regular body measurements compared to current retail sizing conventions. International Journal of Clothing Science and Technology, 28(1), 47–64. https://doi.org/10.1108/IJCST-07-2014-0081
Korean Agency for Technology and Standards. (2012). The 6th SizeKorea 3D scan and measurement technology report (ICPSR) [Data set]. https://sizekorea.kr/human-info/meas-report?measDegree=6
Kouchi, M., & Mochimaru, M. (2011). Errors in landmarking and the evaluation of the accuracy of traditional and 3D anthropometry. Applied Ergonomics, 42(3), 518–527. https://doi.org/10.1016/j.apergo.2010.09.011
Kwon, Y. M., Lee, Y., & Kim, S. J. (2017). Case study on 3D printing education in fashion design coursework. Fashion and Textiles, 4, 26. https://doi.org/10.1186/s40691-017-0111-3
Leong, I. F., Fang, J. J., & Tsai, M. J. (2013). A feature-based anthropometry for garment industry. International Journal of Clothing Science and Technology, 25(1), 6–23. https://doi.org/10.1108/09556221311292183
Liu, Y. J., Zhang, D. L., & Yuen, M. M. F. (2010). A survey on CAD methods in 3D garment design. Computers in Industry, 61(6), 576–593. https://doi.org/10.1016/j.compind.2010.03.007
Lu, J., & Wang, M. (2008). Automated anthropometric data collection using 3D whole body scanners. Expert Systems with Applications, 35(1–2), 407–414. https://doi.org/10.1016/j.eswa.2007.07.008
Markiewicz, Ł., Witkowski, M., Sitnik, R., & Mielicka, E. (2017). 3D anthropometric algorithms for the estimation of measurements required for specialized garment design. Expert Systems with Applications, 85, 366–385. https://doi.org/10.1016/j.eswa.2017.04.052
McKinnon, L., & Istook, C. (2001). Comparative analysis of the image twin system and the 3T6 body scanner. Journal of Textile and Apparel, Technology and Management, 1(2), 1–7.
McNeel, R. (2019). Rhinoceros 3D® (Version 6.0) [Computer software]. https://www.rhino3d.com/download/
Niu, J. W., Zheng, X. H., Zhao, M., Fan, N., & Ding, S. T. (2011). Landmark automatic identification from three-dimensional (3D) data by using Hidden Markov Model (HMM). Proceedings of 2011 IEEE 18th International Conference on Industrial Engineering and Engineering Management, China, pp. 600–604. https://doi.org/10.1109/ICIEEM.2011.6035230
Shi, X., & Yang, W. (2013). Performance-driven architectural design and optimization technique from a perspective of architects. Automation in Construction, 32, 125–135. https://doi.org/10.1016/j.autcon.2013.01.015
Song, H. K., Baytar, F., Ashdown, S. P., & Kim, S. (2021). 3D anthropometric analysis of women’s aging bodies: Upper body shape and posture changes. Fashion Practice, 14(1), 26–48. https://doi.org/10.1080/17569370.2021.1879463
Suikerbuik, R., Tangelder, H., Daanen, H., & Oudenhuijzen, A. (2004). Automatic feature detection in 3D human body scans (Report No. 2004-01-2193). SAE International in United States. https://doi.org/10.4271/2004-01-2193
Tsoli, A., Loper, M., & Black, M. J. (2014). Model-based anthropometry: Predicting measurements from 3D human scans in multiple poses. Proceedings of IEEE Winter Conference on Applications of Computer Vision, USA, pp. 83–90. https://doi.org/10.1109/WACV.2014.6836115
Tyler, D., Mitchell, A., & Gill, S. (2012). Recent advances in garment manufacturing technology: Joining techniques, 3D body scanning and garment design. In R. Shishoo (Eds.). The global textile and clothing industry (pp. 131–170). Woodhead Publishing. https://doi.org/10.1533/9780857095626.131
Wang, M. J. J., Wu, W. Y., Lin, K. C., Yang, S. N., & Lu, J. M. (2007). Automated anthropometric data collection from three-dimensional digital human models. The International Journal of Advanced Manufacturing Technology, 32(1–2), 109–115. https://doi.org/10.1007/s00170-005-0307-3
Xia, S., Guo, S., Li, J., & Istook, C. (2018). Comparison of different body measurement techniques: 3D stationary scanner, 3D handheld scanner, and tape measurement. The Journal of the Textile Institute, 110(8), 1103–1113.
Funding
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (NRF-2019R1F1A1053385).
Author information
Authors and Affiliations
Contributions
HKS originated the research idea and EJR carried out the research. HKS and EJR wrote the manuscript and approved the final version.
Acknowledgements
Not applicable.
Authors’ information
EJR Invited Professor, Department of Fashion Industry, Ewha Womans University, Republic of Korea. HKS Associate Professor, Department of Clothing and Textiles, Kyung Hee University, Republic of Korea.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ryu, E.J., Song, H.K. Automatic extraction of upper body landmarks using Rhino and Grasshopper algorithms. Fash Text 9, 36 (2022). https://doi.org/10.1186/s40691-022-00302-y