Deep Learning-Based Uroflowmetry Curve Analysis Improves the Noninvasive Diagnosis of Lower Urinary Tract Symptoms

Article information

Int Neurourol J. 2025;29(Suppl 2):S73-S82
Publication date (electronic) : 2025 November 30
doi: https://doi.org/10.5213/inj.2550266.133
1Department of Urology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
2Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
3Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Korea
4Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
5Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gwangmyeong, Korea
Corresponding author: Deok-Hyun Han Department of Urology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul 06351, Korea Email: deokhyun.han@samsung.com
Co-corresponding author: Jung Hyun Kim Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, 81 Irwon-ro, Gangnam-gu, Seoul 06351, Korea Email: jhkim.junghyun@gmail.com
*Jong Hoon Lee and Yungon Lee contributed equally to this study as co-first authors.
Received 2025 September 15; Accepted 2025 November 6.

Abstract

Purpose

This study aimed to evaluate the performance of an artificial intelligence (AI)-based analysis of uroflowmetry (UFM) curve images, enhanced with customized preprocessing techniques, to improve diagnostic accuracy for bladder outlet obstruction (BOO) and detrusor underactivity (DUA).

Methods

We retrospectively analyzed 2,579 UFM curve images from patients who underwent urodynamic study (UDS), including 725 normal and 1,854 abnormal cases (736 BOO and 1,387 DUA). A VGG16 convolutional neural network model was developed to perform 3 binary classification tasks: normal versus abnormal, BOO versus non-BOO, and DUA versus non-DUA. To improve model performance, we implemented a preprocessing pipeline consisting of denoising, cropping, axis scaling, and color-coding of clinical parameters such as voided volume and postvoid residual volume (PVR). Model performance was evaluated using 5-fold stratified cross-validation and the area under the receiver operating characteristic curve (AUROC).

Results

Abnormal cases demonstrated a lower median maximum flow rate (8.9 mL/sec vs. 14.8 mL/sec), higher PVR (60.0 mL vs. 20.0 mL), and lower voiding efficiency (78.5% vs. 92.5%) than normal cases. Within the abnormal group, the BOO subgroup showed a higher PVR (80.0 mL) than the non-BOO subgroup (30.0 mL). After applying the preprocessing pipeline, model performance improved, with AUROC increasing from 0.807±0.024 to 0.827±0.016 for normal vs. abnormal classification, from 0.749±0.019 to 0.773±0.034 for BOO classification, and from 0.693±0.016 to 0.709±0.031 for DUA classification.

Conclusions

AI-based analysis of UFM curve images, enhanced through customized preprocessing, improved diagnostic accuracy in patients with lower urinary tract symptoms, effectively identifying BOO and DUA. This noninvasive method may serve as an adjunct or screening tool to reduce reliance on invasive UDS.

INTRODUCTION

Lower urinary tract symptoms (LUTS), which commonly include bladder outlet obstruction (BOO) and detrusor underactivity (DUA), are prevalent among older adults and substantially reduce quality of life [1]. Uroflowmetry (UFM) is a simple and noninvasive test widely used to assess voiding function; however, interpretation of flow curves often relies on subjective judgment and lacks reproducibility [2]. Urodynamic study (UDS) remains the gold standard for diagnosing BOO and DUA but is invasive, time-consuming, and uncomfortable for patients [3]. These limitations underscore the need for less invasive and more objective diagnostic approaches.

Deep learning-based medical image analysis has achieved remarkable progress, particularly with convolutional neural networks (CNNs), which have demonstrated expert-level accuracy in radiology, pathology, and other clinical imaging domains [4-6]. More recently, transformer-based architectures and hybrid models have emerged, achieving state-of-the-art performance in complex tasks such as multi-organ segmentation, volumetric reconstruction, and multimodal learning [7, 8]. However, such advanced architectures are not always required for simpler data types. UFM curves are relatively straightforward 2-dimensional representations of a 1-dimensional physiological signal. Unlike radiologic or pathologic images with complex textures and spatial variability, UFM images display a single flow curve with limited heterogeneity. Despite this simplicity, deep learning studies that directly analyze UFM curve images are scarce, and comparable work in related curve-based fields, such as spirometry and electrocardiography, remains limited [9-12]. Curve images pose unique challenges, including grid noise, inconsistent scaling between devices, and the absence of standardized formats.

To address these challenges, we developed a deep learning framework using a VGG16 CNN architecture integrated with customized preprocessing techniques. The pipeline included denoising, cropping, axis scaling, and a color-encoding strategy to embed clinically relevant features such as voided volume (VV) and postvoid residual volume (PVR). This design enabled the model to learn both curve morphology and clinical context. We evaluated this approach across 3 clinically meaningful classification tasks (normal versus abnormal, BOO versus non-BOO, and DUA versus non-DUA) using UFM data labeled by UDS. By moving beyond numerical parameter-based evaluation, this study demonstrates the feasibility of deep learning-based UFM curve analysis as a noninvasive, efficient, and objective diagnostic tool to support clinical decision-making in functional urology.

MATERIALS AND METHODS

Study Design and Setting

This retrospective observational study analyzed UFM curve images from patients who underwent UDS at Samsung Medical Center between December 2006 and December 2017. All UDS procedures were performed by experienced urodynamicists in accordance with the International Continence Society Good Urodynamic Practices protocol, using an Aquarius TT UDS system and DORADO-KT (Laborie Medical Technologies, Canada). BOO was diagnosed when the BOO index (BOOI = PdetQmax − 2Qmax, where PdetQmax is the detrusor pressure at maximum flow and Qmax is the maximum flow rate) exceeded 40, whereas DUA was defined as a bladder contractility index (BCI = PdetQmax + 5Qmax) of less than 100. Voiding efficiency (VE) was calculated as VV divided by the sum of VV and PVR, expressed as a percentage: VE (%) = VV/(VV + PVR) × 100.
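For concreteness, the index definitions and diagnostic cutoffs above can be written as a short sketch (a minimal illustration; the function names and the combined `classify` helper are ours, not code from the study):

```python
def boo_index(pdet_qmax, qmax):
    """BOOI = PdetQmax - 2*Qmax (PdetQmax: detrusor pressure at maximum flow)."""
    return pdet_qmax - 2 * qmax

def bladder_contractility_index(pdet_qmax, qmax):
    """BCI = PdetQmax + 5*Qmax."""
    return pdet_qmax + 5 * qmax

def voiding_efficiency(vv, pvr):
    """VE (%) = VV / (VV + PVR) * 100."""
    return vv / (vv + pvr) * 100

def classify(pdet_qmax, qmax):
    """Apply the study's urodynamic cutoffs: BOOI > 40 -> BOO, BCI < 100 -> DUA.
    A patient can meet both definitions (269 cases did in this cohort)."""
    labels = []
    if boo_index(pdet_qmax, qmax) > 40:
        labels.append("BOO")
    if bladder_contractility_index(pdet_qmax, qmax) < 100:
        labels.append("DUA")
    return labels or ["normal"]
```

For example, PdetQmax = 60 with Qmax = 5 mL/sec yields BOOI = 50 and BCI = 85, meeting both the BOO and DUA definitions.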

Study Population

The study included 2,579 patients who underwent UDS, comprising 725 normal and 1,854 abnormal cases. In the abnormal group, 736 had BOO, 1,387 had DUA, and 269 exhibited both conditions. Patients with medical conditions that could affect lower urinary tract function, such as bladder or prostate cancer, were excluded. Additional exclusion criteria included prior prostate, bladder, or urethral surgery; use of indwelling catheters or the need for regular catheterization; history of cerebrovascular accident, neurologic disorders, or spinal or pelvic trauma affecting LUTS; and a VV less than 150 mL during simple UFM.

Data Preprocessing Techniques

All 2,579 UFM curve images underwent 4 preprocessing steps: denoising, cropping, axis scaling, and color-coding.

Denoising techniques

The original UFM curve images contained grid scales that interfered with accurate curve detection. To resolve this, we applied a combination of nonlocal means, morphology, and contours techniques. The nonlocal means and morphology methods were first used to blur grid scales while enhancing the curve features. Subsequently, the contours technique detected 2 curve boundaries, and the regions above and below these boundaries were filled with black. The process concluded with color inversion, yielding clean UFM curves with complete grid removal (Fig. 1).

Fig. 1.

Denoising process: (A) original image, (B) contour detection, (C) regions above and below contours filled with black, (D) final image after color inversion.
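The boundary-fill and inversion steps can be sketched in NumPy. This is a simplified stand-in, assuming a dark curve on a light grid: the paper's actual pipeline used OpenCV nonlocal means, morphological operations, and contour detection, which we replace here with a per-column dark-pixel threshold for illustration.

```python
import numpy as np

def fill_outside_curve(gray, thresh=128):
    """Keep, per column, only the pixels between the top and bottom boundaries
    of the dark flow curve; fill everything above and below with black, then
    invert colors. `gray` is a 2-D uint8 array (dark curve on light grid)."""
    out = np.zeros_like(gray)            # start all-black
    mask = gray < thresh                 # crude stand-in for contour detection
    for col in range(gray.shape[1]):
        rows = np.where(mask[:, col])[0]
        if rows.size:
            top, bottom = rows.min(), rows.max()
            out[top:bottom + 1, col] = gray[top:bottom + 1, col]
    return 255 - out                     # final color inversion
```

After inversion the grid-free background is white and the curve region retains its (inverted) intensities, matching the end state shown in Fig. 1D.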

Cropping techniques

Each UFM curve image was cropped to include the region corresponding to Qmax on the y-axis and voiding time on the x-axis. This process involved identifying 3 key points: Qmax, voiding onset, and voiding end. To minimize artifacts caused by edge noise, the lower y-axis was trimmed 1 unit above the lowest curve coordinate, while the upper y-axis and both x-axis edges were trimmed by a 5-unit margin. Final cropping was then performed based on these identified key points (Fig. 2).

Fig. 2.

Cropping process, showing identification of 3 key points (maximum flow rate, voiding onset, and voiding end) followed by image extraction.
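The cropping step can be sketched as a bounding-box extraction on the denoised image (an assumption-laden simplification: the paper trims 1 unit above the lowest curve coordinate and a 5-unit margin on the other edges, which we collapse into a single `margin` parameter):

```python
import numpy as np

def crop_curve(img, margin=5):
    """Crop a denoised curve image (bright curve on black background) to the
    box spanned by Qmax (topmost bright row), voiding onset (first bright
    column), and voiding end (last bright column), after trimming `margin`
    pixels of potentially noisy border."""
    core = img[margin:-margin, margin:-margin]   # discard edge noise
    rows, cols = np.nonzero(core)
    if rows.size == 0:
        return core
    return core[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```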

Axis scaling techniques

To standardize the varying scales of UFM curve images from different devices, we developed an axis-scaling technique. The process began with identifying the start, end, and peak points of each curve. Using the voiding time and Qmax values derived from numerical features, the x- and y-axes of the cropped curve images were proportionally adjusted. The rescaled curve was then aligned within a standardized frame to ensure consistent representation of voiding time and Qmax dimensions (Fig. 3).

Fig. 3.

Axis scaling procedure, with proportional rescaling of the x- and y-axis lengths of the curve image based on VT on the x-axis and Qmax on the y-axis. VT, voiding time; Qmax, maximum flow rate.
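A minimal NumPy sketch of the axis-scaling idea, assuming illustrative scale constants (the seconds-per-pixel and mL/sec-per-pixel resolutions and the frame size below are our assumptions, not the study's values, and VT and Qmax are assumed to fit the frame):

```python
import numpy as np

def rescale_to_frame(img, voiding_time, qmax,
                     frame_hw=(120, 240), sec_per_px=0.5, mlps_per_px=0.25):
    """Proportionally rescale a cropped curve image so the x-axis spans
    `voiding_time` seconds and the y-axis spans `qmax` mL/sec at a fixed
    physical resolution, then place it in a standardized frame."""
    h = max(1, int(round(qmax / mlps_per_px)))
    w = max(1, int(round(voiding_time / sec_per_px)))
    # nearest-neighbour resize without external dependencies
    ri = np.arange(h) * img.shape[0] // h
    ci = np.arange(w) * img.shape[1] // w
    resized = img[ri][:, ci]
    frame = np.zeros(frame_hw, dtype=img.dtype)
    frame[:h, :w] = resized
    return frame
```

Because every image is mapped to the same physical resolution, curves from devices with different chart scales become directly comparable.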

Colored curve image techniques

To embed key numerical features into the UFM curve images, we implemented a color-coding technique for PVR and VV values. PVR was encoded in the upper portion of the image using blue RGB values, where smaller PVR values corresponded to brighter blue shades. Similarly, VV was encoded in the lower portion of the image using green colors, with larger VV values represented by brighter green intensities (Fig. 4).

Fig. 4.

Color-coding representation, with green indicating voided volume (higher values shown as brighter shades) and blue indicating postvoid residual volume (smaller values shown as brighter shades).
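The color-coding step can be sketched as channel-wise band encoding (the normalization ceilings `vv_max`/`pvr_max` and the band height are our assumptions for illustration):

```python
import numpy as np

def encode_clinical_bands(gray, vv, pvr, vv_max=600.0, pvr_max=400.0, band=10):
    """Embed VV and PVR into the curve image as colored bands: the top `band`
    rows get a blue intensity that is brighter for smaller PVR, and the bottom
    `band` rows get a green intensity that is brighter for larger VV."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.uint8)
    blue = int(round(255 * (1 - min(pvr, pvr_max) / pvr_max)))
    green = int(round(255 * min(vv, vv_max) / vv_max))
    rgb[:band, :, 2] = blue      # upper band: PVR in the blue channel
    rgb[-band:, :, 1] = green    # lower band: VV in the green channel
    return rgb
```

This lets a purely image-based CNN "see" the two numerical parameters alongside the curve morphology without a separate tabular input branch.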

Deep Learning Model and Statistical Analysis

We selected the VGG16 CNN architecture based on its proven reliability in medical image analysis with relatively small datasets [13]. VGG16 offers a balanced trade-off between computational efficiency, training stability, and interpretability, which makes it well suited to our dataset of limited size and low-texture images [14]. Because UFM curve images represent simple 1-dimensional physiological signals visualized as 2-dimensional plots, rather than complex anatomical structures, a deeper or transformer-based model was considered unnecessary at this stage. Therefore, VGG16 served as an appropriate baseline model for extracting curve morphology and benchmarking performance before exploring more advanced architectures.

We developed 3 binary classification models based on the VGG16 CNN to predict UDS outcomes from UFM curves: (1) normal versus abnormal, (2) BOO versus non-BOO, and (3) DUA versus non-DUA. Each model was trained and evaluated twice — first using the original UFM curve images and then using the preprocessed images (Fig. 5). To optimize performance with a relatively small dataset, we employed the VGG16 model pretrained on the ImageNet dataset. The original VGG16 architecture consists of 13 convolutional layers and 5 max-pooling layers, followed by 3 fully connected layers. In our implementation, the fully connected layers were replaced with a global average pooling layer, which reduces model complexity, mitigates overfitting, and provides structural regularization, eliminating the need for dropout layers. The output layer used a sigmoid activation function for binary classification. For each classification task, stratified 5-fold cross-validation was performed to ensure robust performance evaluation. The area under the receiver operating characteristic curve (AUROC) served as the primary performance metric for assessing diagnostic accuracy. All statistical analyses were conducted using IBM SPSS Statistics ver. 25.0 (IBM Co., USA). This study received approval from the Institutional Review Board at our institution.

Fig. 5.

Overview of the study design and workflow. UFM, uroflowmetry; CNN, convolutional neural network; BOO, bladder outlet obstruction; DUA, detrusor underactivity.
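The evaluation procedure can be sketched from scratch in NumPy (the VGG16 training itself, with its pretrained backbone, global average pooling, and sigmoid head, is omitted here; the stratified split and the AUROC metric below are our own minimal re-implementations, not the study's code):

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count 0.5)."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def stratified_folds(y, k=5, seed=0):
    """Index sets for stratified k-fold CV: each class is shuffled and dealt
    round-robin into k folds so class proportions are preserved per fold."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = rng.permutation(np.where(np.asarray(y) == cls)[0])
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    return [np.array(sorted(f)) for f in folds]
```

Each fold in turn serves as the validation set, and the per-fold AUROCs are summarized as mean ± standard deviation, as reported in Table 2.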

RESULTS

Baseline Characteristics

Among 2,579 patients, 725 (28.1%) were classified as normal and 1,854 (71.9%) as abnormal. Within the abnormal group, 736 patients (28.5%) had BOO, 1,387 (53.8%) had DUA, and 269 had both conditions. The median age of all patients was 67.0 (interquartile range, 60.0–72.0) years, with abnormal cases being slightly older than normal ones (67.0 years vs. 65.0 years). Significant differences in urodynamic parameters were observed between the normal and abnormal groups. The normal group exhibited a higher median Qmax (14.8 mL/sec vs. 8.9 mL/sec), shorter voiding time (40.2 seconds vs. 52.8 seconds), greater VV (243.7 mL vs. 197.2 mL), and higher voiding efficiency (VE; 92.5% vs. 78.5%) compared to the abnormal group. Patients with BOO demonstrated a higher BOOI (58.0 vs. 18.0) and lower VE (67.6% vs. 89.3%) than non-BOO patients. Similarly, patients with DUA showed a lower BCI (77.0 vs. 121.0) and decreased VE (80.4% vs. 88.1%) compared to non-DUA patients (Table 1).


Effect of Preprocessing

The performance of the deep learning model improved consistently across all classification tasks following the application of preprocessing techniques. For normal versus abnormal classification, the mean AUROC increased from 0.807±0.024 to 0.827±0.016. Likewise, the BOO classification improved from 0.749±0.019 to 0.773±0.034, and DUA classification increased from 0.693±0.016 to 0.709±0.031 (Table 2). The receiver operating characteristic and precision-recall curves demonstrated consistent performance across all 5 folds of cross-validation (Fig. 6). The preprocessed model (Fig. 6D–F) exhibited more stable and enhanced performance compared to the baseline model (Fig. 6A–C) across all classification tasks, with the most substantial improvement observed in the normal versus abnormal classification.


Fig. 6.

Receiver operating characteristic (ROC) and precision-recall (PR) curves for 5-fold stratified cross-validation. Binary classification before preprocessing: (A) normal vs. abnormal, (B) BOO vs. non-BOO, (C) DUA vs. non-DUA. Binary classification after preprocessing: (D) normal vs. abnormal, (E) BOO vs. non-BOO, (F) DUA vs. non-DUA. AUC, area under the ROC curve; BOO, bladder outlet obstruction; DUA, detrusor underactivity.

DISCUSSION

Recent advances in functional urology have increasingly highlighted the clinical utility of AI. In alignment with this trend, our institution has conducted multiple AI-based investigations on the diagnosis and prognosis of voiding dysfunction, producing meaningful results. The first study developed predictive models using CatBoost and XGBoost algorithms to differentiate BOO and DUA in male patients with LUTS. Using a large dataset comprising patient-reported outcomes, UFM parameters, and ultrasound-derived features, the XGBoost model demonstrated superior diagnostic performance, achieving an AUROC of 0.826 for BOO and 0.819 for DUA, outperforming conventional clinical indicators. These results underscored the potential of noninvasive and objective machine learning-based diagnostics for accurately classifying complex voiding pathophysiology [15]. The second study developed a deep neural network model to predict postoperative changes in Qmax and VE in 1,142 of 1,933 patients who underwent holmium laser enucleation of the prostate. This model achieved an AUROC of 0.884 (sensitivity 0.783, specificity 0.891) for predicting Qmax and an AUROC of 0.817 for VE prediction, both of which outperformed other machine learning algorithms. These findings demonstrated AI’s potential to quantitatively predict functional recovery following benign prostatic hyperplasia surgery, supporting the development of personalized treatment strategies in clinical practice [16]. Collectively, our institution has established a robust data infrastructure and expertise in applying AI to the diagnosis and prognostication of voiding dysfunction. Building on this foundation, we expanded our research toward noninvasive diagnostic models integrating AI and deep learning, as demonstrated in the present study, an emerging field also being explored by other research groups [17, 18].

Early AI applications in UFM primarily focused on numerical parameters such as Qmax, VV, and PVR. While these metrics are informative, they cannot fully capture the morphological characteristics of the flow curve [9, 10]. To overcome this limitation, we implemented a direct image-based analysis using UFM curve images, allowing the model to learn morphologic flow patterns beyond numerical features through a deep learning framework. This concept was inspired by established methodologies in other physiologic signal domains, such as electrocardiography and spirometry, and adapted here to the context of UFM analysis [11, 12].

In this study, we demonstrated that deep learning-based analysis of UFM curve images enhances the noninvasive diagnosis of LUTS, particularly when supported by tailored preprocessing techniques. Our VGG16 CNN-based approach achieved consistent performance improvements across 3 classification tasks (normal versus abnormal, BOO versus non-BOO, and DUA versus non-DUA) following preprocessing. While UDS remains the gold standard, AI-based image analysis can serve as a clinically meaningful noninvasive complement, potentially reducing the need for invasive procedures. This approach offers the potential to streamline diagnostic workflows, improve patient comfort, and support more efficient clinical decision-making. Our method incorporates 2 key innovations. First, the preprocessing steps (denoising, cropping, and axis scaling) standardized UFM curve images across patients and devices, enabling more reliable image-based learning. Second, we implemented a color-coding strategy that embeds clinically relevant parameters such as VV and PVR directly into the images. This dual strategy allows the model to recognize both standardized curve morphology and contextual clinical information, thereby enhancing diagnostic accuracy.

This study has several notable strengths. The preprocessing pipeline was specifically tailored to address inherent challenges in curve images, including grid noise, inconsistent scaling, and the integration of critical numerical parameters. Moreover, stratified 5-fold cross-validation minimized bias and improved the robustness of our results. However, several limitations must be acknowledged. The retrospective, single-center design may limit generalizability, and external validation across diverse patient populations and UFM systems is necessary to confirm broad applicability. Furthermore, while our model improved diagnostic accuracy, we did not directly evaluate clinical outcomes or its impact on clinical decision-making. Future research should include prospective studies, external validation, and integration of AI-based UFM analysis into real-world clinical workflows.

In conclusion, this study demonstrated that deep learning-based analysis of UFM curve images can serve as a noninvasive and objective diagnostic tool for lower urinary tract dysfunction. Incorporating preprocessing techniques (denoising, cropping, axis scaling, and color-coding) consistently enhanced the model’s predictive accuracy across all classification tasks: normal versus abnormal, BOO versus non-BOO, and DUA versus non-DUA. These preprocessing steps standardized heterogeneous UFM images and embedded clinically relevant parameters, allowing the model to more effectively capture both morphological and contextual information. By improving diagnostic performance without the need for invasive testing, the proposed preprocessing-integrated deep learning framework offers a practical and patient-friendly complement to conventional urodynamic studies. Future research should focus on external validation and the clinical implementation of this noninvasive AI-based diagnostic approach within functional urology.

Notes

Grant/Fund Support

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (No. 2021R1F1A1064112). It was also supported by a National IT Industry Promotion Agency of Korea grant funded by the Korea government (No. RQT-25-030603) for the adoption of a benign prostatic hyperplasia diagnostic decision-support software solution, and by the Establishment of K-Health National Medical Care Service and Industrial Ecosystem project funded by the Ministry of Science and ICT (MSIT, Korea) Balanced National Development Account (project number: ITAH0603230110010001000100100).

Research Ethics

This study was approved by the Institutional Review Board of Samsung Medical Center (approval number: SMC 2021-08-073). The requirement for informed consent was waived due to the retrospective design and the use of anonymized data. All procedures performed in this study involving human participants were conducted in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

AUTHOR CONTRIBUTION STATEMENT

· Conceptualization: DHH, JHK

· Data curation: JHL, YL

· Formal analysis: JHK

· Funding acquisition: DHH

· Methodology: DHH, JHK, JHL, YL

· Project administration: DHH, MJC

· Visualization: JHK, YL

· Writing - original draft: JHL, YL

· Writing - review & editing: DHH, JHK, CUL, KJK

References

1. Coyne KS, Sexton CC, Thompson CL, Milsom I, Irwin D, Kopp ZS, et al. The prevalence of lower urinary tract symptoms (LUTS) in the USA, the UK and Sweden: results from the Epidemiology of LUTS (EpiLUTS) study. BJU Int 2009;104:352–60.
2. Gacci M, Del Popolo G, Artibani W, Tubaro A, Palli D, Vittori G, et al. Visual assessment of uroflowmetry curves: description and interpretation by urodynamists. World J Urol 2007;25:333–7.
3. Nager CW, Brubaker L, Litman HJ, Zyczynski HM, Varner RE, Amundsen C, et al. A randomized trial of urodynamic testing before stress-incontinence surgery. N Engl J Med 2012;366:1987–97.
4. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88.
5. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2019;29:102–27.
6. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–8.
7. Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, et al. Transformers in medical imaging: a survey. Med Image Anal 2023;88:102802.
8. Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, et al. Advances in medical image analysis with vision Transformers: a comprehensive review. Med Image Anal 2024;91:103000.
9. Bang S, Tukhtaev S, Ko KJ, Han DH, Baek M, Jeon HG, et al. Feasibility of a deep learning-based diagnostic platform to evaluate lower urinary tract disorders in men using simple uroflowmetry. Investig Clin Urol 2022;63:301–8.
10. Choo MS, Ryu HY, Lee S. Development of an automatic interpretation algorithm for uroflowmetry results: application of artificial intelligence. Int Neurourol J 2022;26:69–77.
11. Wang Y, Li Q, Chen W, Jian W, Liang J, Gao Y, et al. Deep learning-based analytic models based on flow-volume curves for identifying ventilatory patterns. Front Physiol 2022;13:824000.
12. Ribeiro AH, Ribeiro MH, Paixão GM, Oliveira DM, Gomes PR, Canazart JA, et al. Automatic diagnosis of the 12-lead ECG using a deep neural network. Nat Commun 2020;11:1760.
13. Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022;22:69.
14. Sistaninejhad B, Rasi H, Nayeri P. A review paper about deep learning for medical image analysis. Comput Math Methods Med 2023;2023:7091301.
15. Shin H, Ko KJ, Park WJ, Han DH, Yeom I, Lee KS. Machine learning models for the noninvasive diagnosis of bladder outlet obstruction and detrusor underactivity in men with lower urinary tract symptoms. Int Neurourol J 2024;28(Suppl 2):S74–81.
16. Lee JH, Kim JH, Chung MJ, Lee KS, Ko KJ. Development of a deep learning-based predictive model for improvement after holmium laser enucleation of the prostate according to detrusor contractility. Int Neurourol J 2024;28(Suppl 2):S82–9.
17. Huang HH, Cheng PY, Tsai CY. Exploring artificial intelligence in functional urology: a comprehensive review. Urol Sci 2025;36:2–10.
18. Speich JE, Klausner AP. Artificial intelligence in urodynamics (AIUDS): the next “big thing”. Continence 2025;13:101754.


Table 1.

Baseline characteristics and urodynamic parameters of study participants

Variable Total Normal Abnormal BOO No BOO DUA No DUA
No. of patients 2,579 725 (28.1) 1,854 (71.9) 736 (28.5) 1,843 (71.5) 1,387 (53.8) 1,192 (46.2)
Age (yr) 67.0 (60.0–72.0) 65.0 (57.0–70.0) 67.0 (61.0–73.0) 68.0 (62.0–74.0) 66.0 (59.0–72.0) 68.0 (61.0–73.0) 66.0 (59.0–71.0)
Qmax (mL/sec) 10.5 (7.2–14.4) 14.8 (11.8–18.3) 8.9 (6.3–12.0) 7.5 (5.3–10.4) 11.7 (8.6–15.5) 9.3 (6.5–12.3) 12.2 (8.5–16.4)
Average flow (mL/sec) 4.9 (3.3–7.0) 7.1 (5.4–9.1) 4.2 (2.8–5.8) 3.6 (2.4–5.0) 5.6 (3.8–7.5) 4.3 (2.9–6.1) 5.7 (3.9–7.9)
Voiding time (sec) 49.0 (35.8–68.4) 40.2 (30.0–53.5) 52.8 (39.0–74.4) 53.6 (40.0–73.9) 47.6 (34.1–65.8) 54.4 (39.2–76.0) 44.2 (32.6–59.0)
Flow time (sec) 41.3 (30.2–54.8) 34.8 (26.4–44.0) 44.6 (32.6–58.8) 45.5 (34.1–59.4) 39.6 (29.0–52.8) 45.0 (32.4–59.8) 38.1 (28.8–49.1)
Time to peak flow (sec) 13.4 (8.8–21.4) 11.3 (8.0–16.2) 14.7 (9.4–23.7) 13.2 (8.8–21.5) 13.6 (9.0–21.4) 15.5 (9.8–25.0) 11.8 (8.0–17.0)
Voided volume (mL) 210 (154.3–274.4) 243.7 (191.2–307.2) 197.2 (139.2–260.9) 176.8 (125.5–228.7) 226.6 (167.4–288.0) 204.4 (143.7–268.3) 217.1 (164.4–278.6)
Residual volume (mL) 40.0 (15.0–110.0) 20.0 (8.5–41.0) 60.0 (20.0–150.0) 80.0 (30.0–165.7) 30.0 (10.0–80.0) 50.0 (17.0–150.0) 30.0 (10.0–80.0)
Voiding efficiency (%) 85.1 (61.6–94.7) 92.5 (84.0–97.3) 78.5 (50.0–92.8) 67.6 (43.6–86.7) 89.3 (72.1–96.3) 80.4 (51.6–93.9) 88.1 (70.7–95.5)
BOOI 25.2 (12.0–43.0) 15.4 (3.0–25.2) 31.0 (16.0–52.0) 58.0 (47.0–76.4) 18.0 (7.0–28.0) 24.0 (12.0–36.0) 29.0 (11.0–58.0)
BCI 97.0 (76.0–118.5) 118.0 (108.0–134.0) 85.0 (68.0–100.0) 109.7 (91.2–132.0) 91.0 (70.0–112.0) 77.0 (64.0–89.0) 121.0 (109.0–137.0)

Values are presented as number (%) or median (interquartile range).

BOO, bladder outlet obstruction; DUA, detrusor underactivity; Qmax, maximum flow rate; BOOI, bladder outlet obstruction index; BCI, bladder contractility index.

Table 2.

Mean AUROC of binary classification before and after preprocessing: normal and abnormal, BOO and non-BOO, DUA and non-DUA

Variable Abnormal (n = 1,854, 71.9%) BOO (n = 736, 28.5%) DUA (n = 1,387, 53.8%)
Before preprocessing 0.807 ± 0.024 0.749 ± 0.019 0.693 ± 0.016
After preprocessing 0.827 ± 0.016 0.773 ± 0.034 0.709 ± 0.031

Values are presented as mean±standard deviation.

AUROC, area under the receiver operating characteristic curve; BOO, bladder outlet obstruction; DUA, detrusor underactivity.