Enzyme dosage detection to degrade feathers in edible bird’s nests: A comparative convolutional neural networks study

Verianti Liana, Rizal Arifiandika, Bagas Rohmatulloh, Riris Waladatun Nafi’ah, Arif Hidayat, Yusuf Hendrawan, Dimas Firmanda Al-Riza, Tunjung Mahatmanto, Hermawan Nugroho

Abstract


Edible Bird’s Nest (EBN), a costly food product made from swiftlet saliva, has long faced the problem of removing swiftlet feathers from the nests. The destructive and inefficient manual plucking process can be replaced with a serine protease enzyme treatment. Accurate detection of the enzyme dosage is crucial to ensure efficient feather degradation while keeping enzyme usage cost-effective. This research applied transfer learning with pretrained Convolutional Neural Networks (Pt-CNN) to detect enzyme dosage from EBN images. The study compared the image classification mechanisms, architectures, and performance of three Pt-CNN models: Resnet50, InceptionResnetV2, and EfficientNetV2S. InceptionResnetV2 combines parallel convolutions with residual connections, which contributes significantly to learning rich, informative features. Consequently, the InceptionResnetV2 model achieved the highest accuracy of 96.18%, while Resnet50 and EfficientNetV2S attained only 30.44% and 17.82%, respectively. Differences in architectural complexity, parameter count, dataset characteristics, and image resolution also contribute to the performance disparities among the models. These findings help future researchers streamline model selection when working with limited datasets by explaining why each model performs as it does, and they contribute to a quick, non-destructive solution for the EBN cleaning process.
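
As an illustration of the transfer-learning setup described above, the sketch below builds an InceptionResnetV2-based classifier in TensorFlow/Keras. It is a minimal sketch, not the authors' implementation: the dataset directory, image size, number of dosage classes, and training hyperparameters are placeholder assumptions.

# Minimal transfer-learning sketch (assumed TensorFlow/Keras setup); the paths,
# image size, class count, and hyperparameters are illustrative placeholders.
import tensorflow as tf

IMG_SIZE = (299, 299)          # default input resolution of InceptionResNetV2
NUM_CLASSES = 4                # assumed number of enzyme-dosage classes

# Load EBN images from a folder-per-dosage-class directory (hypothetical path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ebn_images/train", image_size=IMG_SIZE, batch_size=16,
    label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ebn_images/val", image_size=IMG_SIZE, batch_size=16,
    label_mode="categorical")

# Pretrained ImageNet backbone with its original classification head removed.
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False         # freeze the backbone for feature extraction

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)

The same classification head can be attached to a Resnet50 or EfficientNetV2S backbone (with that backbone's matching preprocess_input) to reproduce the kind of comparison reported in the abstract.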


Keywords


Convolutional neural networks; Edible bird’s nest; Serine protease enzyme; Swiftlet feathers; Transfer learning



DOI: https://doi.org/10.21776/ub.afssaae.2023.006.04.6



Copyright (c) 2023 Verianti Liana, Arif Hidayat, Yusuf Hendrawan

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.