Main Article Content

Authors

Yogesh Kumar

Dr. Gaurav Khandelwal

Abstract

Deep learning technologies have attracted a great deal of interest in recent years by demonstrating remarkable capabilities in segmentation, classification, and a variety of other machine vision tasks. The majority of these deep networks are built on convolutional or fully convolutional architectures. We propose a dynamic and static gesture recognition system guided by high-dimensional hand-gesture data: from an image of a person's upper body (thoracic region), we estimate the person's depth and the extent of the space around his or her hands. In this research, we also present a novel object-based deep learning framework for the semantic segmentation of very high-resolution satellite data. Specifically, we exploit object-based prior knowledge by augmenting a fully convolutional network's training strategy with an anisotropic diffusion pre-processing phase and an additional loss term. The goal of this constrained framework is to ensure that visually similar data are assigned to the same semantic class. We employ intermediate steps based on traditional image-processing techniques to aid the subsequent classification, detection, or segmentation problem. Our results show that adding pre- and post-processing steps to a deep learning pipeline can boost model performance over using the network alone. Recent advances in uncertainty quantification (UQ) methods for deep learning are analysed, and their potential for use in relevance feedback is discussed; fundamental research challenges and directions related to UQ are also highlighted. Quantitatively, man-made classes with precise geometry, such as buildings, benefited the most from our strategy, particularly along object borders, indicating the considerable potential of the developed approach.
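The anisotropic diffusion pre-processing phase mentioned above can be realised with the classic Perona–Malik scheme, which smooths pixels within homogeneous regions while leaving strong edges intact, so that an object's pixels look more alike before they reach the network. A minimal NumPy sketch is given below; the parameter names `kappa`, `gamma`, and the iteration count are illustrative choices, not the settings used in the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving edges (gradients much larger than `kappa`)."""
    img = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # Edge-stopping function: small gradients diffuse, large ones do not.
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        img += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img
```

Applied to a noisy image containing a sharp object boundary, the flat regions are denoised while the boundary step survives, which is exactly the "same visual data, same semantic class" bias described above.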
The purpose of this paper is to offer an overview of the approaches used within deep learning frameworks to either appropriately prepare the input (pre-processing) or refine the network's output (post-processing), with an emphasis on digital pathology image analysis.
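Among the UQ methods surveyed, Monte Carlo dropout (Gal and Ghahramani, ref. 20) is the simplest to sketch: dropout is left active at inference time, and the spread of predictions over repeated stochastic forward passes approximates the model's epistemic uncertainty. The toy NumPy model below uses fixed random weights purely as a stand-in for a trained network; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-layer regressor with fixed random weights; these stand in for
# a trained model (in practice the weights come from training).
W1 = rng.normal(0.0, 1.0, (8, 16))
W2 = rng.normal(0.0, 1.0, (16, 1))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout left ON at inference time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, T=200):
    """T stochastic passes: the mean is the prediction, the standard
    deviation across passes approximates epistemic uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(0.0, 1.0, (1, 8))
pred, uncertainty = mc_dropout_predict(x)
```

In a segmentation setting the same recipe is applied per pixel, yielding an uncertainty map that can drive the relevance-feedback use cases discussed above.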


Article Details

Section

Articles

References

  1. A. Amini, A. Soleimany, S. Karaman, D. Rus, Spatial uncertainty sampling for end-to-end control, 2018, arXiv:1805.04829.
  2. G. Wang, W. Li, M. Aertsen, J. Deprest, S. Ourselin, T. Vercauteren, Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks, Neurocomputing 338 (2019) 34–45.
  3. Gleeson, B.; MacLean, K.; Haddadi, A.; Croft, E.; Alcazar, J. Gestures for industry intuitive human-robot communication from human observation. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 February 2013; pp. 349–356.
  4. H. Liu, R. Ji, J. Li, B. Zhang, Y. Gao, Y. Wu, F. Huang, Universal adversarial perturbation via prior driven uncertainty approximation, in: Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 2941–2949.
  5. H.P. Do, Y. Guo, A.J. Yoon, K.S. Nayak, Accuracy, uncertainty, and adaptability of automatic myocardial ASL segmentation using deep CNN, Magn. Reson. Med. 83 (5) (2020) 1863–1874.
  6. J. Tompson, R. Goroshin, A. Jain, Y. LeCun, C. Bregler, Efficient object localization using convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 648–656.
  7. Jung, P.G.; Lim, G.; Kim, S.; Kong, K. A wearable gesture recognition device for detecting muscular activities based on air-pressure sensors. IEEE Trans. Ind. Inform. 2015, 11, 485–494.
  8. K. Brach, B. Sick, O. Dürr, Single shot MC dropout approximation, 2020, arXiv preprint arXiv:2007.03293.
  9. L. Yu, S. Wang, X. Li, C.-W. Fu, P.-A. Heng, Uncertainty-aware self-assembling model for semi-supervised 3D left atrium segmentation, in: D. Shen, T. Liu, T.M. Peters, L.H. Staib, C. Essert, S. Zhou, P.-T. Yap, A. Khan (Eds.), Medical Image Computing and Computer Assisted Intervention, Springer International Publishing, Cham, 2019, pp. 605–613.
  10. Neverova, N.; Wolf, C.; Taylor, G.W.; Nebout, F. Multi-scale Deep Learning for Gesture Detection and Localization. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 474–490.
  11. O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, 2015, arXiv:1505.04597.
  12. P. McClure, N. Kriegeskorte, Representing inferential uncertainty in deep neural networks through sampling, in: International Conference on Learning Representations, ICLR 2017-Conference Track Proceedings, 2016.
  13. Papadomanolaki, M.; Vakalopoulou, M.; Karantzalos, K. Patch-based deep learning architectures for sparse annotated very high resolution datasets. In Proceedings of the 2017 Joint Urban Remote Sensing Event (JURSE), Dubai, UAE, 6–8 March 2017.
  14. Park, H.S.; Jung, D.J.; Kim, H.J. Vision-based Game Interface using Human Gesture. In Proceedings of the Pacific-Rim Symposium on Image and Video Technology, Hsinchu, Taiwan, 10–13 December 2006; pp. 662–671.
  15. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  16. T. Nair, D. Precup, D.L. Arnold, T. Arbel, Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation, Med. Image Anal. 59 (2020) 101557.
  17. Tölgyessy, M.; Dekan, M.; Duchoň, F.; Rodina, J.; Hubinsky, P.; Chovanec, L. Foundations of visual linear human–robot interaction via pointing gesture navigation. Int. J. Soc. Robot. 2017, 9, 509–523.
  18. Tölgyessy, M.; Hubinsky, P.; Chovanec, L.; Duchoň, F.; Babinec, A. Controlling a group of robots to perform a common task by gestures only. Int. J. Imaging Robot. 2017, 17, 1–13.
  19. Volpi, M.; Tuia, D. Dense Semantic Labeling of Subdecimeter Resolution Images with Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 881–893.
  20. Y. Gal, Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in: International Conference on Machine Learning, 2016, pp. 1050–1059.