An Investigation into Improving Insulin Resistance Through Voluntary Physical Activity to Prevent and Control Diabetes

The Impact of Voluntary Physical Activity on Insulin Resistance and Glucose Control in Diabetes Management

by Manish Mahajan*,

- Published in International Journal of Information Technology and Management, E-ISSN: 2249-4510

Volume 7, Issue No. 10, Nov 2014, Pages 0 - 0 (0)

Published by: Ignited Minds Journals


ABSTRACT

Diabetes is related to both insulin and glucose levels in the bloodstream. Normally, when an individual ingests carbohydrates or protein at rest, the pancreas releases insulin, an endocrine hormone made in and released by the pancreatic beta cells that stimulates the uptake of circulating glucose into muscle and fat tissues. Without adequate levels of insulin and insulin action, blood glucose levels can rise to abnormally high levels, contributing over time to the development of health complications. Thus, the primary goal of all diabetes management is effective control of blood glucose within normal or nearly normal levels. Physical inactivity and a sedentary lifestyle are risk factors for the development of type 2 diabetes (T2D). Here, we identified the effects that 8 weeks of voluntary physical activity had on the prevention of insulin resistance (a hallmark of T2D) in mouse skeletal muscle and liver. To do this, 8-week-old C57BL/6J mice with (RUN) and without (SED) voluntary access to running wheels were fed a standard rodent chow ad libitum for 8 weeks. In the liver, there was a 2.5-fold increase in insulin-stimulated Akt Ser473 phosphorylation, and a threefold increase in insulin-stimulated (0.5 U/kg) GSK3β Ser9 phosphorylation, in RUN compared to SED mice. Although no such induction was seen in skeletal muscle, there was a twofold increase in SOCS3 expression in the liver of SED compared to RUN mice. There was no difference in the glucose tolerance test between groups. This study was the first to show differences in liver insulin sensitivity after 8 weeks of voluntary physical activity, and increased SOCS3 expression in the liver of sedentary mice compared to active mice. These findings demonstrate that even in young mice that would normally be considered healthy, a lack of physical activity leads to insulin resistance, representing the initial pathogenesis of impaired glucose metabolism leading to type 2 diabetes.

KEYWORDS

insulin resistance, voluntary physical activity, diabetes management, glucose levels, muscle and fat tissues, sedentary lifestyle, type 2 diabetes, liver insulin sensitivity, SOCS3 expression, glucose metabolism

INTRODUCTION

In many areas of health care, particularly in emergency care, health professionals rely on the information that patients provide about their medical history. However, information acquired from patients who are unwell, confused, or having communication difficulties may be unreliable. It has therefore been suggested that a personal electronic health record device might empower patients to be aware of, and have more control over, their health status. A number of different versions of patient record systems exist, referred to by a variety of terms and acronyms, a common one being Electronic Health Record (EHR).

In this research, a new dual watermarking scheme is introduced for the security and privacy of patient information. The robustness of the proposed technique is checked by applying some common attacks to the images and examining the visual quality of the medical images using the PSNR and NC (normalized correlation coefficient) parameters. Simulation results show that the proposed scheme is robust against common attacks such as speckle noise, Gaussian noise, median filtering, salt & pepper noise, gamma correction, rotation, automatic equalisation, and JPEG compression. From the simulation results, it is observed that the proposed technique is particularly suitable for radiological images, as their robustness to the various attacks is better than that of the other input images.

In recent years, business applications have been moving into the digital era because of the great developments in technologies such as communication, networked multimedia systems, and digital data storage. Over the last two decades, the use of the Internet in business environments has grown rapidly in pursuit of effectiveness, convenience, and security through the digitization of work. It has been estimated that in 1993 the Internet carried only 1% of telecommunicated information; by 2000 this figure had grown to 51%, and by 2007 more than 97% of such information was carried across the globe. A study conducted by Jupiter Research found that 1.1 billion people have regular Web access and use applications such as electronic mail, instant messaging, social networking, and online messaging, which support growth and knowledge sharing in domains such as education, research and development, medicine, and business.

To speed up business communication, the use of digital media has increased drastically. This digital data includes text, images, audio, video, and software transferred over open public networks, so there is a need to protect it. Many techniques are available for protecting such digital data, including encryption (cryptography), authentication, and time stamping. Another method improves the protection of digital data by merging a low-level signal directly into the data itself. This low-level signal is known as a watermark; it uniquely identifies ownership and provides security for the digital data.
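For reference, the two image-quality measures used in the robustness evaluation above can be computed as follows. This is a minimal Python/NumPy sketch rather than the paper's own implementation; the function and array names are illustrative, and note that some watermarking papers normalise NC slightly differently.

    import numpy as np

    def psnr(original, processed, peak=255.0):
        # Peak signal-to-noise ratio (dB) between two equally sized images.
        diff = original.astype(np.float64) - processed.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0:
            return float('inf')  # images are identical
        return 10.0 * np.log10(peak ** 2 / mse)

    def nc(watermark, extracted):
        # Normalized correlation between embedded and extracted watermarks.
        w = watermark.astype(np.float64).ravel()
        e = extracted.astype(np.float64).ravel()
        return float(np.sum(w * e) / np.sqrt(np.sum(w * w) * np.sum(e * e)))

Robustness is then judged by attacking the watermarked image (noise, rotation, JPEG compression, and so on) and checking that the NC between the original and the extracted watermark remains close to 1 while the PSNR of the attacked image stays acceptable.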


Digital watermarking is the process of embedding unremarkable logos, labels, information, or patterns into digital data. The concept is associated with steganography, defined as covered writing, which hides an important message inside a cover medium; digital watermarking, by contrast, hides a secret or personal message in order to provide copyright protection and data integrity. Digital image watermarking is an approach well suited to medical, military, and archival applications. The embedded watermarks are difficult to remove, are typically imperceptible, and may take the form of text, image, audio, or video. However, embedding a secret watermark in digital data, no matter how invisible it may be, leads to some degradation of the resulting embedded data. To overcome this and to retrieve the original data, reversible watermarking has been implemented, which can be considered a better approach than cryptography: after encryption the resultant data may not be visible or understandable, and retrieval may lose semantic information from the host data, which is not the case with watermarking. Several watermarks can be embedded into the same digital data at once, which is known as multiple watermarking. A digital watermark can also be considered a digital signature that provides authenticity. A given watermark may be unique to each copy (e.g., to identify the intended recipient) or common to multiple copies (e.g., to identify the document source).

Digital watermarking consists of two main processes: embedding and extraction. During embedding, the watermark is embedded into the multimedia data; the original content is slightly modified, and the result is called the watermarked data. During extraction, the embedded watermark is extracted from the watermarked data and the original multimedia content is recovered. The extracted watermark is then compared with the original watermark; if they match, the data is authenticated. If an attacker tampers with the watermarked data during transmission over a public network, the modification can be detected by comparing the extracted watermark with the original.

Turning to medical image segmentation, several threshold-based methods perform region classification to extract the liver's soft tissue. Seo et al. employed a multimodal threshold method based on piecewise linear interpolation that used the spine location as a reference point. Forouzan et al. introduced a multilayer threshold technique that calculates the threshold value from a statistical analysis of the liver intensity. Both methods use local information about the liver's position relative to the spine and ribs; a minimal thresholding sketch is given below. Non-model-based methods for organ segmentation lead to inaccuracies due to variation in imaging conditions, the occurrence of tumours inside the organ, and noise. Dependence on prior information such as texture and image values can also cause segmentation inaccuracies, since these features may change from one patient to another. Moreover, most of these methods are parameter dependent, so for best performance the parameters often need to be adjusted from one CT volume to another. In recent years, model-based image segmentation algorithms have been developed for various medical applications; these methods aim to recover an organ based on statistical information.
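Before turning to model-based methods in more detail, here is a minimal sketch of the threshold-style tissue classification described above. It assumes, purely for illustration, that a small sample patch of liver tissue is available for estimating intensity statistics; the actual methods of Seo et al. and Forouzan et al. are considerably more elaborate.

    import numpy as np

    def liver_candidate_mask(ct_slice, liver_sample, k=2.0):
        # Estimate an intensity window (mean +/- k*std) from a sampled liver
        # patch, then mark every pixel falling inside the window as a
        # candidate liver pixel.
        mu = float(np.mean(liver_sample))
        sigma = float(np.std(liver_sample))
        return (ct_slice >= mu - k * sigma) & (ct_slice <= mu + k * sigma)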
State-of-the-art model-based segmentation algorithms are built on active shape and appearance models. Model-based techniques provide more accurate and robust algorithms for segmenting CT scan images, and they can handle missing image features via interpolation. Their performance depends on the number and type of training data; moreover, if the shape to be segmented lies too far from the model space, it may not be detected. Pan and Dawant reported a geometric level-set method for automatic segmentation of the liver in abdominal CT scans that does not rely on prior knowledge of shape and size; although it outperforms threshold-based techniques, it does not use prior knowledge of the liver shape. Lin et al. presented an algorithm for kidney segmentation based on adaptive region growing and elliptical kidney region positioning, using the spine as a landmark.
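The region-growing idea just mentioned can be sketched as follows; the seed point, tolerance, and 4-connectivity here are illustrative choices, not details taken from Lin et al.

    from collections import deque
    import numpy as np

    def region_grow(image, seed, tol=20.0):
        # Grow a region from `seed`, repeatedly adding 4-connected neighbours
        # whose intensity stays within `tol` of the running region mean.
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        total, count = float(image[seed]), 1
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    if abs(float(image[ny, nx]) - total / count) <= tol:
                        mask[ny, nx] = True
                        total += float(image[ny, nx])
                        count += 1
                        queue.append((ny, nx))
        return mask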


In a related approach, image registration and a multilayer segmentation technique are combined. This method is not affected by the diversity of existing liver shapes, as it does not rely on any shape model. Samuel et al. proposed the use of a ball algorithm for the segmentation of the lungs: in the first stage, grey-level thresholding is applied to the CT images to segment the thorax from the background and then the lungs from the thorax; in the next stage, to avoid losing juxtapleural nodules, the method applies the rolling-ball algorithm. Julian Ker presented a lung segmentation method named the TRACE method. Because of the possible presence of various disease processes, and because the anatomy changes with vertical position, the size, shape, and texture of lung CT images vary between patients; the boundary between the lung and the surrounding tissue can therefore vary from a smooth-edged, sharp-intensity transition to irregularly jagged edges with a less distinct intensity transition. The TRACE algorithm implements a non-approximating technique for edge detection. Shiying et al. introduced a fully automatic method for identifying the lungs in 3D pulmonary X-ray CT images. The method follows three main steps (a sketch of steps 1 and 3 is given after the list):

  • the lung region is extracted from the CT scan image by applying grey-level thresholding;
  • the anterior and posterior junctions are identified using dynamic programming, to separate the left and right lungs; and
  • a sequence of morphological operations is applied to smooth the irregular boundary along the mediastinum.
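Steps 1 and 3 can be sketched with NumPy and SciPy as below. The threshold value, the structuring element, and the component-selection heuristic are assumptions for illustration, and the dynamic-programming junction search of step 2 is omitted.

    import numpy as np
    from scipy import ndimage

    def lung_mask(ct_slice, air_threshold=-400):
        # Step 1: grey-level thresholding; lung parenchyma is mostly air,
        # so it has low attenuation values.
        mask = ct_slice < air_threshold
        # Discard air connected to the image border (outside-body background).
        labels, _ = ndimage.label(mask)
        border = np.unique(np.r_[labels[0], labels[-1], labels[:, 0], labels[:, -1]])
        mask &= ~np.isin(labels, border[border != 0])
        # Keep the two largest remaining components (left and right lungs).
        labels, n = ndimage.label(mask)
        if n > 2:
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            mask = np.isin(labels, np.argsort(sizes)[-2:] + 1)
        # Step 3: morphological closing to smooth boundary irregularities
        # along the mediastinum.
        return ndimage.binary_closing(mask, structure=np.ones((7, 7), dtype=bool))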

RESEARCH STUDY

Wavelet analysis is a windowing technique with variable-sized regions (Zhao et al. 2004). It uses long time intervals where precise low-frequency information is needed and shorter regions where high-frequency information is sought; it therefore works with time-scale regions rather than time-frequency regions. A wavelet is a waveform of limited duration with an average value of zero (Zhao et al. 2004; Xu et al. 2009). The natural comparison is with the sine waves that form the basis of Fourier analysis. Sinusoids have no limited duration and extend from minus to plus infinity, and whereas sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric. Fourier analysis breaks a signal into sine waves of various frequencies, while wavelet analysis breaks a signal into shifted and scaled versions of an original (or mother) wavelet. Looking at pictures of wavelets and sine waves suggests that signals with sharp changes might be better analysed with an irregular wavelet than with a smooth sinusoid, just as some food is handled better with a fork than with a spoon. A major advantage of wavelets is their capacity for local analysis, that is, analysing a localised area of a larger signal (Xu et al. 2009; Boix & Canto 2010). Consider a sinusoidal signal with a small, barely visible discontinuity of the kind generated in the real world by a power fluctuation or a noisy switch: the wavelet coefficients show the correct location of the discontinuity in time, as the following sketch illustrates.
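The discontinuity example is easy to reproduce. In this minimal PyWavelets sketch, the signal, wavelet choice, and jump size are illustrative:

    import numpy as np
    import pywt

    # A sinusoid with a tiny jump at sample 300: nearly invisible in the
    # waveform, but clearly localised by the wavelet detail coefficients.
    t = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(2 * np.pi * 5 * t)
    signal[300:] += 0.01  # small discontinuity, e.g. a power fluctuation

    cA, cD = pywt.dwt(signal, 'db4')  # one-level DWT: approximation + detail
    print(np.argmax(np.abs(cD)))      # about 150, i.e. sample 300 after downsampling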


Wavelet-domain compression is often the preferred choice because of the following advantages. (a) Wavelet-based compression provides multi-resolution hierarchical characteristics, so images can be compressed at varied resolution levels and processed sequentially from low to high resolution (Kofidis et al. 2009). (b) It is highly robust to common signal processing operations. (c) Real-world signals are time-limited as well as band-limited (or space-limited in the case of images). Time-limited signals are well represented by a basis of block functions (Dirac delta functions in the limit of infinitesimally small blocks), but sines and cosines are not time-limited; wavelets, being localised in both the time (space) and frequency (scale) domains, easily capture local signal features. (d) Multi-resolution support is a further advantage of a wavelet basis. The windowed Fourier transform localises the signal under analysis, but because a single window is used for all frequencies, the analysis resolution is the same at every frequency. Shorter windows, or shorter basis functions, are required to capture signal discontinuities and spikes, while longer basis functions are needed to analyse low-frequency signal components. With wavelet-based decomposition the window sizes vary, so the signal can be analysed at several resolution levels.

Compression in medical applications is required for quick interactivity when browsing through large image sets (volumetric data sets, image time sequences, image databases), for searching context-dependent detailed image structures, and for quantitative analysis of measured data. Information loss when storing or transmitting images is unacceptable in medical imaging (Hu et al. 2003): for medical image sources, discarding small image details that indicate pathology can alter the diagnosis, with human and legal consequences. Prioritising data transmission and regions of interest (ROIs), thereby supporting lossy coding where appropriate, is therefore important. Quick inspection of large image volumes transferred over low-bandwidth channels such as ISDN or satellite networks (teleradiology) requires compression with progressive transmission capability; a minimal coefficient-thresholding sketch follows.
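A minimal sketch of the wavelet-domain compression idea, using PyWavelets: decompose, discard the smallest detail coefficients, and reconstruct. The wavelet, decomposition level, and fraction of coefficients kept are illustrative assumptions; a real codec would also quantise and entropy-code the surviving coefficients, and ROI support amounts to thresholding less aggressively inside the region of interest.

    import numpy as np
    import pywt

    def wavelet_compress(img, wavelet='db2', level=3, keep=0.05):
        # Multi-level 2-D wavelet decomposition.
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
        # Choose a global threshold so that only the largest `keep` fraction
        # of the detail coefficients survive.
        details = np.concatenate([np.abs(d).ravel()
                                  for lev in coeffs[1:] for d in lev])
        thresh = np.quantile(details, 1.0 - keep)
        kept = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode='hard')
                                    for d in lev)
                              for lev in coeffs[1:]]
        return pywt.waverec2(kept, wavelet)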

SIGNIFICANCE OF THE STUDY

The wavelet transform has become one of the most important techniques for image denoising because of its high energy-compaction property. Wavelet-based tools and ideas remain very attractive for image processing problems because of their simplicity and efficiency. The applications of the discrete wavelet transform have been studied extensively by Xu et al. (2007), who offered a range of processing algorithms and realisation structures. An important step in wavelet thresholding is the selection of the threshold values: an improperly selected threshold not only affects the denoised image but also creates visually annoying artifacts (a denoising sketch using the universal threshold is given at the end of this section). Selection of suitable features is a significant step in successfully implementing specific applications. The features used in image classification include spectral information, vegetation indices, and transformed data. Myint (2001), Asner and Heidebrecht (2002), Neville et al. (2003), Platt and Goetz (2004), and Christina et al. (2009) discussed feature extraction techniques such as principal component analysis, minimum noise fraction transform, discriminant analysis, decision boundary feature extraction, non-parametric weighted feature extraction, the wavelet transform, the Gabor transform, spectral mixture analysis, and the grey-level co-occurrence matrix. These techniques reduce the data redundancy inherent in remotely sensed data or enhance the extraction of specific features from the information. The mapping of classes is much more accurate in supervised classification but is heavily dependent on the given input.
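Returning to threshold selection: one concrete rule is Donoho's universal (VisuShrink) threshold, sigma * sqrt(2 ln N), applied with soft thresholding. The PyWavelets sketch below estimates the noise level from the finest diagonal subband; the wavelet and decomposition level are illustrative choices.

    import numpy as np
    import pywt

    def denoise_visushrink(img, wavelet='db8', level=2):
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
        # Robust noise estimate from the finest diagonal (HH) subband.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        # Universal threshold: sigma * sqrt(2 * ln(number of pixels)).
        uthresh = sigma * np.sqrt(2.0 * np.log(img.size))
        denoised = [coeffs[0]] + [tuple(pywt.threshold(d, uthresh, mode='soft')
                                        for d in lev)
                                  for lev in coeffs[1:]]
        return pywt.waverec2(denoised, wavelet)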
