Table of Contents
Abstract
Introduction
Literature Survey
A. Classification Based on Types
B. CNN-Based Skin Lesion Classification
Problem Statement
Methodology
A. Dataset Used
B. CNN Used: AlexNet
C. Evaluation Metrics
Results
Conclusion
References

Abstract
This paper addresses the need for an intelligent and rapid skin cancer classification system built on an efficient, contemporary deep convolutional neural network (CNN). CNNs apply convolutional layers to input images and learn to perform classification tasks directly from the image data. To take advantage of a tested and reliable architecture, the AlexNet CNN was adopted. The developed program is operated through controls on its graphical user interface and offers two modes of operation: the first trains the network on a dataset of the user's choice, while the second uses the already trained network to classify an image via transfer learning.

Introduction
Advances in digital technology, image processing, machine learning and, more recently, deep learning have expanded the use of images for medical diagnosis. This trend continues to grow thanks to the ever-increasing number of machine learning methods and the power of modern computing hardware. Several deep learning models have been developed and applied in medical diagnosis because of their ability to recognize patterns in digital images (Cicero et al., 2016). Of the many deep learning techniques available, convolutional neural networks (CNNs) are currently the most effective. They have led to great advances in many medical image analysis tasks, such as disease detection and classification, and one of their main applications in medicine is the detection and classification of malignant and benign skin lesions from dermatoscopic images. Other deep learning techniques include deep neural networks, deep belief networks, recurrent neural networks and deep Boltzmann machines. Abdel-Zaher and Eldeib (2016) stated that an accurate computer-aided diagnosis (CAD) system enables early diagnosis of a disease and thus earlier and more effective treatment, which can save lives. For example, the ability to treat cancer effectively depends largely on the ability to detect tumors in their early stages. According to Choi et al. (2010), cancer has been classified and differentiated from related diseases, and its diagnosis and treatment are of great interest because of its high rate of occurrence. Cancer is a common cause of death: worldwide, there have been 14 million new cancer cases and 8.2 million cancer-related deaths (National Cancer Institute, 2017). CAD is therefore well suited to working from images of a skin lesion alone, without additional information, because skin cancer is one of the most common types of cancer and usually develops on skin that has been exposed to sunlight.

Literature Survey
A. Classification Based on Types
This study focuses on three main classes of skin lesion. The first is the nevus class, a group consisting of various types such as dysplastic nevus, melanocytic nevus and epidermal nevus. The second is the seborrheic keratosis class, which is mostly benign, and the third is the malignant and highly dangerous melanoma.

1. Naevus
The Latin name naevus (plural naevi) means birthmark.
This type of lesion is usually benign but can look similar to melanoma. People who develop nevi are prone to developing melanoma in a mole or elsewhere on the body, and the greater the number of these acquired moles, the greater the risk: people who have ten or more are about twelve times more likely to develop melanoma than the general population.

2. Seborrheic keratosis
These are generally harmless skin growths that often appear as the skin ages. Some people have only one, but it is not uncommon to develop several. They pose no risk to the patient. They are often brown and bumpy, can appear anywhere on the body, and may look stuck on, as if they had been painted onto the skin. Some people mistake them for unusual-looking scabs. Occasionally a growth causes pain and itching and later becomes inflamed.

3. Melanoma
Melanoma is the least frequent but the deadliest of the three lesion classes considered here. It spreads rapidly and can cause death. It is often difficult to distinguish between a nevus and early melanoma; certain characteristics of a mole can indicate whether it is benign or at moderate or high risk of becoming melanoma. The traditional way to examine such lesions is with a handheld dermoscope, a magnifying device that reveals internal skin structures and colors that are not visible to the naked eye.

B. CNN-Based Skin Lesion Classification
There are two ways in which a CNN can be used to classify skin cancer. The first is to use a CNN such as AlexNet or GoogLeNet that has been pre-trained on a dataset such as ImageNet containing millions of images (Hinton et al., 2012). The pre-trained CNN is applied to a new dataset of skin cancer images and acts purely as a feature extractor; classification is then performed by a separate classifier such as a support vector machine or an artificial neural network. The second way, the one used in this study, is to have the CNN process the images in the dataset and learn the distinguishing characteristics of its categories itself. The CNN learns the relationship between raw pixel data and class labels directly through end-to-end learning, so feature learning is integrated into the workflow rather than separated from classification. To develop a fully functional CNN-based classification program, the process is divided into two stages: first the CNN, in our case AlexNet, is trained and the trained network is saved for future use; second, the saved CNN is used to classify images by means of transfer learning. The training accuracy of a CNN is only as good as its dataset. To train deep CNN models successfully, every image in the dataset must be correctly labeled with one of the available categories; if this is done incorrectly, the network can overfit and generalize poorly to unfamiliar input images.
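To illustrate the second, end-to-end approach, the sketch below shows how the final layers of AlexNet can be replaced so that the network predicts the three lesion classes used in this study. This is a minimal MATLAB sketch under assumptions, not the authors' actual code: the folder name dataset and the one-subfolder-per-class layout are hypothetical.

```matlab
% Load the dermatoscopic images, taking class labels from the folder names
% (assumed layout: dataset/melanoma, dataset/nevus, dataset/seborrheic_keratosis).
imds = imageDatastore('dataset', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Split 90% of the images for training and 10% for validation, as in the paper.
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.9, 'randomized');

% Load the pre-trained AlexNet, drop its final fully connected, softmax and
% classification layers, and append new ones for 3 classes instead of 1000.
net = alexnet;
numClasses = numel(categories(imdsTrain.Labels));   % 3 lesion classes
layersTransfer = net.Layers(1:end-3);
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

% Resize the variously sized RGB images to AlexNet's 227 x 227 input size.
augTrain = augmentedImageDatastore([227 227], imdsTrain);
augVal   = augmentedImageDatastore([227 227], imdsVal);
```

The training options reported later, alongside the results, can then be passed to trainNetwork to retrain these layers on the skin lesion dataset.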
Problem Statement
Currently, dermoscopy is the tool used to detect skin lesions, in particular melanoma, because melanoma diagnosed at an early stage has a good chance of being cured. However, diagnosis from dermatoscopic images is difficult, as it requires extensively trained specialists. For this reason, deep learning models such as CNNs are now used for automatic melanoma detection from dermatoscopic images. Existing approaches have used different convolutional architectures and various ways of assessing their prediction accuracy. The challenge studied in this work is therefore to design a convolutional architecture that can extract useful features from biomedical images for high-accuracy melanoma classification. This paper proposes a model based on the following methodology.

Methodology
A. Dataset Used
The data used in this study is a combination of datasets from the International Skin Imaging Collaboration archive (ISIC, 2018), the MED-NODE database (Giotis et al., 2015) and the DERMOFIT image library. The images are publicly available dermatoscopic images stored as RGB images of various sizes in JPG format. There are 1609 images in total, divided into two parts of 90% and 10%: the first part was used to train the CNN and the second for validation.

B. CNN Used: AlexNet
AlexNet is a convolutional neural network trained on more than a million images from the ImageNet database. The network is 8 layers deep and can classify images into 1000 object categories, such as keyboard, mouse, pencil and many animals; as a result, it has learned rich feature representations for a wide range of images. Its image input size is 227 x 227 pixels. In MATLAB, the call net = alexnet returns the pre-trained AlexNet network.

C. Evaluation Metrics
To quantify the reliability of the trained convolutional network, two quantities are calculated: sensitivity and classification accuracy. They are computed from the validation results, namely the total number of cases, the number of correctly detected cases, the true positive cases and the false negative cases:

Sensitivity = true positive cases / (true positive cases + false negative cases)
Accuracy = correctly detected cases / all cases

Results
The dataset used for this study contains 545 melanoma images, 651 nevus images and 413 seborrheic keratosis images. The average training time is 40 minutes and 18 seconds, and all training sessions met the validation criterion of completing training before the last epoch. A maximum of 6 epochs was used. The training configuration is as follows: the mini-batch size is 10, the initial learning rate is 1e-6, the input data is shuffled after every epoch, and validation of the learning progress is performed every 10 iterations. Verbose output is disabled, so no logging information is printed.
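This training configuration maps directly onto MATLAB's trainingOptions. The sketch below is an illustrative reconstruction rather than the authors' code: the choice of the SGDM solver is an assumption, the class label 'melanoma' assumes the folder naming used earlier, and the variables augTrain, augVal, layers and imdsVal are carried over from the previous sketch.

```matlab
% Training options matching the reported configuration: mini-batch size 10,
% initial learning rate 1e-6, at most 6 epochs, shuffling after every epoch,
% validation every 10 iterations, and no verbose logging.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 10, ...
    'InitialLearnRate', 1e-6, ...
    'MaxEpochs', 6, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', augVal, ...
    'ValidationFrequency', 10, ...
    'Verbose', false, ...
    'Plots', 'training-progress');

% Train the modified AlexNet and save it for the classification mode.
trainedNet = trainNetwork(augTrain, layers, options);
save('trainedSkinNet.mat', 'trainedNet');

% Evaluate on the held-out 10%: overall accuracy and melanoma sensitivity,
% following the definitions given in the Evaluation Metrics section.
predicted = classify(trainedNet, augVal);
actual    = imdsVal.Labels;
accuracy  = mean(predicted == actual);

tp = sum(predicted == 'melanoma' & actual == 'melanoma');   % true positives
fn = sum(predicted ~= 'melanoma' & actual == 'melanoma');   % false negatives
sensitivity = tp / (tp + fn);
```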
Once training is complete, the network is saved so that it can be used for image classification through transfer learning. When training of the convolutional network begins, the training accuracy is just above 20% and the loss is around 1.5. As training progresses and the number of iterations increases, accuracy rises rapidly and then more slowly towards the end of epoch 1. The loss is inversely related to training accuracy: as accuracy increases, the loss decreases. As the number of epochs increases, progress slows and the improvements become gradual, eventually approaching a constant training accuracy. In the training progress plot shown in the figure, this constant training accuracy is reached during epoch 6, at around the 800th iteration. An epoch is one complete pass of the input data through the convolutional network, and more epochs are generally needed for the deep learning algorithm to achieve high classification accuracy. An epoch in our experiment consists of 152 iterations, giving a maximum of 912 iterations in total; an iteration is one update of the network parameters. After the network is trained, it is tested with the validation images, some of which are displayed in a results window. From the tests conducted, the maximum accuracy of the network is 74% and the minimum is 68%, while the sensitivity analysis reached 70%. This suggests that the program has considerable potential to approach much higher accuracy if the right variables are optimized; however, such optimization requires time and computing power. In future work, the dataset size will be increased along with the batch size and the number of epochs, and both underfitting and overfitting of the data will be addressed.

Conclusion
The use of deep learning for medical diagnosis from dermoscopic images has been identified as a future dominant technology because of its potential for high accuracy and for reducing the time and cost of diagnosis. With a large number of medical image datasets available for public use, it is essential to develop tools that can fully exploit these resources. The program was developed in MATLAB, and one of its special features is a graphical user interface that makes it easy to use. The deep learning program developed in this study is capable of the following:
- it records results on highly challenging datasets;
- it can be retrained on a new dataset;
- it is easy to use.
The program promises to provide a viable alternative for skin cancer classification. The quality of the dataset determines the training result. From the experiments carried out, the maximum accuracy achieved was 74%, the minimum was 68%, and the sensitivity reached 70%. Further development of this program should bring its accuracy closer to 100%, so that it can serve as the basis for professional skin cancer classification software in the medical sector.
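To illustrate the program's second mode of operation described above, the sketch below loads a previously saved network and classifies a single dermatoscopic image. It is a hypothetical usage example: the file names trainedSkinNet.mat and lesion.jpg, and the variable trainedNet, are assumptions carried over from the earlier sketches.

```matlab
% Classification mode: reuse the saved network to label a new image.
load('trainedSkinNet.mat', 'trainedNet');   % network saved after training

img = imread('lesion.jpg');                 % hypothetical input image
img = imresize(img, [227 227]);             % match AlexNet's 227 x 227 input size

[label, scores] = classify(trainedNet, img);
fprintf('Predicted class: %s (score %.1f%%)\n', char(label), 100 * max(scores));
```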
References
Abbas, Q., Emre Celebi, M., Garcia, I. F., & Ahmad, W. (2013). Melanoma recognition framework based on expert definition of ABCD for dermoscopic images. Skin Research and Technology, 19(1), e93–e102. https://doi.org/10.1111/j.1600-0846.2012.00614.x
Abdel-Zaher, A. M., & Eldeib, A. M. (2016). Breast cancer classification using deep belief networks. Expert Systems with Applications, 46, 139–144.
Barata, C., Ruela, M., Francisco, M., Mendonça, T., & Marques, J. S. (2014). Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Systems Journal, 8(3), 965–979.
Kathirvel, C. T. R. (2016). Classification of diabetic retinopathy using deep learning architecture. International Journal of Engineering Research & Technology, 5(6).
Cavalcanti, P. G., Scharcanski, J., & Baranoski, G. V. G. (2013). A two-step approach to discrimination